Unverified Commit db9852e8 authored by Max Gautier

docs: reorganize "getting started" + clean up old docs

Our README is currently pretty cluttered:
- Part of the README duplicates docs/getting_started/getting-started.md
-> Remove duplicates and extract useful info into the getting-started.md

- General info on Ansible environment troubleshooting
-> remove most of it as it's not specific to Kubespray; move the rest to
docs/ansible/ansible.md
-> split the inventory-related parts of ansible.md into their own file. This
should host documentation on how to manage Kubespray inventories in the
future.

ansible.md:
- remove the list of "Unused" variables, as:
  1. It's not accurate
  2. What matters is where users should put their variables
parent 6b14be66
@@ -19,68 +19,7 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.

#### Usage

See [Getting started](/docs/getting_started/getting-started.md)

Install Ansible according to the [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible),
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with the ip of your nodes
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
# And be aware that it will remove the current Kubernetes cluster (if it's running)!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control node,
Python packages installed via `sudo pip install -r requirements.txt` will go to
a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
This likely indicates that a task depends on a module present in ``requirements.txt``.
One way of addressing this is to uninstall the system Ansible package, then
reinstall Ansible via ``pip``, but this is not always possible, and one must
take care regarding package versions.
A workaround consists of setting the `ANSIBLE_LIBRARY`
and `ANSIBLE_MODULE_UTILS` environment variables respectively to
the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
installation location, which is the ``Location`` shown by running
`pip show [package]` before executing `ansible-playbook`.
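For example, a minimal sketch of that workaround; the ``Location`` path below is illustrative, so use whatever `pip show ansible` reports on your system:

```ShellSession
# Find where pip installed Ansible (the ``Location`` field).
pip show ansible | grep Location
# Suppose it reports /usr/local/lib/python2.7/dist-packages; then point
# Ansible at the pip-installed modules before running the playbook:
export ANSIBLE_LIBRARY=/usr/local/lib/python2.7/dist-packages/ansible/modules
export ANSIBLE_MODULE_UTILS=/usr/local/lib/python2.7/dist-packages/ansible/module_utils
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```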
#### Collection
@@ -14,6 +14,7 @@

* Ansible
  * [Ansible](/docs/ansible/ansible.md)
  * [Ansible Collection](/docs/ansible/ansible_collection.md)
  * [Inventory](/docs/ansible/inventory.md)
  * [Vars](/docs/ansible/vars.md)
* Cloud Controllers
  * [Openstack](/docs/cloud_controllers/openstack.md)
@@ -34,7 +34,6 @@ Based on the table below and the available python version for your ansible host

| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.16.4       | 3.10-3.12      |

## Customize Ansible vars

Kubespray expects users to use one of the following variable sources for settings and customization:
@@ -48,7 +47,7 @@ Kubespray expects users to use one of the following variables sources for settin

[!IMPORTANT]
Extra vars are best used to override kubespray internal variables, for instance, roles/vars/.
Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the Kubespray
interface. Thus they can change, disappear, or break stuff unexpectedly.
## Ansible tags

@@ -189,42 +188,32 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
--tags download --skip-tags upload,upgrade
```

Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
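If you're unsure what a tag selection would do, `ansible-playbook` can list the matching tasks and available tags without touching any node (standard Ansible flags, not Kubespray-specific):

```ShellSession
# Preview the tasks a tag-filtered run would execute:
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags etcd --list-tasks
# List every tag defined in the playbook:
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --list-tags
```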
## Mitogen

Mitogen support is deprecated; please see the [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
## Beyond ansible 2.9

Ansible project has decided, in order to ease their maintenance burden, to split between
two projects which are now joined under the Ansible umbrella.

Ansible-base (2.10.x branch) will contain just the ansible language implementation while
ansible modules that were previously bundled into a single repository will be part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.

**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre 2.10) version and install the new one.

```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
```

## Troubleshooting Ansible issues

Having the wrong versions of Ansible, Ansible collections, or Python dependencies can cause issues.
In particular, Kubespray ships custom modules which Ansible needs to find, for which you should specify [ANSIBLE_LIBRARY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection):

```ShellSession
export ANSIBLE_LIBRARY=<kubespray_dir>/library
```

A simple way to ensure you get all the correct versions of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:

```ShellSession
git checkout v2.26.0
docker pull quay.io/kubespray/kubespray:v2.26.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.26.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
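When in doubt, checking the environment first is cheap; this uses plain Ansible and pip tooling, nothing Kubespray-specific:

```ShellSession
# Show which ansible-core version and python interpreter are actually in use:
ansible --version
# Verify installed python packages have no broken or missing dependencies:
python3 -m pip check
```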
# Inventory

The inventory is composed of 3 groups:

* **kube_node**: list of kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd cluster. You should have at least 3 servers for failover purposes.

When _kube_node_ contains _etcd_, the etcd servers are also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act both as control plane and node, it must be defined in
both the _kube_control_plane_ and _kube_node_ groups. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.

There are also two special groups:

* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
This is used internally and for the purpose of defining whole-cluster variables (`<inventory>/group_vars/k8s_cluster/*.yml`).

Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube_control_plane]
node1
node2
[etcd]
node1
node2
node3
[kube_node]
node2
node3
node4
node5
node6
```
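You can check how Ansible parses an inventory like the one above with the stock `ansible-inventory` command; the file path here is illustrative, so use wherever you saved your inventory:

```ShellSession
# Print the group/host tree Ansible derives from the inventory file:
ansible-inventory -i inventory/mycluster/inventory.ini --graph
# Dump all hosts and their variables as JSON:
ansible-inventory -i inventory/mycluster/inventory.ini --list
```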
## Inventory customization
See [Customize Ansible vars](/docs/ansible/ansible.md#customize-ansible-vars)
and [Ansible documentation on group_vars](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables)
## Bastion host

If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
add a line to your inventory, replacing x.x.x.x with the public IP of the bastion host.
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
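Under the hood this amounts to SSH jump-host configuration. If you need to customize it, the generic Ansible mechanism is `ansible_ssh_common_args`; the sketch below is illustrative (the user name is a placeholder, and this may be unnecessary if Kubespray's built-in bastion support already fits your setup):

```ShellSession
[k8s_cluster:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q ubuntu@x.x.x.x"'
```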
# Getting started

## Install ansible

Install Ansible according to the [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible).
## Building your own inventory

@@ -13,9 +11,7 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
and [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).
There is an example inventory located
[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).

```ShellSession
<your-favorite-editor> inventory/mycluster/inventory.ini
# Review and change parameters under ``inventory/mycluster/group_vars``
```

@@ -25,15 +21,13 @@

```ShellSession
<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml # for worker nodes
```
**IMPORTANT**: Edit my\_inventory/group\_vars/\*.yaml to override data vars.

## Installing the cluster

```ShellSession
ansible-playbook -i inventory/mycluster/ cluster.yml -b -v \
  --private-key=~/.ssh/private_key
```
See more details in the [ansible guide](/docs/ansible/ansible.md).
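Once the playbook finishes, a quick sanity check is possible if you set `kubeconfig_localhost: true` in your group vars so that an admin kubeconfig is fetched into the inventory's `artifacts/` directory (variable and path as per Kubespray defaults; double-check the docs for your version):

```ShellSession
# Requires kubeconfig_localhost: true so the admin kubeconfig is fetched locally.
kubectl --kubeconfig inventory/mycluster/artifacts/admin.conf get nodes
```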
### Adding nodes

You may want to add worker, control plane or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control planes. This is especially helpful when doing something like autoscaling your clusters.
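As a sketch of the targeted approach, Kubespray ships a `scale.yml` playbook that can be combined with `--limit` so only the new node is touched (the host name is illustrative; add it to your inventory first):

```ShellSession
# Add node7 to [kube_node] in the inventory, then:
ansible-playbook -i inventory/mycluster/ scale.yml -b -v --limit node7 \
  --private-key=~/.ssh/private_key
```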