Unverified commit 69ca3241, authored by Max Gautier (parent commit: 56e41f06)

Clean-up references to inventory_builder in docs
@@ -52,14 +52,6 @@ repos:
   - repo: local
     hooks:
-      - id: tox-inventory-builder
-        name: tox-inventory-builder
-        entry: bash -c "cd contrib/inventory_builder && tox"
-        language: python
-        pass_filenames: false
-        additional_dependencies:
-          - tox==4.15.0
-
       - id: check-readme-versions
         name: check-readme-versions
         entry: tests/scripts/check_readme_versions.sh
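With the tox-inventory-builder hook removed, the remaining local hooks run as before. Assuming pre-commit is installed, a quick way to verify the trimmed config still works (standard pre-commit commands, nothing Kubespray-specific):

```ShellSession
pre-commit run --all-files               # run every remaining hook
pre-commit run check-readme-versions     # or run a single hook by its id
```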
@@ -26,9 +26,7 @@ then run the following steps:
 # Copy ``inventory/sample`` as ``inventory/mycluster``
 cp -rfp inventory/sample inventory/mycluster

-# Update Ansible inventory file with inventory builder
-declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
-CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+# Update Ansible inventory file with the IPs of your nodes

 # Review and change parameters under ``inventory/mycluster/group_vars``
 cat inventory/mycluster/group_vars/all/all.yml
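Since the generator step is gone, the inventory is now written by hand. For illustration, a minimal hand-written `inventory/mycluster/hosts.yaml` for the three example IPs above might look like the following sketch; the hostnames and group layout are hypothetical, and the sample inventory remains the authoritative template:

```yaml
# Hypothetical minimal inventory -- adapt names, IPs and groups to your nodes
all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
  children:
    kube_control_plane:
      hosts:
        node1:
    etcd:
      # a single etcd member is fine for testing; use 3+ for failover
      hosts:
        node1:
    kube_node:
      hosts:
        node2:
        node3:
```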
@@ -34,91 +34,23 @@ Based on the table below and the available python version for your ansible host
 |-----------------|----------------|
 | >= 2.16.4       | 3.10-3.12      |

-## Inventory
-
-The inventory is composed of 3 groups:
-
-* **kube_node**: list of kubernetes nodes where the pods will run.
-* **kube_control_plane**: list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
-* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.
-
-When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
-If you want a standalone etcd cluster, make sure those groups do not intersect.
-If you want a server to act both as control-plane and node, it must be defined
-in both the _kube_control_plane_ and _kube_node_ groups. If you want a standalone and
-unschedulable control plane, the server must be defined only in _kube_control_plane_ and
-not in _kube_node_.
-
-There are also two special groups:
-
-* **calico_rr**: explained in [advanced Calico networking cases](/docs/CNI/calico.md)
-* **bastion**: configure a bastion host if your nodes are not directly reachable
-
-Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
-It is used internally and for the purpose of defining whole-cluster variables (`<inventory>/group_vars/k8s_cluster/*.yml`).
-
-Below is a complete inventory example:
-
-```ini
-## Configure 'ip' variable to bind kubernetes services on a
-## different ip than the default iface
-node1 ansible_host=95.54.0.12 ip=10.3.0.1
-node2 ansible_host=95.54.0.13 ip=10.3.0.2
-node3 ansible_host=95.54.0.14 ip=10.3.0.3
-node4 ansible_host=95.54.0.15 ip=10.3.0.4
-node5 ansible_host=95.54.0.16 ip=10.3.0.5
-node6 ansible_host=95.54.0.17 ip=10.3.0.6
-
-[kube_control_plane]
-node1
-node2
-
-[etcd]
-node1
-node2
-node3
-
-[kube_node]
-node2
-node3
-node4
-node5
-node6
-```
-
-## Group vars and overriding variables precedence
+## Customize Ansible vars

-The group variables controlling the main deployment options are located in the directory ``inventory/sample/group_vars``.
-Optional variables are located in `inventory/sample/group_vars/all.yml`.
-Mandatory variables that are common to at least one role (or a node group) can be found in
-`inventory/sample/group_vars/k8s_cluster.yml`.
-
-There are also role vars for the docker, kubernetes preinstall and control plane roles.
-According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
-those cannot be overridden from the group vars. In order to override them, one should use
-the `-e` runtime flag (the simplest way) or the other layers described in the docs.
-
-Kubespray uses only a few layers to override things (or expects them to
-be overridden for roles):
+Kubespray expects users to use one of the following variable sources for settings and customization:

 | Layer | Comment |
 |----------------------------------------|------------------------------------------------------------------------------|
-| **role defaults** | provides the best UX to override things for Kubespray deployments |
-| inventory vars | Unused |
-| **inventory group_vars** | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things |
-| inventory host_vars | Unused |
-| playbook group_vars | Unused |
-| playbook host_vars | Unused |
-| **host facts** | Kubespray overrides for internal roles' logic, like state flags |
-| play vars | Unused |
-| play vars_prompt | Unused |
-| play vars_files | Unused |
-| registered vars | Unused |
-| set_facts | Kubespray overrides those in some places |
-| **role and include vars** | Provides bad UX to override things! Use extra vars to enforce |
-| block vars (only for tasks in block) | Kubespray overrides for internal roles' logic |
-| task vars (only for the task) | Unused for roles; used only for helper scripts |
+| inventory vars | |
+| - **inventory group_vars** | most used |
+| - inventory host_vars | host-specific var overrides; group_vars is usually more practical |
 | **extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml`` |
+
+> [!IMPORTANT]
+> Extra vars are best used to override Kubespray internal variables, for instance those in `roles/*/vars/`.
+> Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the Kubespray
+> interface. Thus they can change, disappear, or break things unexpectedly.

 ## Ansible tags

 The following tags are defined in playbooks:
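To make the new table concrete, here is a sketch of an extra-vars file as named in the table (`foo.yml`); the variables shown are real Kubespray settings but the values are illustrative, and the sample group_vars remain the reference for names and defaults:

```yaml
# foo.yml -- passed with: ansible-playbook -e @foo.yml ... cluster.yml
# Illustrative values only; consult inventory/sample/group_vars for defaults.
kube_network_plugin: calico      # choice of CNI plugin
cluster_name: cluster.local      # cluster DNS domain
```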
@@ -6,28 +6,24 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
 an example inventory located
 [here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).

-You can use an
-[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
-to create or modify an Ansible inventory. Currently, it is limited in
-functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
-support creating inventory files for large clusters as well. It now supports
-separating the etcd and Kubernetes control plane roles from the node role if the size exceeds a
-certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
+## Building your own inventory

-Example inventory generator usage:
+Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
+[example inventory](/inventory/sample/inventory.ini),
+[Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
+and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).

 ```ShellSession
-cp -r inventory/sample inventory/mycluster
-declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
-CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+<your-favorite-editor> inventory/mycluster/inventory.ini
+
+# Review and change parameters under ``inventory/mycluster/group_vars``
+<your-favorite-editor> inventory/mycluster/group_vars/all.yml                 # for every node, including etcd
+<your-favorite-editor> inventory/mycluster/group_vars/k8s_cluster.yml         # for every node in the cluster (not etcd when it is separate)
+<your-favorite-editor> inventory/mycluster/group_vars/kube_control_plane.yml  # for the control plane
+<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml           # for worker nodes
 ```
-
-Then use `inventory/mycluster/hosts.yml` as inventory file.
-
-## Starting custom deployment
-
-Once you have an inventory, you may want to customize deployment data vars
-and start the deployment:

 **IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:
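As a concrete instance of the per-scope editing listed above, a sketch of a `k8s_cluster.yml` override; the variable names appear in the sample group_vars, while the values are just examples:

```yaml
# inventory/mycluster/group_vars/k8s_cluster.yml -- applies to every cluster node
kube_network_plugin: cilium          # e.g. swap the CNI plugin
kube_service_addresses: 10.233.0.0/18   # internal service network
kube_pods_subnet: 10.233.64.0/18        # pod network
```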
@@ -212,17 +212,15 @@ Copy ``inventory/sample`` as ``inventory/mycluster``:
 cp -rfp inventory/sample inventory/mycluster
 ```

-Update Ansible inventory file with inventory builder:
+Update the sample Ansible inventory file with the IPs reported by gcloud:

 ```ShellSession
-declare -a IPS=($(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way" --format="value(EXTERNAL_IP)" | tr '\n' ' '))
-CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"
 ```

-Open the generated `inventory/mycluster/hosts.yaml` file and adjust it so
-that controller-0, controller-1 and controller-2 are control plane nodes and
-worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the respective local VPC IP and
-remove the `access_ip`.
+Open the `inventory/mycluster/inventory.ini` file and edit it so
+that controller-0, controller-1 and controller-2 are in the `kube_control_plane` group and
+worker-0, worker-1 and worker-2 are in the `kube_node` group. Also set the `ip` variable of each node to its local VPC IP.

 The main configuration for the cluster is stored in
 `inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
@@ -242,7 +240,7 @@ the kubernetes cluster, just change the 'false' to 'true' for
 Now we will deploy the configuration:

 ```ShellSession
-ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
+ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
 ```

 Ansible will now execute the playbook, this can take up to 20 minutes.
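Pointing `-i` at `inventory/mycluster/` works because Ansible accepts a directory as an inventory source and merges the inventory files it finds there, so the command no longer depends on a specific `hosts.yaml`. A quick sanity check of what Ansible sees (standard ansible-inventory usage, not Kubespray-specific):

```ShellSession
ansible-inventory -i inventory/mycluster/ --graph   # show the merged group tree
```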
@@ -596,7 +594,7 @@ If you want to keep the VMs and just remove the cluster state, you can simply
 run another Ansible playbook:

 ```ShellSession
-ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
+ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
 ```

 Resetting the cluster to the VMs original state usually takes about a couple