diff --git a/README.md b/README.md
index a23fc8e39970feaa02669626138f642afe85ba20..b5c72e29aec18639546290c6293238c7f8f6ab9b 100644
--- a/README.md
+++ b/README.md
@@ -19,68 +19,7 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.
 
 #### Usage
 
-Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
-then run the following steps:
-
-```ShellSession
-# Copy ``inventory/sample`` as ``inventory/mycluster``
-cp -rfp inventory/sample inventory/mycluster
-
-# Update Ansible inventory file with the ip of your nodes
-
-# Review and change parameters under ``inventory/mycluster/group_vars``
-cat inventory/mycluster/group_vars/all/all.yml
-cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
-
-# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
-# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
-# uninstalling old packages and interacting with various systemd daemons.
-# Without --become the playbook will fail to run!
-# And be mind it will remove the current kubernetes cluster (if it's running)!
-ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root reset.yml
-
-# Deploy Kubespray with Ansible Playbook - run the playbook as root
-# The option `--become` is required, as for example writing SSL keys in /etc/,
-# installing packages and interacting with various systemd daemons.
-# Without --become the playbook will fail to run!
-ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml
-```
-
-Note: When Ansible is already installed via system packages on the control node,
-Python packages installed via `sudo pip install -r requirements.txt` will go to
-a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
-Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
-Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
-
-```raw
-ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
-```
-
-This likely indicates that a task depends on a module present in ``requirements.txt``.
-
-One way of addressing this is to uninstall the system Ansible package then
-reinstall Ansible via ``pip``, but this not always possible and one must
-take care regarding package versions.
-A workaround consists of setting the `ANSIBLE_LIBRARY`
-and `ANSIBLE_MODULE_UTILS` environment variables respectively to
-the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
-installation location, which is the ``Location`` shown by running
-`pip show [package]` before executing `ansible-playbook`.
-
-A simple way to ensure you get all the correct version of Ansible is to use
-the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
-You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
-to access the inventory and SSH key in the container, like this:
-
-```ShellSession
-git checkout v2.26.0
-docker pull quay.io/kubespray/kubespray:v2.26.0
-docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
-  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.26.0 bash
-# Inside the container you may now run the kubespray playbooks:
-ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
-```
+See [Getting started](/docs/getting_started/getting-started.md).
 
 #### Collection
 
diff --git a/docs/_sidebar.md b/docs/_sidebar.md
index d30ec2a37c7595c891319c49bc15d580a453b695..c11a646df838ff98ad2afff28b98a7141b7f793f 100644
--- a/docs/_sidebar.md
+++ b/docs/_sidebar.md
@@ -14,6 +14,7 @@
 * Ansible
   * [Ansible](/docs/ansible/ansible.md)
   * [Ansible Collection](/docs/ansible/ansible_collection.md)
+  * [Inventory](/docs/ansible/inventory.md)
   * [Vars](/docs/ansible/vars.md)
 * Cloud Controllers
   * [Openstack](/docs/cloud_controllers/openstack.md)
diff --git a/docs/ansible/ansible.md b/docs/ansible/ansible.md
index 90b3ca0efe902b5cf97da9010caaa51e763b3ecd..dd42888025162cc011231b9a017b9b778c15879f 100644
--- a/docs/ansible/ansible.md
+++ b/docs/ansible/ansible.md
@@ -34,7 +34,6 @@ Based on the table below and the available python version for your ansible host
 |-----------------|----------------|
 | >= 2.16.4       | 3.10-3.12      |
 
-
 ## Customize Ansible vars
 
 Kubespray expects users to use one of the following variables sources for settings and customization:
@@ -48,7 +47,7 @@ Kubespray expects users to use one of the following variables sources for settin
 
 [!IMPORTANT]
 Extra vars are best used to override kubespray internal variables, for instances, roles/vars/.
-Those vars are usually **not expected** (by Kubespray developpers) to be modified by end users, and not part of Kubespray
+Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and not part of Kubespray
 interface. Thus they can change, disappear, or break stuff unexpectedly.
 
 ## Ansible tags
@@ -189,42 +188,32 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
     --tags download --skip-tags upload,upgrade
 ```
 
-Note: use `--tags` and `--skip-tags` wise and only if you're 100% sure what you're doing.
-
-## Bastion host
-
-If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
-you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
-simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
-bastion host.
-
-```ShellSession
-[bastion]
-bastion ansible_host=x.x.x.x
-```
-
-For more information about Ansible and bastion hosts, read
-[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
+Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
 
 ## Mitogen
 
 Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
 
-## Beyond ansible 2.9
+## Troubleshooting Ansible issues
 
-Ansible project has decided, in order to ease their maintenance burden, to split between
-two projects which are now joined under the Ansible umbrella.
+Having the wrong version of Ansible, Ansible collections, or Python dependencies can cause issues.
+In particular, Kubespray ships custom modules which Ansible needs to find; for those, set [ANSIBLE_LIBRARY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection):
 
-Ansible-base (2.10.x branch) will contain just the ansible language implementation while
-ansible modules that were previously bundled into a single repository will be part of the
-ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
-that explains in detail the need and the evolution plan.
+```ShellSession
+export ANSIBLE_LIBRARY=<kubespray_dir>/library
+```
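+
+You can check that Ansible picked the setting up with `ansible-config dump`
+(the `ANSIBLE_LIBRARY` environment variable maps to the `DEFAULT_MODULE_PATH`
+configuration setting):
+
+```ShellSession
+# Only non-default settings are printed, so the custom module path should show up
+ansible-config dump --only-changed | grep DEFAULT_MODULE_PATH
+```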
 
-**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
-You first need to uninstall your old ansible (pre 2.10) version and install the new one.
+A simple way to ensure you get the correct version of Ansible and its dependencies is to use
+the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
+You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
+to access the inventory and SSH key in the container, like this:
 
 ```ShellSession
-pip uninstall ansible ansible-base ansible-core
-cd kubespray/
-pip install -U .
+git checkout v2.26.0
+docker pull quay.io/kubespray/kubespray:v2.26.0
+docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
+  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
+  quay.io/kubespray/kubespray:v2.26.0 bash
+# Inside the container you may now run the kubespray playbooks:
+ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
 ```
diff --git a/docs/ansible/inventory.md b/docs/ansible/inventory.md
new file mode 100644
index 0000000000000000000000000000000000000000..58ea690b2159198bad4ca97f91fd5c677cebe88a
--- /dev/null
+++ b/docs/ansible/inventory.md
@@ -0,0 +1,71 @@
+# Inventory
+
+The inventory is composed of 3 groups:
+
+* **kube_node**: list of Kubernetes nodes where the pods will run.
+* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller-manager) will run.
+* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.
+
+When hosts in _etcd_ are also in _kube_node_, the etcd servers are schedulable for
+Kubernetes workloads as well. If you want a standalone etcd cluster, make sure those
+groups do not intersect. If you want a server to act as both control plane and node,
+it must be defined in both the _kube_control_plane_ and _kube_node_ groups. If you
+want a standalone and unschedulable control plane, define the server only in
+_kube_control_plane_ and not in _kube_node_.
+
+There are also two special groups:
+
+* **calico_rr**: explained in [advanced Calico networking cases](/docs/CNI/calico.md)
+* **bastion**: configure a bastion host if your nodes are not directly reachable
+
+Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
+It is used internally and to define variables for the whole cluster (`<inventory>/group_vars/k8s_cluster/*.yml`).
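+
+For instance, group variables in the sample inventory are laid out like this (a
+trimmed sketch; the actual sample contains more files):
+
+```ShellSession
+inventory/sample/group_vars/
+├── all/all.yml                    # applies to every host
+├── etcd.yml                       # applies to the etcd group only
+└── k8s_cluster/k8s-cluster.yml    # applies to the whole cluster
+```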
+
+Below is a complete inventory example:
+
+```ini
+## Configure 'ip' variable to bind kubernetes services on a
+## different ip than the default iface
+node1 ansible_host=95.54.0.12 ip=10.3.0.1
+node2 ansible_host=95.54.0.13 ip=10.3.0.2
+node3 ansible_host=95.54.0.14 ip=10.3.0.3
+node4 ansible_host=95.54.0.15 ip=10.3.0.4
+node5 ansible_host=95.54.0.16 ip=10.3.0.5
+node6 ansible_host=95.54.0.17 ip=10.3.0.6
+
+[kube_control_plane]
+node1
+node2
+
+[etcd]
+node1
+node2
+node3
+
+[kube_node]
+node2
+node3
+node4
+node5
+node6
+```
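+
+Ansible also accepts YAML inventories; the first three nodes of the INI example
+above would look like this (a sketch, abridged):
+
+```yaml
+all:
+  hosts:
+    node1:
+      ansible_host: 95.54.0.12
+      ip: 10.3.0.1
+    node2:
+      ansible_host: 95.54.0.13
+      ip: 10.3.0.2
+    node3:
+      ansible_host: 95.54.0.14
+      ip: 10.3.0.3
+  children:
+    kube_control_plane:
+      hosts:
+        node1:
+        node2:
+    etcd:
+      hosts:
+        node1:
+        node2:
+        node3:
+    kube_node:
+      hosts:
+        node2:
+        node3:
+```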
+
+## Inventory customization
+
+See [Customize Ansible vars](/docs/ansible/ansible.md#customize-ansible-vars)
+and [Ansible documentation on group_vars](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables)
+
+## Bastion host
+
+If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
+you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
+simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
+bastion host.
+
+```ini
+[bastion]
+bastion ansible_host=x.x.x.x
+```
+
+For more information about Ansible and bastion hosts, read
+[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
diff --git a/docs/getting_started/getting-started.md b/docs/getting_started/getting-started.md
index 2c44a51d8cca99db9ac0522cdb75efa1cd3b9a52..77fdf244f14f4a4246384f08c618f80faa6a7310 100644
--- a/docs/getting_started/getting-started.md
+++ b/docs/getting_started/getting-started.md
@@ -1,10 +1,8 @@
 # Getting started
 
-## Building your own inventory
+## Install Ansible
 
-Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
-an example inventory located
-[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible).
 
 ## Building your own inventory
 
@@ -13,9 +11,7 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
 and [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
 and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).
 
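+If you don't have an inventory yet, the bundled sample is a convenient starting point:
+
+```ShellSession
+# Copy ``inventory/sample`` as ``inventory/mycluster``
+cp -rfp inventory/sample inventory/mycluster
+```
+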
-
 ```ShellSession
-
 <your-favorite-editor> inventory/mycluster/inventory.ini
 
 # Review and change parameters under ``inventory/mycluster/group_vars``
@@ -25,15 +21,13 @@ and [details on the inventory structure expected by Kubespray](/docs/ansible/inv
 <your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml # for worker nodes
 ```
 
-**IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:
+## Installing the cluster
 
 ```ShellSession
-ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
+ansible-playbook -i inventory/mycluster/ cluster.yml -b -v \
   --private-key=~/.ssh/private_key
 ```
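+
+Once the playbook finishes, you can check the cluster from a control plane
+node, where Kubespray installs `kubectl` and the admin kubeconfig
+(`/etc/kubernetes/admin.conf`):
+
+```ShellSession
+# Run on a kube_control_plane host; all nodes should report Ready
+sudo kubectl get nodes
+```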
 
-See more details in the [ansible guide](/docs/ansible/ansible.md).
-
 ### Adding nodes
 
 You may want to add worker, control plane or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control planes. This is especially helpful when doing something like autoscaling your clusters.