
Getting started

Building your own inventory

An Ansible inventory can be stored in three formats: YAML, JSON, or INI-like. There is an example inventory located in inventory/sample.
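
For reference, a minimal YAML inventory for a three-node cluster might look like the sketch below. The group names (kube_control_plane, kube_node, etcd, k8s_cluster) match recent Kubespray releases but have changed between versions, and the host names and IPs are placeholders; compare against inventory/sample for your checkout.

all:
  hosts:
    node1:
      ansible_host: 10.10.1.3   # address Ansible connects to
      ip: 10.10.1.3             # address used by Kubernetes components
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node: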

You can use an inventory generator to create or modify an Ansible inventory. Currently, it is limited in functionality and is only used for configuring a basic Kubespray cluster inventory, but it supports creating inventory files for large clusters as well. It separates the etcd and Kubernetes control plane roles from the node role when the cluster size exceeds a certain threshold. Run python3 contrib/inventory_builder/inventory.py help for more information.

Example inventory generator usage:

cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Then use inventory/mycluster/hosts.yml as the inventory file.
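
Before customizing anything, it can help to confirm that Ansible can reach every host in the new inventory. A quick connectivity check, assuming the same SSH private key used in the deployment commands below, looks like this:

# Verify SSH connectivity and that Ansible can run modules on all hosts
ansible -i inventory/mycluster/hosts.yml all -m ping \
  --private-key=~/.ssh/private_key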

Starting custom deployment

Once you have an inventory, you may want to customize deployment data vars and start the deployment:

IMPORTANT: Edit the files under inventory/mycluster/group_vars/ to override data vars.
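
For example, cluster-wide settings such as the Kubernetes version and the network plugin live in the k8s_cluster group vars. The file path and values below follow the sample inventory layout of recent Kubespray releases and are only illustrative; check the files shipped with your version for the supported values:

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.29.5        # example only; use a version supported by your Kubespray release
kube_network_plugin: calico  # e.g. calico, flannel, cilium

Once the variables are set, run the playbook: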

ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
  --private-key=~/.ssh/private_key

See more details in the ansible guide.
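
After the playbook finishes, you can do a quick sanity check from the first control plane node. Kubespray deploys with kubeadm, so an admin kubeconfig is available at /etc/kubernetes/admin.conf on the control plane hosts; the SSH user below is a placeholder and the IP is taken from the earlier example:

# Run on the first control plane node to confirm all nodes registered
ssh -i ~/.ssh/private_key user@10.10.1.3 \
  'sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes'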

Adding nodes

You may want to add worker, control plane or etcd nodes to your existing cluster. This can be done by re-running the cluster.yml playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control planes. This is especially helpful when doing something like autoscaling your clusters.

  • Add the new worker node to your inventory in the appropriate group (or utilize a dynamic inventory).
  • Run the ansible-playbook command, substituting scale.yml for cluster.yml:
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
  --private-key=~/.ssh/private_key
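
To avoid touching nodes that are already part of the cluster, you can limit the scale run to the new node(s) with Ansible's --limit option. The node name below is hypothetical, and some Kubespray versions expect you to refresh the facts cache for all hosts (via the facts.yml playbook) before a limited run, so check the nodes documentation for your release:

# Only run against the newly added node (name assumed to be node4)
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
  --private-key=~/.ssh/private_key --limit=node4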

Remove nodes

You may want to remove control plane, worker, or etcd nodes from your existing cluster. This can be done by running the remove-node.yml playbook. First, all specified nodes are drained; then Kubernetes services are stopped and certificates are deleted on those nodes; finally, kubectl is used to delete the node objects from the cluster. This can be combined with adding nodes, which is generally helpful when doing something like autoscaling your clusters. And if a node is not working, you can remove it and then add it back again.

Use --extra-vars "node=<nodename>,<nodename2>" to select the node(s) you want to delete.

ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
  --private-key=~/.ssh/private_key \
  --extra-vars "node=nodename,nodename2"
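
If the node you are removing is unreachable and cannot be drained or reset, the remove-node playbook in recent Kubespray releases can skip those steps via extra vars. The variable names below (reset_nodes, allow_ungraceful_removal) may differ between versions, so verify them against your checkout before relying on them:

# Remove a dead node without draining or resetting it
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
  --private-key=~/.ssh/private_key \
  --extra-vars "node=nodename reset_nodes=false allow_ungraceful_removal=true"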