diff --git a/docs/ansible.md b/docs/ansible.md
index 0440eccf240f77eceb1e4718770ee058a485bd64..d8ca5a65755ba07caf50009bd5734718c616df89 100644
--- a/docs/ansible.md
+++ b/docs/ansible.md
@@ -20,7 +20,7 @@ When _kube_node_ contains _etcd_, you define your etcd cluster to be as well sch
 If you want it a standalone, make sure those groups do not intersect.
 If you want the server to act both as control-plane and node, the server must be defined
 on both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
-unschedulable master, the server must be defined only in the _kube_control_plane_ and
+unschedulable control plane, the server must be defined only in the _kube_control_plane_ and
 not _kube_node_.
 
 There are also two special groups:
@@ -67,7 +67,7 @@ The group variables to control main deployment options are located in the direct
 Optional variables are located in the `inventory/sample/group_vars/all.yml`.
 Mandatory variables that are common for at least one role (or a node group) can be found in the
 `inventory/sample/group_vars/k8s_cluster.yml`.
-There are also role vars for docker, kubernetes preinstall and master roles.
+There are also role vars for the docker, kubernetes preinstall, and control plane roles.
 According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
 those cannot be overridden from the group vars. In order to override, one should use
 the `-e` runtime flags (most simple way) or other layers described in the docs.
diff --git a/docs/getting-started.md b/docs/getting-started.md
index ed90b88fb3037f30c9bc0b6bb8cc6ae76357e378..d72d25096e83a601612173b7fa50de15ec978a44 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -11,7 +11,7 @@ You can use an
 to create or modify an Ansible inventory. Currently, it is limited in functionality
 and is only used for configuring a basic Kubespray cluster inventory, but it does
 support creating inventory file for large clusters as well. It now supports
-separated ETCD and Kubernetes master roles from node role if the size exceeds a
+separating the etcd and Kubernetes control plane roles from the node role if the size exceeds a
 certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more
 information. Example inventory generator usage:
@@ -40,7 +40,7 @@ See more details in the [ansible guide](/docs/ansible.md).
 
 ### Adding nodes
 
-You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
+You may want to add worker, control plane, or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control plane nodes. This is especially helpful when doing something like autoscaling your clusters.
 
 - Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html)).
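+
+  For instance, a minimal sketch of declaring a new worker in `inventory/mycluster/hosts.yml` (the hostname `node5` and its address are hypothetical):
+
+  ```yaml
+  all:
+    hosts:
+      node5:
+        ansible_host: 192.168.0.15   # hypothetical address of the new worker
+    children:
+      kube_node:
+        hosts:
+          node5:                     # membership in kube_node marks it as a worker
+  ```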
 - Run the ansible-playbook command, substituting `cluster.yml` for `scale.yml`:
@@ -52,7 +52,7 @@ ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
 
 ### Remove nodes
 
-You may want to remove **master**, **worker**, or **etcd** nodes from your
+You may want to remove **control plane**, **worker**, or **etcd** nodes from your
 existing cluster. This can be done by re-running the `remove-node.yml` playbook.
 First, all specified nodes will be drained, then stop some kubernetes services and
 delete some certificates,
@@ -108,11 +108,11 @@ Accessing through Ingress is highly recommended. For proxy access, please note t
 For token authentication, guide to create Service Account is provided in [dashboard sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md) doc. Still take care of default namespace.
 
-Access can also by achieved via ssh tunnel on a master :
+Access can also be achieved via an SSH tunnel on a control plane node:
 
 ```bash
-# localhost:8081 will be sent to master-1's own localhost:8081
-ssh -L8001:localhost:8001 user@master-1
+# localhost:8001 will be forwarded to control-plane-1's own localhost:8001
+ssh -L8001:localhost:8001 user@control-plane-1
 sudo -i
 kubectl proxy
 ```
diff --git a/docs/kubernetes-reliability.md b/docs/kubernetes-reliability.md
index 7daccab9a368e3ec3382729272a366dc8a977ad9..149ec845cee98cc9504c58b6a513dc89fee60342 100644
--- a/docs/kubernetes-reliability.md
+++ b/docs/kubernetes-reliability.md
@@ -21,7 +21,7 @@ By default the normal behavior looks like:
 > Kubernetes controller manager and Kubelet work asynchronously. It means that
 > the delay may include any network latency, API Server latency, etcd latency,
-> latency caused by load on one's master nodes and so on. So if
+> latency caused by load on one's control plane nodes and so on. So if
 > `--node-status-update-frequency` is set to 5s in reality it may appear in
 > etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
 > nodes.
diff --git a/docs/large-deployments.md b/docs/large-deployments.md
index d412010293b453b0ba16d1bcc1541a969e994c49..57b1f7b13e7ca7978677127c934f7f55ad1a06f7 100644
--- a/docs/large-deployments.md
+++ b/docs/large-deployments.md
@@ -12,7 +12,7 @@ For a large scaled deployments, consider the following configuration changes:
   See download modes for details.
 
 * Adjust the `retry_stagger` global var as appropriate. It should provide sane
-  load on a delegate (the first K8s master node) then retrying failed
+  load on a delegate (the first K8s control plane node) when retrying failed
   push or download operations.
 
 * Tune parameters for DNS related applications
diff --git a/docs/nodes.md b/docs/nodes.md
index 8bf58f9ed96e89e7d1e5fd94f94194a6ba9a98bb..595e3aff4d3c0fc4346000f8066859c968235682 100644
--- a/docs/nodes.md
+++ b/docs/nodes.md
@@ -6,9 +6,9 @@ Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/i
 
 Currently you can't remove the first node in your kube_control_plane and etcd-master list. If you still want to remove this node you have to:
 
-### 1) Change order of current masters
+### 1) Change order of current control plane nodes
 
-Modify the order of your master list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
+Modify the order of your control plane list by moving the first entry to any other position. E.g. if you want to remove `node-1` of the following example:
 
 ```yaml
 children:
@@ -71,13 +71,13 @@ Before using `--limit` run playbook `facts.yml` without the limit to refresh fac
 
 With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
 If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars: `-e node=NODE_NAME -e reset_nodes=false`.
-Use this flag even when you remove other types of nodes like a master or etcd nodes.
+Use this flag even when you remove other types of nodes, such as control plane or etcd nodes.
 
 ### 4) Remove the node from the inventory
 
 That's it.
 
-## Adding/replacing a master node
+## Adding/replacing a control plane node
 
 ### 1) Run `cluster.yml`
 
@@ -92,7 +92,7 @@ In all hosts, restart nginx-proxy pod. This pod is a local proxy for the apiserv
 docker ps | grep k8s_nginx-proxy_nginx-proxy | awk '{print $1}' | xargs docker restart
 ```
 
-### 3) Remove old master nodes
+### 3) Remove old control plane nodes
 
 With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
 If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars.
@@ -104,7 +104,7 @@ You need to make sure there are always an odd number of etcd nodes in the cluste
 ### 1) Add the new node running cluster.yml
 
 Update the inventory and run `cluster.yml` passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`.
-If the node you want to add as an etcd node is already a worker or master node in your cluster, you have to remove him first using `remove-node.yml`.
+If the node you want to add as an etcd node is already a worker or control plane node in your cluster, you have to remove it first using `remove-node.yml`.
 
 Run `upgrade-cluster.yml` also passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`. This is necessary to update all etcd configuration in the cluster.
 
@@ -117,7 +117,7 @@ Otherwise the etcd cluster might still be processing the first join and fail on
 subsequent nodes.
 
 ### 2) Add the new node to apiserver config
 
-In every master node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure the new etcd nodes are present in the apiserver command line parameter `--etcd-servers=...`.
+On every control plane node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure the new etcd nodes are present in the apiserver command line parameter `--etcd-servers=...`.
 
 ## Removing an etcd node
 
@@ -136,7 +136,7 @@ Run `cluster.yml` to regenerate the configuration files on all remaining nodes.
 
 ### 4) Remove the old etcd node from apiserver config
 
-In every master node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure only active etcd nodes are still present in the apiserver command line parameter `--etcd-servers=...`.
+On every control plane node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure only active etcd nodes are still present in the apiserver command line parameter `--etcd-servers=...`.
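+
+As an illustration only (the endpoint addresses below are hypothetical), the `--etcd-servers` entry in the manifest's `command` list takes this shape, with the removed member's URL dropped:
+
+```yaml
+    # one entry of spec.containers[].command in kube-apiserver.yaml;
+    # list only the endpoints of etcd members that remain in the cluster
+    - --etcd-servers=https://10.0.0.2:2379,https://10.0.0.3:2379
+```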
### 5) Shutdown the old instance
diff --git a/docs/proxy.md b/docs/proxy.md
index b9ecf03bc2a9a91b6e20dd5cc7690e8ea696bdec..9c72019d12782faf8feb50785c54adf0d550e72c 100644
--- a/docs/proxy.md
+++ b/docs/proxy.md
@@ -18,6 +18,6 @@ If you set http and https proxy, all nodes and loadbalancer will be excluded fro
 ## Exclude workers from no_proxy
 
 Since workers are included in the no_proxy variable, by default, docker engine will be restarted on all nodes (all
-pods will restart) when adding or removing workers. To override this behaviour by only including master nodes in the
+pods will restart) when adding or removing workers. To override this behaviour by including only control plane nodes in the
 no_proxy variable, set: `no_proxy_exclude_workers: true`
diff --git a/docs/recover-control-plane.md b/docs/recover-control-plane.md
index b454310f0a3f88ef8f564eabb2d3ae3ad0cd9d55..0b80da271dc4e3f10f44cd69990e4800f6eab6a2 100644
--- a/docs/recover-control-plane.md
+++ b/docs/recover-control-plane.md
@@ -20,7 +20,7 @@ __Note that you need at least one functional node to be able to recover using th
 ## Runbook
 
 * Move any broken etcd nodes into the "broken\_etcd" group, make sure the "etcd\_member\_name" variable is set.
-* Move any broken master nodes into the "broken\_kube\_control\_plane" group.
+* Move any broken control plane nodes into the "broken\_kube\_control\_plane" group.
 
 Then run the playbook with ```--limit etcd,kube_control_plane``` and increase the number of ETCD retries by setting ```-e etcd_retries=10``` or something even larger. The amount of retries required is difficult to predict.
diff --git a/docs/roadmap.md b/docs/roadmap.md
index 59a7ec80b8826ac086b0b303840f0df62cf1e349..9e8f9ac5e40729465a262fb38839305af9cb6a1e 100644
--- a/docs/roadmap.md
+++ b/docs/roadmap.md
@@ -28,7 +28,7 @@
 - [x] Run kubernetes e2e tests
 - [ ] Test idempotency on single OS but for all network plugins/container engines
 - [ ] single test on AWS per day
-- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
+- [ ] test scale up cluster: +1 etcd, +1 control plane, +1 node
 - [x] Reorganize CI test vars into group var files
 
 ## Lifecycle
diff --git a/docs/test_cases.md b/docs/test_cases.md
index 738b7b1969908b2e08d4bea8156416d7998bb1d4..1fdce682c45646ac3c275e9a82f8698507567f03 100644
--- a/docs/test_cases.md
+++ b/docs/test_cases.md
@@ -8,7 +8,7 @@ and the `etcd` group merged with the `kube_control_plane`.
 `separate` layout is when there is only node of each type, which includes
 a kube_control_plane, kube_node, and etcd cluster member.
 
-`ha` layout consists of two etcd nodes, two masters and a single worker node,
+`ha` layout consists of two etcd nodes, two control plane nodes, and a single worker node,
 with role intersection.
 
 `scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts
diff --git a/docs/vars.md b/docs/vars.md
index be5c7d93459ee999dc5efc98e98a3eb231e51d7b..ab7aa4f1a5a6a76418f3ee89ac0b46afbb704aea 100644
--- a/docs/vars.md
+++ b/docs/vars.md
@@ -180,7 +180,7 @@ node_taints:
 For all kube components, custom flags can be passed in. This allows for edge cases where users need
 changes to the default deployment that may not be applicable to all deployments. Extra flags for
 the kubelet can be specified using these variables,
-in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubelet YAML config file. The `kubelet_node_config_extra_args` apply kubelet settings only to nodes and not masters. Example:
+in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubelet YAML config file. The `kubelet_node_config_extra_args` apply kubelet settings only to worker nodes and not to control plane nodes. Example:
 
 ```yml
 kubelet_config_extra_args:
@@ -202,7 +202,7 @@ Previously, the same parameters could be passed as flags to kubelet binary with
 * *kubelet_custom_flags*
 * *kubelet_node_custom_flags*
 
-The `kubelet_node_custom_flags` apply kubelet settings only to nodes and not masters. Example:
+The `kubelet_node_custom_flags` apply kubelet settings only to worker nodes and not to control plane nodes. Example:
 
 ```yml
 kubelet_custom_flags: