Commit a9b67d58 authored by Maxime Guyot, committed by Kubernetes Prow Robot

Add markdown CI (#5380)

parent b1fbead5
...@@ -47,3 +47,11 @@ tox-inventory-builder:
    - cd contrib/inventory_builder && tox
  when: manual
  except: ['triggers', 'master']

markdownlint:
  stage: unit-tests
  image: node
  before_script:
    - npm install -g markdownlint-cli
  script:
    - markdownlint README.md docs --ignore docs/_sidebar.md
---
MD013: false
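The same check can be reproduced locally before pushing (a minimal sketch; it assumes `npm` is available and simply mirrors the job's `before_script` and `script` above):

```ShellSession
# Install the linter used by the CI job (assumes npm is on the PATH)
npm install -g markdownlint-cli
# Run the same check as the markdownlint job, ignoring docs/_sidebar.md
markdownlint README.md docs --ignore docs/_sidebar.md
```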
# Deploy a Production Ready Kubernetes Cluster

![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)

If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
...@@ -12,8 +11,7 @@ You can get your invite [here](http://slack.k8s.io/)
- Supports most popular **Linux distributions**
- **Continuous integration tests**

## Quick Start
To deploy the cluster you can use:
...@@ -21,6 +19,7 @@ To deploy the cluster you can use :
#### Usage
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
...@@ -40,12 +39,15 @@ To deploy the cluster you can use :
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:

```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```

probably pointing to a task depending on a module present in requirements.txt (i.e. "unseal vault").
One way of solving this would be to uninstall the Ansible package and then to install it via pip, but it is not always possible.
...@@ -56,16 +58,19 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` en
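A rough sketch of that workaround is shown below (the module paths are assumptions for a pip installation on Ubuntu with Python 2.7; check the real location with `pip show ansible` before exporting anything):

```ShellSession
# Point Ansible at the pip-installed module tree instead of the distro packages
# (the paths below are examples only; verify them with `pip show ansible`)
export ANSIBLE_LIBRARY=/usr/local/lib/python2.7/dist-packages/ansible/modules
export ANSIBLE_MODULE_UTILS=/usr/local/lib/python2.7/dist-packages/ansible/module_utils
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```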
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
```ShellSession
python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>

Install the necessary requirements
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
## Documents

- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
...@@ -91,8 +96,7 @@ Documents
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)

## Supported Linux Distributions

- **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy
...@@ -105,8 +109,7 @@ Supported Linux Distributions
Note: Upstart/SysV init based OS types are not supported.

## Supported Components

- Core
  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.16.3
...@@ -132,8 +135,8 @@ Supported Components
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) was updated to 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pin.
## Requirements

- **Minimum required version of Kubernetes is v1.15**
- **Ansible v2.7.8 (or newer, but [not 2.8.x](https://github.com/kubernetes-sigs/kubespray/issues/4778)) and python-netaddr is installed on the machine
  that will run Ansible commands**
...@@ -155,8 +158,7 @@ These limits are safe guarded by Kubespray. Actual requirements for your workloa
- Node
  - Memory: 1024 MB

## Network Plugins

You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)
...@@ -189,22 +191,19 @@ The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
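As an illustration only (a minimal sketch, not the project's only supported workflow), the plugin can also be switched at deploy time by overriding `kube_network_plugin` as an extra var instead of editing the group vars:

```ShellSession
# Deploy with flannel instead of the default calico
# (the sample inventory path is an assumption; point -i at your own inventory)
ansible-playbook -i inventory/sample/hosts.ini --become --become-user=root \
  -e kube_network_plugin=flannel cluster.yml
```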
## Community docs and resources

- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)

## Tools and projects on top of Kubespray

- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)

## CI Tests

[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
......
# Ansible variables

## Inventory

The inventory is composed of 3 groups:

* **kube-node** : list of kubernetes nodes where the pods will run.
...@@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain
to do that and you have it fully contained in the latter:

```ShellSession
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```

...@@ -32,7 +30,7 @@ There are also two special groups:
Below is a complete inventory example:

```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
...@@ -63,8 +61,7 @@ kube-node
kube-master
```

## Group vars and overriding variables precedence

The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
...@@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' l
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
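For example (a sketch; `foo.yml` is just the placeholder file name used in the table above), a one-off override can be supplied at the very top of that precedence chain:

```ShellSession
# Extra vars always win over group_vars; load overrides from a local YAML file
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @foo.yml
```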
## Ansible tags

The following tags are defined in playbooks:

| Tag name | Used for
...@@ -145,21 +142,25 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
field.
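For reference, the helper mentioned in that note is run from the repository root:

```ShellSession
# Regenerate the tag list from the playbooks; new tags appear with an empty "Used for" field
bash scripts/gen_tags.sh
```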
## Example commands

Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```

And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```

And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
  -e download_run_once=true -e download_localhost=true \
  --tags download --skip-tags upload,upgrade
...@@ -167,14 +168,14 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Bastion host

If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
......
# Architecture compatibility

The following table shows the impact of the CPU architecture on compatible features:

- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs
......
# Atomic host bootstrap

Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.

Note: Flannel is the only plugin that has currently been tested with atomic

## Vagrant

* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.

```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```

* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>

Then you can proceed to [cluster deployment](#run-deployment)
# AWS

To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
...@@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identic
You can now create your cluster!

## Dynamic Inventory
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
This will produce an inventory that is passed into Ansible that looks like the following:

```json
{
  "_meta": {
    "hostvars": {
...@@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the f
```

Guide:

- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:

```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
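Putting those pieces together, a complete invocation might look like the sketch below (the exact Ansible flags, such as `--become`, depend on your setup and are assumptions here):

```ShellSession
# Use the dynamic AWS inventory and expose public IPs/DNS names to Ansible
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py \
  --become --become-user=root cluster.yml
```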
## Kubespray configuration
...@@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has setup a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set instead of creating a new Security group for each ELB this security group will be used instead.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.
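These options normally live in your group vars; for a quick experiment they can also be passed as extra vars, as in the hedged sketch below (the values are illustrative only and not taken from this page):

```ShellSession
# Illustrative values only; set these in group_vars/all.yml for real deployments
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml \
  -e cloud_provider=aws \
  -e aws_kubernetes_cluster_id=my-cluster \
  -e aws_disable_strict_zone_check=true
```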
# Azure

To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
...@@ -7,38 +6,43 @@ All your instances are required to run in a resource group and a routing table h
Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)

## Parameters

Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.

All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>

After installation you have to run `azure login` to get access to your account.

### azure\_tenant\_id + azure\_subscription\_id

run `azure account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field

### azure\_location

The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`

### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `azure group list`

### azure\_vnet\_name

The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`

### azure\_subnet\_name

The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`

### azure\_security\_group\_name

The name of the network security group your instances are in, can be retrieved via `azure network nsg list`

### azure\_aad\_client\_id + azure\_aad\_client\_secret

These will have to be generated first:

- Create an Azure AD Application with:
`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
display name, identifier-uri, homepage and the password can be chosen
...@@ -51,24 +55,28 @@ This is the AppId from the last command
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.

### azure\_loadbalancer\_sku

Sku of Load Balancer and Public IP. Candidate values are: basic and standard.

### azure\_exclude\_master\_from\_standard\_lb

azure\_exclude\_master\_from\_standard\_lb excludes master nodes from `standard` load balancer.

### azure\_disable\_outbound\_snat

azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_exclude\_master\_from\_standard\_lb is `standard`.

### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
### azure\_use\_instance\_metadata

Use instance metadata service where possible

## Provisioning Azure with Resource Group Templates
......
# Calico

N.B. **Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**

If you create automated backups of etcdv2 please switch to creating etcdv3 backups, as kubernetes and calico now use etcdv3
After migration you can check `/tmp/calico_upgrade/` directory for converted items to etcdv3.
**PLEASE TEST upgrade before upgrading production cluster.**

Check if the calico-node container is running

```ShellSession
docker ps | grep calico
```
The **calicoctl** command allows you to check the status of the network workloads.

* Check the status of Calico nodes

```ShellSession
calicoctl node status
```

or for versions prior to *v1.0.0*:

```ShellSession
calicoctl status
```

* Show the configured network subnet for containers

```ShellSession
calicoctl get ippool -o wide
```

or for versions prior to *v1.0.0*:

```ShellSession
calicoctl pool show
```

* Show the workloads (IP addresses of containers and where they are located)

```ShellSession
calicoctl get workloadEndpoint -o wide
```

and

```ShellSession
calicoctl get hostEndpoint -o wide
```

or for versions prior to *v1.0.0*:

```ShellSession
calicoctl endpoint show --detail
```
## Configuration

### Optional : Define network backend

In some cases you may want to define Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. Bird is the default value.

To re-define you need to edit the inventory and add a group variable `calico_network_backend`

```yml
calico_network_backend: none
```

### Optional : Define the default pool CIDR

By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`); it starts with the default IP Pool, whose IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):

```ShellSession
calico_pool_cidr: 10.233.64.0/20
```

### Optional : BGP Peering with border routers

In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
For instance if you have a cluster spread on different locations and you want your pods to talk to each other no matter where they are located.
...@@ -84,11 +85,11 @@ The following variables need to be set:
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
you'll need to edit the inventory and add a hostvar `local_as` by node.

```ShellSession
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
### Optional : Defining BGP peers

Peers can be defined using the `peers` variable (see docs/calico_peer_example examples).
In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".
...@@ -97,16 +98,17 @@ NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining bot
Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml)

```yml
calico_advertise_cluster_ips: true
```

### Optional : Define global AS number

Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).
It defaults to "64512".
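For instance (a sketch only; the AS number below is an arbitrary example), the default can be overridden in the same group_vars file or as an extra var:

```ShellSession
# Override Calico's global AS number (default "64512"); 65000 is only an example
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e global_as_num=65000
```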
### Optional : BGP Peering with route reflectors

At large scale you may want to disable full node-to-node mesh in order to
optimize your BGP topology and improve `calico-node` containers' start times.
...@@ -114,8 +116,8 @@ optimize your BGP topology and improve `calico-node` containers' start times.
To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:

* <https://hub.docker.com/r/calico/routereflector/>
* <https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric>

You need to edit your inventory and add:
...@@ -127,7 +129,7 @@ You need to edit your inventory and add:
Here's an example of Kubespray inventory with standalone route reflectors:

```ini
[all]
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
...@@ -177,35 +179,35 @@ The inventory above will deploy the following topology assuming that calico's

![Image](figures/kubespray-calico-rr.png?raw=true)

### Optional : Define default endpoint to host action

By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see <https://github.com/projectcalico/felix/issues/660> and <https://github.com/projectcalico/calicoctl/issues/1389>). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.

To re-define default action please set the following variable in your inventory:

```yml
calico_endpoint_to_host_action: "ACCEPT"
```
## Optional : Define address on which Felix will respond to health requests

Since Calico 3.2.0, HealthCheck default behavior changed from listening on all interfaces to just listening on localhost.

To re-define health host please set the following variable in your inventory:

```yml
calico_healthhost: "0.0.0.0"
```
## Cloud providers configuration

Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.

### Optional : Ignore kernel's RPF check setting

By default the felix agent (calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:

```yml
calico_node_ignorelooserpf: true
```

...@@ -213,7 +215,7 @@ Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:

```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```
# Cinder CSI Driver

Cinder CSI driver allows you to provision volumes over an OpenStack deployment. The Kubernetes historic in-tree cloud provider is deprecated and will be removed in future versions.
...@@ -15,11 +14,11 @@ If you want to deploy the cinder provisioner used with Cinder CSI Driver, you sh
You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over OpenStack with Cinder CSI Driver enabled.

## Usage example

To check if Cinder CSI Driver works properly, see first that the cinder-csi pods are running:

```ShellSession
$ kubectl -n kube-system get pods | grep cinder
csi-cinder-controllerplugin-7f8bf99785-cpb5v 5/5 Running 0 100m
csi-cinder-nodeplugin-rm5x2 2/2 Running 0 100m
...@@ -27,7 +26,7 @@ csi-cinder-nodeplugin-rm5x2 2/2 Running 0 100m

Check the associated storage class (if you enabled persistent_volumes):

```ShellSession
$ kubectl get storageclass
NAME PROVISIONER AGE
cinder-csi cinder.csi.openstack.org 100m
...@@ -35,7 +34,7 @@ cinder-csi cinder.csi.openstack.org 100m

You can run a PVC and an Nginx Pod using this file `nginx.yaml`:

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
...@@ -75,7 +74,8 @@ spec:

Apply this conf to your cluster: ```kubectl apply -f nginx.yml```

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-cinderplugin Bound pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb 1Gi RWO cinder-csi 8s
...@@ -83,17 +83,20 @@ csi-pvc-cinderplugin Bound pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb 1Gi

And the volume mounted to the Nginx Pod (wait until the Pod is Running):

```ShellSession
kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/vdb 976M 2.6M 958M 1% /var/lib/www/html
```
## Compatibility with in-tree cloud provider

It is not necessary to enable OpenStack as a cloud provider for Cinder CSI Driver to work.
Though, you can run both the in-tree openstack cloud provider and the Cinder CSI Driver at the same time. The storage class provisioners associated with each of them are named differently.

## Cinder v2 support

For the moment, only Cinder v3 is supported by the CSI Driver.

## More info

For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md).
# Cloud providers

## Provisioning

You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.

## Deploy kubernetes

With ansible-playbook command

```ShellSession
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```
# Comparison

## Kubespray vs [Kops](https://github.com/kubernetes/kops)

Kubespray runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
...@@ -10,8 +11,7 @@ however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.

## Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)

Kubeadm provides domain knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so
......
# Contiv

Here is the [Contiv documentation](http://contiv.github.io/documents/).
...@@ -10,7 +9,6 @@ There are two ways to manage Contiv:

* a web UI managed by the api proxy service
* a CLI named `netctl`

### Interfaces

#### The Web Interface
...@@ -27,7 +25,6 @@ contiv_generate_certificate: true
The default credentials to log in are: admin/admin.

#### The Command Line Interface

The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:
...@@ -44,7 +41,6 @@ contiv_netmaster_port: 9999
The CLI doesn't use the authentication process needed by the web interface.
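A hedged sketch of that step is shown below; the `NETMASTER` variable name and the `netctl` subcommand come from upstream Contiv documentation rather than from this page, and the address is only an example built from the `contiv_netmaster_port: 9999` default above:

```ShellSession
# On a netmaster node, tell netctl where the netmaster API lives (address is an example)
export NETMASTER=http://127.0.0.1:9999
netctl network ls
```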
### Network configuration

The default configuration uses VXLAN to create an overlay. Two networks are created by default:
......
...@@ -6,6 +6,7 @@ Example with Ansible:
Before running the cluster playbook you must satisfy the following requirements:

General CoreOS Pre-Installation Notes:

- Ensure that the bin_dir is set to `/opt/bin`
- ansible_python_interpreter should be `/opt/bin/python`. This will be laid down by the bootstrap task (see the sketch after this list).
- The default resolvconf_mode setting of `docker_dns` **does not** work for CoreOS. This is because we do not edit the systemd service file for docker on CoreOS nodes. Instead, just use the `host_resolvconf` mode. It should work out of the box.
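A minimal sketch of those settings as group vars (the inventory path is an assumption; edit the existing keys in your own inventory's `group_vars/all.yml` rather than blindly appending if they are already defined there):

```ShellSession
# CoreOS nodes: binaries under /opt/bin, the bootstrapped Python interpreter,
# and host_resolvconf instead of the default docker_dns
cat >> inventory/mycluster/group_vars/all.yml <<EOF
bin_dir: /opt/bin
ansible_python_interpreter: /opt/bin/python
resolvconf_mode: host_resolvconf
EOF
```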
......
# CRI-O

[CRI-O] is a lightweight container runtime for Kubernetes.
Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.
...@@ -10,14 +9,14 @@ Kubespray supports basic functionality for using CRI-O as the default container
_To use CRI-O instead of Docker, set the following variables:_

## all.yml

```yaml
download_container: false
skip_downloads: false
```

## k8s-cluster.yml

```yaml
etcd_deployment_type: host
......
# Debian Jessie

Debian Jessie installation Notes:
...@@ -9,7 +8,7 @@ Debian Jessie installation Notes:
to /etc/default/grub. Then update with

```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
...@@ -23,7 +22,7 @@ Debian Jessie installation Notes:
- Add the Ansible repository and install Ansible to get a proper version

```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
...@@ -34,5 +33,4 @@ Debian Jessie installation Notes:
```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
# K8s DNS stack by Kubespray

For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
to serve as an authoritative DNS server for a given ``dns_domain`` and its subdomains.
Other nodes in the inventory, like external storage nodes or a separate etcd cluster
node group, are considered non-cluster, and DNS resolution on them is left up to the user to configure.
## DNS variables

There are several global variables which can be used to modify DNS settings:
### ndots

The ``ndots`` value to be used in ``/etc/resolv.conf``.

It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of the DNS stack, so please choose them wisely.
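For example, a conservative override might look like the following; the value shown is purely illustrative:

```yaml
# k8s-cluster.yml -- illustrative value only
ndots: 2
```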
### searchdomains

Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).

Most Linux systems limit the total number of search domains to 6 and the total length of all search domains
to 256 characters. Depending on the length of ``dns_domain``, you're limited to less than that total.

Please note that ``resolvconf_mode: docker_dns`` will automatically add your system's search domains as
additional search domains. Please take this into account for the limits.
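A sketch with two assumed internal domains:

```yaml
searchdomains:
  - infra.example.internal
  - corp.example.internal
```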
### nameservers

This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the hosts
``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable
is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified).

### upstream_dns_servers

DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet.
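A combined sketch of both variables; all addresses are placeholders:

```yaml
# Backup resolvers written to the hosts' /etc/resolv.conf (host_resolvconf only)
nameservers:
  - 192.168.0.53
# Appended after the cluster DNS in every resolvconf_mode
upstream_dns_servers:
  - 8.8.8.8
  - 1.1.1.1
```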
## DNS modes supported by Kubespray

You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.

### dns_mode

``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:

#### dns_mode: coredns (default)

This installs CoreDNS as the default cluster DNS for all queries.

#### dns_mode: coredns_dual

This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.

#### dns_mode: manual

This does not install coredns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
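As an example, a sketch of the ``manual`` mode; the server address and file placement are placeholders:

```yaml
# k8s-cluster.yml -- sketch
dns_mode: manual
manual_dns_server: 10.233.0.3
```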
#### dns_mode: none

This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.
## resolvconf_mode

``resolvconf_mode`` configures how Kubespray will set up DNS for ``hostNetwork: true`` PODs and non-k8s containers.
There are three modes available:
### resolvconf_mode: docker_dns (default)

This sets up the docker daemon with additional --dns/--dns-search/--dns-opt flags.

The following nameservers are added to the docker daemon (in the same order as listed here):

* cluster nameserver (depends on dns_mode)
* content of optional upstream_dns_servers variable
* host system nameservers (read from hosts /etc/resolv.conf)

The following search domains are added to the docker daemon (in the same order as listed here):

* cluster domains (``default.svc.{{ dns_domain }}``, ``svc.{{ dns_domain }}``)
* content of optional searchdomains variable
* host system search domains (read from hosts /etc/resolv.conf)
The following dns options are added to the docker daemon (see the illustrative sketch after this list):

* ndots:{{ ndots }}
* timeout:2
* attempts:2
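Purely as an illustration of the resulting ordering (the addresses, domains, and the YAML shape below are assumptions, not literal Kubespray output), the docker daemon would end up with settings along these lines:

```yaml
# Illustrative only -- effective dockerd DNS settings in docker_dns mode, assuming
# dns_mode: coredns, one upstream server, and a host resolver of 192.168.0.1
dns:
  - 10.233.0.3              # cluster nameserver (placeholder address)
  - 8.8.8.8                 # from upstream_dns_servers
  - 192.168.0.1             # host system nameserver from /etc/resolv.conf
dns-search:
  - default.svc.cluster.local
  - svc.cluster.local
  - corp.example.internal   # from searchdomains
dns-opts:
  - ndots:2
  - timeout:2
  - attempts:2
```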
Before the cluster DNS is running, DNS queries to the cluster DNS will time out after a few seconds, resulting in the system nameserver being
used as a backup nameserver. After cluster DNS is running, all queries will be answered by the cluster DNS
servers, which in turn will forward queries to the system nameserver if required.
### resolvconf_mode: host_resolvconf

This activates the classic Kubespray behavior that modifies the host's ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).
The cluster DNS server is used as the primary resolver, with the other nameservers as backups.
Also note, existing records will be purged from the `/etc/resolv.conf`,
including resolvconf's base/head/cloud-init config files and those that come from dhclient.
### resolvconf_mode: none

Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that works as expected in most cases.
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
cluster service names.
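For instance, a pod like the following generic sketch (not part of Kubespray) inherits the host's resolver configuration; with ``resolvconf_mode: none`` it would therefore not be able to resolve names such as ``kubernetes.default.svc``:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example
spec:
  hostNetwork: true          # uses the node's /etc/resolv.conf by default
  containers:
    - name: box
      image: busybox
      command: ["sleep", "3600"]
```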
## Nodelocal DNS cache

Setting ``enable_nodelocaldns`` to ``true`` will make pods reach out to the dns (core-dns) caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query core-dns (depending on what main DNS plugin is configured in your cluster) for cache misses of cluster hostnames (``cluster.local`` suffix by default).

More information on the rationale behind this implementation can be found [here](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md).

**As of the 2.10 release, Nodelocal DNS cache is enabled by default.**
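A sketch of the relevant variables; the listen address shown is only the commonly used link-local default and may differ in your release:

```yaml
# k8s-cluster.yml -- sketch
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
```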
## Limitations

* Kubespray does not yet provide a way to configure the Kubedns addon to forward requests that SkyDNS cannot
  answer authoritatively to arbitrary recursive resolvers. This task is left for the future.
# Downloading binaries and containers

Kubespray supports several download/upload modes. The default is:
Container images may be defined by their repo and tag, for example `andyshinn/dnsmasq:2.72`, optionally pinned with a SHA256 digest.

Note, the SHA256 digest and the image tag must both be specified and correspond
to each other. The given example above is represented by the following vars:
```yaml
dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193
dnsmasq_image_repo: andyshinn/dnsmasq
dnsmasq_image_tag: '2.72'
```
The full list of available vars may be found in the download role's defaults. They also allow you to specify custom URLs and
local repositories for binaries and container images. See also the DNS stack docs for the related intranet configuration,
so the hosts can resolve those URLs and repos.
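For example, a hypothetical override pointing the image above at an internal mirror (the registry name is a placeholder; the digest stays the same because the image content is unchanged):

```yaml
dnsmasq_image_repo: registry.example.internal/mirror/andyshinn/dnsmasq
dnsmasq_image_tag: '2.72'
dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193
```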
# Flannel
* A Flannel configuration file should have been created on each node:
```ShellSession
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.0.0/18
FLANNEL_SUBNET=10.233.16.1/24
FLANNEL_IPMASQ=false
```
* Check if the network interface has been created

```ShellSession
ip a show dev flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
```
* Try to run a container and check its ip address

```ShellSession
kubectl run test --image=busybox --command -- tail -f /dev/null
replicationcontroller "test" created
kubectl describe po test-34ozs | grep ^IP
IP:             10.233.16.2
```
```ShellSession
kubectl exec test-34ozs -- ip a show dev eth0
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff
```