Commit e1600188 authored by AtzeDeVries

Fixed conflicts, ipip:true as default and added ipip_mode

parents f5ef02d4 99202328
Showing with 85 additions and 76 deletions
@@ -96,7 +96,7 @@ You need to edit your inventory and add:
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
-Here's an example of Kargo inventory with route reflectors:
+Here's an example of Kubespray inventory with route reflectors:
```
[all]
@@ -145,11 +145,11 @@ cluster_id="1.0.0.1"
The inventory above will deploy the following topology assuming that calico's
`global_as_num` is set to `65400`:
-![Image](figures/kargo-calico-rr.png?raw=true)
+![Image](figures/kubespray-calico-rr.png?raw=true)
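For reference, a minimal sketch of such an inventory (hostnames, IPs and the group layout here are illustrative assumptions; the collapsed example above holds the real one):
```
[all]
node1 ansible_ssh_host=10.99.0.2
rr0 ansible_ssh_host=10.99.0.10

[kube-master]
node1

[calico-rr]
rr0

[calico-rr:vars]
cluster_id="1.0.0.1"
```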
##### Optional: Define default endpoint-to-host action
-By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kargo) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) withing the same node are dropped.
+By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in Kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to service endpoints (with hostNetwork=True) within the same node are dropped.
To re-define the default action, set the following variable in your inventory:
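The variable itself sits in the collapsed hunk below; a sketch of the likely setting (the variable name is an assumption based on Calico's endpoint-to-host semantics):
```
# group_vars/k8s-cluster.yml -- variable name assumed
calico_endpoint_to_host_action: "RETURN"
```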
......
@@ -3,17 +3,17 @@ Cloud providers
#### Provisioning
-You can use kargo-cli to start new instances on cloud providers
+You can use kubespray-cli to start new instances on cloud providers
Here's an example:
```
-kargo [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
+kubespray [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
```
#### Deploy kubernetes
-With kargo-cli
+With kubespray-cli
```
-kargo deploy [--aws|--gce] -u admin
+kubespray deploy [--aws|--gce] -u admin
```
Or with the ansible-playbook command:
......
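The exact command is collapsed above; a sketch of what the equivalent ansible-playbook invocation typically looks like (inventory path, user and cloud flag are assumptions):
```
ansible-playbook -i inventory/inventory.cfg -u admin -b cluster.yml -e cloud_provider=aws
```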
-Kargo vs [Kops](https://github.com/kubernetes/kops)
+Kubespray vs [Kops](https://github.com/kubernetes/kops)
---------------
-Kargo runs on bare metal and most clouds, using Ansible as its substrate for
+Kubespray runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
itself, and as such is less flexible in deployment platforms. For people with
familiarity with Ansible, existing Ansible deployments or the desire to run a
-Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops,
+Kubernetes cluster across multiple platforms, Kubespray is a good choice. Kops,
however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.
-Kargo vs [Kubeadm](https://github.com/kubernetes/kubeadm)
+Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)
------------------
Kubeadm provides domain knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so
on. Had it belonged to the new [operators world](https://coreos.com/blog/introducing-operators.html),
-it would've likely been named a "Kubernetes cluster operator". Kargo however,
+it would've likely been named a "Kubernetes cluster operator". Kubespray, however,
does generic configuration management tasks from the "OS operators" ansible
world, plus some initial K8s clustering (with networking plugins included) and
-control plane bootstrapping. Kargo [strives](https://github.com/kubernetes-incubator/kargo/issues/553)
+control plane bootstrapping. Kubespray [strives](https://github.com/kubernetes-incubator/kubespray/issues/553)
to adopt kubeadm as a tool in order to consume life cycle management domain
knowledge from it and offload generic OS configuration things from it, which
hopefully benefits both sides.
CoreOS bootstrap
===============
-Example with **kargo-cli**:
+Example with **kubespray-cli**:
```
-kargo deploy --gce --coreos
+kubespray deploy --gce --coreos
```
Or with Ansible:
......
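The collapsed Ansible variant usually boils down to pointing the plays at the Python interpreter that bootstrap-os installs on CoreOS; a sketch (user and paths are assumptions):
```
ansible-playbook -i inventory/inventory.cfg -u core -b cluster.yml \
  -e ansible_python_interpreter=/opt/bin/python
```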
-K8s DNS stack by Kargo
+K8s DNS stack by Kubespray
======================
-For K8s cluster nodes, kargo configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
+For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
to serve as an authoritative DNS server for a given ``dns_domain`` and its
``svc, default.svc`` default subdomains (a total of ``ndots: 5`` max levels).
@@ -44,13 +44,13 @@ DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode``
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
-DNS modes supported by kargo
+DNS modes supported by Kubespray
============================
-You can modify how kargo sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
+You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
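For example, a group_vars sketch that states the defaults explicitly (both values are the defaults named below):
```
# group_vars/k8s-cluster.yml
dns_mode: dnsmasq_kubedns     # default; alternative modes described below
resolvconf_mode: docker_dns   # default; alternative modes described below
```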
## dns_mode
-``dns_mode`` configures how kargo will setup cluster DNS. There are three modes available:
+``dns_mode`` configures how Kubespray will set up cluster DNS. There are three modes available:
#### dnsmasq_kubedns (default)
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
@@ -67,7 +67,7 @@ This does not install any of dnsmasq and kubedns/skydns. This basically disables
leaves you with a non-functional cluster.
## resolvconf_mode
-``resolvconf_mode`` configures how kargo will setup DNS for ``hostNetwork: true`` PODs and non-k8s containers.
+``resolvconf_mode`` configures how Kubespray will set up DNS for ``hostNetwork: true`` pods and non-k8s containers.
There are three modes available:
#### docker_dns (default)
@@ -100,7 +100,7 @@ used as a backup nameserver. After cluster DNS is running, all queries will be a
servers, which in turn will forward queries to the system nameserver if required.
#### host_resolvconf
-This activates the classic kargo behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
+This activates the classic Kubespray behaviour that modifies the host's ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
As cluster DNS is not available at the early deployment stage, this mode is split into 2 stages. In the first
@@ -120,7 +120,7 @@ cluster service names.
Limitations
-----------
-* Kargo has yet ways to configure Kubedns addon to forward requests SkyDns can
+* Kubespray does not yet offer a way to configure the Kubedns add-on to forward requests SkyDns can
not answer with authority to arbitrary recursive resolvers. This task is left
for the future. See [official SkyDns docs](https://github.com/skynetservices/skydns)
for details.
......
Downloading binaries and containers
===================================
-Kargo supports several download/upload modes. The default is:
+Kubespray supports several download/upload modes. The default is:
* Each node downloads binaries and container images on its own, which is
``download_run_once: False``.
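A sketch of the alternative mode, where artifacts are fetched once and pushed to the other nodes (``download_localhost`` is assumed to be the companion toggle for downloading on the Ansible host):
```
# group_vars/all.yml
download_run_once: True
download_localhost: True   # assumed companion setting
```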
......
Getting started
===============
-The easiest way to run the deployement is to use the **kargo-cli** tool.
-A complete documentation can be found in its [github repository](https://github.com/kubespray/kargo-cli).
+The easiest way to run the deployment is to use the **kubespray-cli** tool.
+Complete documentation can be found in its [github repository](https://github.com/kubespray/kubespray-cli).
Here is a simple example on AWS:
* Create instances and generate the inventory
```
-kargo aws --instances 3
+kubespray aws --instances 3
```
* Run the deployment
```
-kargo deploy --aws -u centos -n calico
+kubespray deploy --aws -u centos -n calico
```
Building your own inventory
@@ -23,12 +23,12 @@ Building your own inventory
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
-[here](https://github.com/kubernetes-incubator/kargo/blob/master/inventory/inventory.example).
+[here](https://github.com/kubernetes-incubator/kubespray/blob/master/inventory/inventory.example).
You can use an
-[inventory generator](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py)
+[inventory generator](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
-functionality and is only use for making a basic Kargo cluster, but it does
+functionality and is only used for making a basic Kubespray cluster, but it does
support creating large clusters. It now supports separating the etcd and
Kubernetes master roles from the node role when the cluster size exceeds a
certain threshold. Run inventory.py help for more information.
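A sketch of a typical invocation (the CONFIG_FILE path and the IPs are placeholders; see ``inventory.py help`` for the real interface):
```
CONFIG_FILE=inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py \
  10.10.1.3 10.10.1.4 10.10.1.5
```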
......
@@ -22,7 +22,7 @@ Kube-apiserver
--------------
K8s components require a loadbalancer to access the apiservers via a reverse
-proxy. Kargo includes support for an nginx-based proxy that resides on each
+proxy. Kubespray includes support for an nginx-based proxy that resides on each
non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
@@ -30,12 +30,12 @@ where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to `True`).
You may also define the port the local internal loadbalancer uses by changing
`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
-It is also import to note that Kargo will only configure kubelet and kube-proxy
+It is also important to note that Kubespray will only configure kubelet and kube-proxy
on non-master nodes to use the local internal loadbalancer.
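Putting those variables together, a group_vars sketch (the port value is illustrative):
```
# group_vars/all.yml
loadbalancer_apiserver_localhost: true
nginx_kube_apiserver_port: 8383   # illustrative; defaults to kube_apiserver_port
```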
If you choose to NOT use the local internal loadbalancer, you will need to configure
your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to
-a user and is not covered by ansible roles in Kargo. By default, it only configures
+the user and is not covered by Ansible roles in Kubespray. By default, it only configures
a non-HA endpoint, which points to the `access_ip` or IP address of the first server
node in the `kube-master` group. It can also configure clients to use endpoints
for a given loadbalancer type. The following diagram shows how traffic to the
......
Network Checker Application
===========================
-With the ``deploy_netchecker`` var enabled (defaults to false), Kargo deploys a
+With the ``deploy_netchecker`` var enabled (defaults to false), Kubespray deploys a
Network Checker Application from the third-party `l23network/k8s-netchecker` Docker
images. It consists of a server and agents that try to reach the server over the
usual network connectivity paths of Kubernetes applications. Therefore, this
@@ -17,7 +17,7 @@ any of the cluster nodes:
```
curl http://localhost:31081/api/v1/connectivity_check
```
-Note that Kargo does not invoke the check but only deploys the application, if
+Note that Kubespray does not invoke the check but only deploys the application, if
requested.
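Enabling the check is a one-variable change; a sketch:
```
# group_vars/k8s-cluster.yml
deploy_netchecker: true
```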
There are related application-specific variables:
......
-Kargo's roadmap
+Kubespray's roadmap
=================
### Kubeadm
- Propose kubeadm as an option in order to set up the Kubernetes cluster.
-That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kargo/issues/553)
+That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kubespray/issues/553)
-### Self deployment (pull-mode) [#320](https://github.com/kubespray/kargo/issues/320)
+### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker/rkt and the etcd cluster
- the following data would be inserted into etcd: certs, tokens, users, inventory, group_vars.
- a "kubespray" container would be deployed (kargo-cli, ansible-playbook, kpm)
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook, kpm)
- to be discussed, a way to provide the inventory
-- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kargo/issues/321)
+- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
### Provisioning and cloud providers
- [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- [ ] On AWS autoscaling, multi AZ
-- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kargo/issues/297)
-- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kargo/issues/280)
-- [x] **TLS boostrap** support for kubelet [#234](https://github.com/kubespray/kargo/issues/234)
+- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
+- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
+- [x] **TLS bootstrap** support for kubelet [#234](https://github.com/kubespray/kubespray/issues/234)
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
https://github.com/kubernetes/kubernetes/issues/18112)
@@ -37,14 +37,14 @@ That would probably improve deployment speed and certs management [#553](https:/
- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
### Lifecycle
-- [ ] Adopt the kubeadm tool by delegating CM tasks it is capable to accomplish well [#553](https://github.com/kubespray/kargo/issues/553)
-- [x] Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kargo/issues/154)
+- [ ] Adopt the kubeadm tool by delegating CM tasks it is capable of accomplishing well [#553](https://github.com/kubespray/kubespray/issues/553)
+- [x] Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kubespray/issues/154)
- [ ] Drain worker node when shutting down/deleting an instance
- [ ] Upgrade granularity: select components to upgrade and skip others
### Networking
-- [ ] romana.io support [#160](https://github.com/kubespray/kargo/issues/160)
-- [ ] Configure network policy for Calico. [#159](https://github.com/kubespray/kargo/issues/159)
+- [ ] romana.io support [#160](https://github.com/kubespray/kubespray/issues/160)
+- [ ] Configure network policy for Calico. [#159](https://github.com/kubespray/kubespray/issues/159)
- [ ] Opencontrail
- [x] Canal
- [x] Cloud Provider native networking (instead of our network plugins)
@@ -53,14 +53,14 @@ That would probably improve deployment speed and certs management [#553](https:/
- (to be discussed) option to set a loadbalancer for the apiservers like ucarp/pacemaker/keepalived
While waiting for the issue [kubernetes/kubernetes#18174](https://github.com/kubernetes/kubernetes/issues/18174) to be fixed.
-### Kargo-cli
+### Kubespray-cli
- Delete instances
-- `kargo vagrant` to setup a test cluster locally
-- `kargo azure` for Microsoft Azure support
+- `kubespray vagrant` to set up a test cluster locally
+- `kubespray azure` for Microsoft Azure support
- switch to Terraform instead of Ansible for provisioning
- update $HOME/.kube/config when a cluster is deployed. Optionally switch to this context
-### Kargo API
+### Kubespray API
- Perform all actions through an **API**
- Store inventories / configurations of multiple clusters
- make sure that the cluster state is completely saved in no more than one config file beyond the hosts inventory
@@ -87,8 +87,8 @@ Include optional deployments to init the cluster:
### Others
- remove nodes (adding is already supported)
- being able to choose any k8s version (almost done)
-- **rkt** support [#59](https://github.com/kubespray/kargo/issues/59)
+- **rkt** support [#59](https://github.com/kubespray/kubespray/issues/59)
- Review documentation (split in categories)
- **consul** -> if officially supported by k8s
-- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kargo/issues/312)
-- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kargo/issues/329)
+- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
+- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kubespray/issues/329)
-Upgrading Kubernetes in Kargo
+Upgrading Kubernetes in Kubespray
=============================
#### Description
-Kargo handles upgrades the same way it handles initial deployment. That is to
+Kubespray handles upgrades the same way it handles initial deployment. That is to
say that each component is laid down in a fixed order. You should be able to
-upgrade from Kargo tag 2.0 up to the current master without difficulty. You can
+upgrade from Kubespray tag 2.0 up to the current master without difficulty. You can
also individually control versions of components by explicitly defining their
versions. Here are all version vars for each component:
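The full list is collapsed below; a sketch of the pattern (the version values are illustrative, not recommendations):
```
# group_vars/k8s-cluster.yml -- values illustrative
kube_version: v1.4.6
etcd_version: v3.0.6
calico_version: v0.23.0
```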
@@ -35,7 +35,7 @@ ansible-playbook cluster.yml -i inventory/inventory.cfg -e kube_version=v1.4.6
#### Graceful upgrade
-Kargo also supports cordon, drain and uncordoning of nodes when performing
+Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube-master already
......
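For reference, a sketch of the graceful-upgrade invocation (inventory path and target version are assumptions):
```
ansible-playbook upgrade-cluster.yml -b -i inventory/inventory.cfg -e kube_version=v1.4.6
```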
-Configurable Parameters in Kargo
+Configurable Parameters in Kubespray
================================
#### Generic Ansible variables
@@ -12,7 +12,7 @@ Some variables of note include:
* *ansible_default_ipv4.address*: IP address Ansible automatically chooses.
Generated based on the output from the command ``ip -4 route get 8.8.8.8``
-#### Common vars that are used in Kargo
+#### Common vars that are used in Kubespray
* *calico_version* - Specify version of Calico to use
* *calico_cni_version* - Specify version of Calico CNI plugin to use
@@ -35,16 +35,16 @@ Some variables of note include:
* *access_ip* - IP for other hosts to use to connect to. Often required when
deploying from a cloud, such as OpenStack or GCE and you have separate
public/floating and private IPs.
-* *ansible_default_ipv4.address* - Not Kargo-specific, but it is used if ip
+* *ansible_default_ipv4.address* - Not Kubespray-specific, but it is used if ip
and access_ip are undefined
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
address instead of localhost for kube-masters and kube-master[0] for
kube-nodes. See more details in the
-[HA guide](https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md).
+[HA guide](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md).
* *loadbalancer_apiserver_localhost* - makes all hosts connect to
the internally load-balanced apiserver endpoint. Mutually exclusive with
`loadbalancer_apiserver`. See more details in the
-[HA guide](https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md).
+[HA guide](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md).
#### Cluster variables
@@ -79,13 +79,13 @@ other settings from your existing /etc/resolv.conf are lost. Set the following
variables to match your requirements.
* *upstream_dns_servers* - Array of upstream DNS servers configured on host in
-addition to Kargo deployed DNS
+addition to the Kubespray-deployed DNS
* *nameservers* - Array of DNS servers configured for use in dnsmasq
* *searchdomains* - Array of up to 4 search domains
* *skip_dnsmasq* - Don't set up dnsmasq (use only KubeDNS)
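A sketch combining these host-DNS variables (all addresses and domains are placeholders):
```
# group_vars/all.yml -- addresses and domains are placeholders
upstream_dns_servers:
  - 8.8.8.8
nameservers:
  - 10.0.0.2
searchdomains:
  - example.internal
skip_dnsmasq: false
```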
For more information, see [DNS
-Stack](https://github.com/kubernetes-incubator/kargo/blob/master/docs/dns-stack.md).
+Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-stack.md).
#### Other service variables
@@ -114,5 +114,5 @@ The possible vars are:
#### User accounts
-Kargo sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
+Kubespray sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
passwords default to ``changeme``. You can change this by setting ``kube_api_pwd``.
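For example (the value is a placeholder; pick a strong secret):
```
# group_vars/k8s-cluster.yml
kube_api_pwd: "change-me-to-a-strong-secret"
```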
@@ -39,7 +39,7 @@ vault group.
It is *highly* recommended that these secrets are removed from the servers after
your cluster has been deployed, and kept in a safe location of your choosing.
Naturally, the seriousness of the situation depends on what you're doing with
-your Kargo cluster, but with these secrets, an attacker will have the ability
+your Kubespray cluster, but with these secrets, an attacker will have the ability
to authenticate to almost everything in Kubernetes and decode all private
(HTTPS) traffic on your network signed by Vault certificates.
......
@@ -11,7 +11,7 @@
- hosts: localhost
gather_facts: False
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- hosts: k8s-cluster:etcd:calico-rr
@@ -22,7 +22,7 @@
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: k8s-cluster:etcd:calico-rr
@@ -34,7 +34,7 @@
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
#Handle upgrades to master components first to maintain backwards compat.
@@ -42,7 +42,7 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: 1
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: kubernetes/master, tags: master }
@@ -53,8 +53,8 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: "{{ serial | default('20%') }}"
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: upgrade/post-upgrade, tags: post-upgrade }
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
@@ -83,6 +83,9 @@ bin_dir: /usr/local/bin
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2
+# Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
+#docker_dns_servers_strict: false
## Default packages to install within the cluster, e.g.:
#kpm_packages:
# - name: kube-system/grafana
......
@@ -133,3 +133,8 @@ efk_enabled: false
# Helm deployment
helm_enabled: false
+# dnsmasq
+# dnsmasq_upstream_dns_servers:
+# - /resolvethiszone.with/10.0.4.250
+# - 8.8.8.8
@@ -66,7 +66,7 @@ options:
description:
- present handles checking existence or creating if definition file provided,
absent handles deleting resource(s) based on other options,
-latest handles creating ore updating based on existence,
+latest handles creating or updating based on existence,
reloaded handles updating resource(s) definition using definition file,
stopped handles stopping resource(s) based on other options.
requirements:
......
@@ -14,5 +14,5 @@
when: reset_confirmation != "yes"
roles:
-- { role: kargo-defaults}
+- { role: kubespray-defaults}
- { role: reset, tags: reset }
@@ -2,3 +2,4 @@
pypy_version: 2.4.0
pip_python_modules:
- httplib2
+- six
\ No newline at end of file