    Add etcd proxy support · 32cd6e99
    Bogdan Dobrelya authored
    
    * Enforce an etcd-proxy role on k8s-cluster group members. This
    provides an HA layout for all of the k8s cluster internal clients.
    * Proxies run on each node in the group as separate etcd
    instances in readwrite proxy mode, listening on the given endpoint,
    which is either access_ip:2379 or localhost:2379.
    * The notion of 'kube_etcd_multiaccess' is: ignore endpoints and
    load balancers and use the etcd members' IPs as a comma-separated
    list. Otherwise, clients shall use the local endpoint provided by the
    etcd-proxy instance on each etcd node. Networking plugins always
    use that access mode.
    * Fix apiserver's etcd servers args to use the etcd_access_endpoint.
    * Fix networking plugins flannel/calico to use the etcd_endpoint.
    * Fix the name env var so it is set for non-masters as well.
    * Fix etcd_client_url, which was not used anywhere, and deduplicate
    the evaluation of the other etcd_* facts.
    * Define proxy modes only in the env file, if not a master. Drop
    automatic proxy mode decisions for etcd nodes in init/unit scripts.
    * Use Wants= instead of Requires=, as "This is the recommended way to
    hook start-up of one unit to the start-up of another unit".
    * Make apiserver/calico Wants= etcd-proxy to keep it always up.
    
    Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
    Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
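
For illustration, here is a minimal sketch of the proxy configuration described above, using etcd's standard environment variables (ETCD_PROXY, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_CLUSTER); the file path, addresses, and member names are hypothetical, not taken from this commit:

# /etc/etcd-proxy.env -- sketch only; path, addresses, and member names are assumed.
# 'readwrite' is the proxy mode named in the commit message.
ETCD_PROXY=readwrite
# Listen on the node's access_ip (or localhost) at port 2379, as described above.
ETCD_LISTEN_CLIENT_URLS=http://10.3.0.1:2379
# The actual cluster members the proxy forwards client requests to.
ETCD_INITIAL_CLUSTER=etcd1=http://10.3.0.1:2380,etcd2=http://10.3.0.2:2380,etcd3=http://10.3.0.3:2380

The apiserver change maps onto kube-apiserver's --etcd-servers flag; the values below illustrate the two access modes and are not copied from the commit:

# kube_etcd_multiaccess enabled: pass the etcd members' IPs as a comma-separated list.
--etcd-servers=http://10.3.0.1:2379,http://10.3.0.2:2379,http://10.3.0.3:2379
# Otherwise: use the local etcd-proxy endpoint on each node.
--etcd-servers=http://127.0.0.1:2379

And the Wants= relationship from the last two bullets, as a unit-file fragment (unit names are assumed):

# kube-apiserver.service fragment (sketch). Wants= pulls etcd-proxy up alongside
# the apiserver but, unlike Requires=, does not stop this unit if the proxy fails.
[Unit]
Wants=etcd-proxy.service
After=etcd-proxy.service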

Ansible variables

Inventory

The inventory is composed of 3 groups:

  • kube-node: list of kubernetes nodes where the pods will run.
  • kube-master: list of servers where the kubernetes master components (apiserver, scheduler, controller) will run. Note: if you want a server to act as both master and node, it must be defined in both the kube-master and kube-node groups.
  • etcd: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.

Below is a complete inventory example:

## Configure the 'ip' variable to bind kubernetes services on a
## different ip than the default iface's address
node1 ansible_ssh_host=95.54.0.12  # ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13  # ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14  # ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15  # ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16  # ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17  # ip=10.3.0.6

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4
node5
node6

[k8s-cluster:children]
kube-node
kube-master
etcd

Group vars

The main variables to change are located in the file inventory/group_vars/all.yml.
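
As an illustration, a minimal sketch of one setting in that file, using the kube_etcd_multiaccess variable described in the commit message above (the value shown is an assumption, not necessarily the shipped default):

# inventory/group_vars/all.yml (sketch; the value below is an assumption).
# true:  clients ignore endpoints and load balancers and use the etcd members'
#        IPs as a comma-separated list.
# false: clients use the local endpoint provided by the etcd-proxy instance.
kube_etcd_multiaccess: false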