
# kubespray

    Bogdan Dobrelya authored
* Enforce an etcd-proxy role on k8s-cluster group members. This
provides an HA layout for all of the k8s cluster's internal clients.
* Proxies run on each node in the group as separate etcd
instances in readwrite proxy mode, listening on the given endpoint,
which is either access_ip:2379 or localhost:2379.
* The notion of 'kube_etcd_multiaccess' is: ignore endpoints and
load balancers and use the etcd members' IPs as a comma-separated
list. Otherwise, clients use the local endpoint provided by the
etcd-proxy instance on each etcd node. Networking plugins always
use that access mode.
* Fix the apiserver's etcd servers args to use the etcd_access_endpoint.
* Fix the networking plugins flannel/calico to use the etcd_endpoint.
* Fix the name env var so it is set for non-masters as well.
* Fix etcd_client_url, which was not used anywhere, and deduplicate
the etcd_* facts evaluation that was repeated in a few places.
* Define proxy modes only in the env file, if not a master. Drop
the automatic proxy mode decisions for etcd nodes in init/unit scripts.
* Use Wants= instead of Requires=, as "This is the recommended way to
hook start-up of one unit to the start-up of another unit"
(a unit-file sketch follows below).
* Make apiserver/calico Wants= etcd-proxy to keep it always up.
    
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
    32cd6e99
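To make the Wants= versus Requires= point concrete, here is a minimal sketch of a systemd drop-in expressing that dependency. The unit names (kube-apiserver.service, etcd-proxy.service) and the drop-in path are illustrative assumptions, not necessarily the exact files Kargo writes:

```sh
# Hypothetical drop-in hooking kube-apiserver start-up to etcd-proxy.
# Wants= pulls etcd-proxy in at start-up but, unlike Requires=, does
# not take kube-apiserver down if etcd-proxy fails or restarts.
sudo mkdir -p /etc/systemd/system/kube-apiserver.service.d
sudo tee /etc/systemd/system/kube-apiserver.service.d/10-etcd-proxy.conf <<'EOF'
[Unit]
Wants=etcd-proxy.service
After=etcd-proxy.service
EOF
sudo systemctl daemon-reload
```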

    Kubespray Logo

## Deploy a production-ready Kubernetes cluster

If you have questions, you can invite yourself to chat with us on Slack!

• Can be deployed on AWS, GCE, OpenStack, or bare metal
• Highly available cluster
• Composable (choice of the network plugin, for instance)
• Supports most popular Linux distributions
• Continuous integration tests

To deploy the cluster you can use (see the example below):

• kargo-cli
• the usual Ansible commands
• vagrant, by simply running `vagrant up` (for test purposes)
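For instance, a plain Ansible deployment might look like the following sketch; the inventory path and playbook name (cluster.yml) are assumptions based on a typical checkout, so adjust them to your layout:

```sh
# Hypothetical invocation from the repository root; adjust the
# inventory path to match your own setup.
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v
```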

## Supported Linux distributions

    • CoreOS
    • Debian Wheezy, Jessie
    • Ubuntu 14.10, 15.04, 15.10, 16.04
    • Fedora 23
    • CentOS/RHEL 7

## Versions

• kubernetes v1.3.0
• etcd v3.0.1
• calicoctl v0.20.0
• flanneld v0.5.5
• weave v1.5.0
• docker v1.10.3

## Requirements

• The target servers must have access to the Internet in order to pull Docker images.
• Firewalls are not managed; you'll need to implement your own rules as you usually do. To avoid any issues during deployment, you should disable your firewall.
• Copy your SSH keys to all the servers that are part of your inventory.
• Ansible v2.x and python-netaddr (installation sketched below)
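As a sketch, both Python-side requirements can be installed with pip on the machine that runs Ansible; `ansible` and `netaddr` are the PyPI package names (your distribution may ship them as python-netaddr and similar):

```sh
# Install the control-machine dependencies via pip (PyPI names;
# distro packages such as python-netaddr work equally well).
pip install 'ansible>=2.0' netaddr
```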

## Network plugins

You can choose between three network plugins (default: flannel with the vxlan backend):

• flannel: GRE/VXLAN (layer 2) networking.

• calico: BGP (layer 3) networking.

• weave: a lightweight container overlay network that doesn't require an external K/V database cluster.
  (Please refer to the weave troubleshooting documentation.)

The choice is defined with the variable `kube_network_plugin`.
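As a sketch, the plugin can be overridden per run with an extra variable, reusing the hypothetical invocation from above; the variable can equally be set in your inventory's group_vars:

```sh
# Deploy with Calico instead of the flannel default; kube_network_plugin
# may also be set in group_vars rather than on the command line.
ansible-playbook -i inventory/inventory.cfg cluster.yml \
  -e kube_network_plugin=calico
```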

## CI Tests


    Google Compute Engine

|                 | Calico       | Flannel      | Weave        |
| --------------- | ------------ | ------------ | ------------ |
| Ubuntu Xenial   | Build Status | Build Status | Build Status |
| CentOS 7        | Build Status | Build Status | Build Status |
| CoreOS (stable) | Build Status | Build Status | Build Status |

CI tests are sponsored by Google (GCE) and by teuto.net (OpenStack).