    Contiv

    Here is the Contiv documentation.

    Administer Contiv

    There are two ways to manage Contiv:

    • a web UI served by the API proxy service
    • a CLI named netctl

    Interfaces

    The Web Interface

    This UI is hosted on all Kubernetes master nodes. The service is available at https://<one of your master nodes>:10000.

    You can configure the API proxy by overriding the following variables:

    contiv_enable_api_proxy: true
    contiv_api_proxy_port: 10000
    contiv_generate_certificate: true

    The default credentials to log in are: admin/admin.
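
    For example, assuming you keep your cluster settings in an inventory group_vars file (the exact path depends on your inventory layout), a sketch of a non-default setup could look like:

    # inventory/<your-cluster>/group_vars/k8s-cluster.yml (hypothetical path)
    contiv_enable_api_proxy: true
    contiv_api_proxy_port: 10443        # serve the UI on a different port
    contiv_generate_certificate: false  # skip the generated self-signed certificate
                                        # (assumes you provide one by other means)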

    The Command Line Interface

    The second way to modify the Contiv configuration is to use the CLI. To do this, connect to one of the Kubernetes master nodes and export an environment variable so that netctl knows how to reach the netmaster:

    export NETMASTER=http://127.0.0.1:9999

    The port can be changed by overriding the following variable:

    contiv_netmaster_port: 9999

    The CLI doesn't use the authentication process needed by the web interface.
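
    Once NETMASTER is exported, netctl talks to the netmaster API directly. The sketch below is illustrative only; the flags follow upstream Contiv examples and may differ between releases, and the subnet and network name are placeholders:

    export NETMASTER=http://127.0.0.1:9999

    # list the networks known to the netmaster
    netctl network ls

    # create an additional VXLAN network
    netctl network create --encap=vxlan --subnet=10.100.0.0/16 --gateway=10.100.0.1 extra-net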

    Network configuration

    The default configuration uses VXLAN to create an overlay. Two networks are created by default:

    • contivh1: an infrastructure network that allows the nodes to reach the pod IPs. It is mandatory in a Kubernetes environment that uses VXLAN.
    • default-net: the default network that hosts the pods.

    You can change the default network configuration by overriding the contiv_networks variable.
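
    As a sketch of such an override (the subnets are placeholders, and the exact keys should be checked against the role's defaults), a VXLAN setup that renumbers the pod network while keeping the mandatory infrastructure network could look like:

    contiv_networks:
      - name: contivh1            # infrastructure network, required with VXLAN
        subnet: "10.233.64.0/18"  # placeholder subnet
        encap: vxlan
      - name: default-net
        subnet: "{{ kube_pods_subnet }}"
        gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
        encap: vxlan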

    The default forward mode is set to routing:

    contiv_fwd_mode: routing

    The following is an example of how you can use VLAN instead of VXLAN:

    contiv_fwd_mode: bridge
    contiv_vlan_interface: eth0
    contiv_networks:
      - name: default-net
        subnet: "{{ kube_pods_subnet }}"
        gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
        encap: vlan
        pkt_tag: 10
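
    Once the cluster has converged, you can verify the applied settings from a master node with NETMASTER exported as shown above; the exact output format varies by Contiv version:

    netctl global info   # should report the configured forwarding mode (bridge in the VLAN example)
    netctl network ls    # should list default-net with the expected encapsulation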