  1. Jan 23, 2018
  2. Jan 22, 2018
  3. Jan 18, 2018
  4. Jan 12, 2018
  5. Jan 09, 2018
  6. Jan 05, 2018
  7. Jan 03, 2018
  8. Jan 02, 2018
  9. Dec 25, 2017
    • Update Kubernetes to v1.9.0 (#2100) · ad6fecef
      Matthew Mosesohn authored
      Update checksum for kubeadm
      Use v1.9.0 kubeadm params
      Include hash of ca.crt for kubeadm join
      Update tag for testing upgrades
      Add workaround for testing upgrades
      Remove scale CI scenarios because of slow inventory parsing in Ansible 2.4.x.

      Change region for tests to us-central1 to improve Ansible performance.
  10. Dec 23, 2017
  11. Dec 22, 2017
  12. Dec 20, 2017
  13. Dec 19, 2017
  14. Dec 12, 2017
  15. Dec 11, 2017
  16. Dec 06, 2017
  17. Dec 05, 2017
    • Support for disabling apiserver insecure port · b8788421
      Chad Swenson authored
      This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).
      
      Rework of #1937 with kubeadm support.

      Also fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key.
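      As an illustrative sketch (not part of the commit), disabling the insecure
      listener in a Kubespray inventory could look like the group_vars snippet
      below; the variable name comes from the commit message, while the file
      path is an assumption.

      # inventory/group_vars/k8s-cluster.yml (path is an assumption)
      # Setting the insecure port to 0 removes the apiserver's plain-HTTP
      # listener, so clients and health checks must use the secure port.
      kube_apiserver_insecure_port: 0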
  18. Nov 29, 2017
    • Allow setting --bind-address for apiserver hyperkube (#1985) · d39a88d6
      Steven Hardy authored
      * Allow setting --bind-address for apiserver hyperkube
      
      This is required if you wish to run a loadbalancer (e.g. haproxy) on the
      master nodes without choosing a different port for the VIP from the one
      used by the API: in that case the API needs to bind to a specific
      interface, so that haproxy can bind the same port on the VIP:
      
      [root@overcloud-controller-0 ~]# netstat -taupen | grep 6443
      tcp        0      0 192.168.24.6:6443       0.0.0.0:*               LISTEN      0          680613     134504/haproxy
      tcp        0      0 192.168.24.16:6443      0.0.0.0:*               LISTEN      0          653329     131423/hyperkube
      tcp        0      0 192.168.24.16:6443      192.168.24.16:58404     ESTABLISHED 0          652991     131423/hyperkube
      tcp        0      0 192.168.24.16:58404     192.168.24.16:6443      ESTABLISHED 0          652986     131423/hyperkube
      
      This can be achieved e.g. via:
      
      kube_apiserver_bind_address: 192.168.24.16
      
      * Address code review feedback
      
      * Update kube-apiserver.manifest.j2
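      A minimal sketch (not part of the commit) of how these pieces fit together
      in a Kubespray inventory: the bind address and VIP below are the example
      addresses from the message, and the loadbalancer_apiserver block is an
      assumption based on Kubespray's HA configuration.

      # Bind the apiserver to the node's own address so haproxy can own the VIP
      kube_apiserver_bind_address: 192.168.24.16
      # VIP served by haproxy on the same port (assumed variable layout)
      loadbalancer_apiserver:
        address: 192.168.24.6
        port: 6443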
    • contiv network support (#1914) · e5d353d0
      unclejack authored
      
      
      * Add Contiv support
      
      Contiv is a network plugin for Kubernetes and Docker. It supports
      VLAN/VXLAN/BGP/Cisco ACI technologies, firewall policies, multiple
      networks, and bridging pods onto physical networks.
      
      * Update contiv version to 1.1.4
      
      Update contiv version to 1.1.4 and add SVC_SUBNET in contiv-config.
      
      * Load the openvswitch module as a workaround on CentOS 7.4
      
      * Set contiv cni version to 0.1.0
      
      Correct contiv CNI version to 0.1.0.
      
      * Use kube_apiserver_endpoint for K8S_API_SERVER
      
      Use kube_apiserver_endpoint as K8S_API_SERVER so that Contiv talks to an
      available endpoint whether or not a loadbalancer is in place.
      
      * Make contiv use its own etcd
      
      Before this commit, Contiv used an etcd proxy pointed at the k8s etcd.
      This works fine when the etcd hosts are co-located with the Contiv etcd
      proxy; however, the k8s peering certs only exist on the etcd group, so
      the etcd proxy cannot peer with the k8s etcd running there. In addition,
      the netplugin always tries to find the etcd endpoint on localhost, which
      breaks any netplugin not running on an etcd group node.
      This commit makes Contiv use its own etcd, separate from the k8s one.
      On kube-master nodes (where netmaster runs) it runs in leader mode, and
      on all other nodes it runs in proxy mode.
      
      * Use cp instead of rsync to copy cni binaries
      
      Since rsync has been removed from hyperkube, this commit changes it
      to use cp instead.
      
      * Make contiv-etcd able to run on master nodes
      
      * Add rbac_enabled flag for contiv pods
      
      * Add contiv into CNI network plugin lists
      
      * migrate contiv test to tests/files
      
      Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
      
      * Add required rules for contiv netplugin
      
      * Better handling of the JSON return of fwdMode
      
      * Make contiv etcd port configurable
      
      * Use default var instead of templating
      
      * roles/download/defaults/main.yml: use contiv 1.1.7
      
      Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
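      A rough sketch (not shown in the commit message) of how this plugin is
      selected in a Kubespray inventory: kube_network_plugin is the standard
      plugin selector, while the etcd port variable below is hypothetical,
      since the commit only says the port was made configurable without naming
      the variable.

      # Select Contiv as the network plugin: netmaster runs on kube-master
      # nodes and netplugin on all nodes, with Contiv's own etcd in leader
      # mode on masters and proxy mode elsewhere, as described above.
      kube_network_plugin: contiv
      # Hypothetical name for the configurable Contiv etcd port:
      # contiv_etcd_listen_port: 6666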
    • Di Xu · de422c82
  19. Nov 23, 2017
  20. Nov 15, 2017
  21. Nov 14, 2017
  22. Nov 13, 2017
  23. Nov 08, 2017
    • Master component and kubelet container upgrade fixes · e9f795c5
      Chad Swenson authored
      * Fixes an issue where the apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets changed. This occurred when a replaced kubelet did not reconcile new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail.
      * Improves transitions from the kubelet container to the host kubelet by preventing the kubelet container from reappearing during the deployment.
  24. Nov 06, 2017
    • Support for disabling apiserver insecure port · 0c7e1889
      Chad Swenson authored
      This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:
      
      1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
      2. Add apiserver client cert/key to the "Wait for apiserver up" checks
      3. Update apiserver liveness probe to use HTTPS ports
      4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
      5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.
      
      Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.
      
      Options:
      
      1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
      2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
      3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)
      
      Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
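      A minimal sketch of the coupled settings described above, assuming
      Kubespray's kube_api_anonymous_auth variable (named in the message) and
      an authorization_modes list (not named in the message, so treat it as an
      assumption):

      # Disable the plain-HTTP listener; probes then hit /healthz over HTTPS
      kube_apiserver_insecure_port: 0
      # Anonymous requests fall into system:unauthenticated, which RBAC's
      # default ClusterRoleBindings allow to reach /healthz
      kube_api_anonymous_auth: true
      # RBAC must be enabled for that anonymous /healthz access to be
      # authorized (variable name assumed)
      authorization_modes: ["RBAC"]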
    • Günther Grill
    • Add support for cAdvisor (#1908) · ad0cd693
      Haiwei Liu authored
      
      
      Signed-off-by: Haiwei Liu <carllhw@gmail.com>
  25. Nov 05, 2017
  26. Nov 03, 2017
  27. Nov 02, 2017