  1. Nov 23, 2017
      Defaults for apiserver_loadbalancer_domain_name (#1993) · 8aafe643
      Bogdan Dobrelya authored
      
      
      * Defaults for apiserver_loadbalancer_domain_name
      
      When loadbalancer_apiserver is defined, use the
      apiserver_loadbalancer_domain_name with a given default value.
      
      Fix inconsistencies between checking whether apiserver_loadbalancer_domain_name
      is defined and, at the same time, using it with a provided default value.
      
      Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
      
      * Define defaults for LB modes in common defaults
      
      Adjust the defaults for apiserver_loadbalancer_domain_name and
      loadbalancer_apiserver_localhost to come from a single source, which is
      kubespray-defaults. This removes some confusion and simplifies the code.
      
      Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
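      
      For illustration, a minimal sketch of the consolidated defaults described above, as they might appear in kubespray-defaults (the variable names come from this message; the values and file path are assumptions, not the actual Kubespray defaults):
      
      ```yaml
      # roles/kubespray-defaults/defaults/main.yml (path assumed)
      # Single source for the LB-mode defaults; values are illustrative.
      # Domain name used for the apiserver when loadbalancer_apiserver is defined.
      apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
      # Whether each node reaches the apiserver through a local proxy
      # instead of an external load balancer.
      loadbalancer_apiserver_localhost: true
      ```
      
      With both defaults in one place, consumers can reference the variables directly instead of guarding every use with `is defined` checks.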
  2. Nov 14, 2017
  3. Nov 08, 2017
      Master component and kubelet container upgrade fixes · e9f795c5
      Chad Swenson authored
      * Fixes an issue where the apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets changed. This occurred when a replaced kubelet didn't reconcile the new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail.
      * Improves transitions from the kubelet container to the host kubelet by preventing issues where the kubelet container reappeared during the deployment
  4. Nov 07, 2017
      Support for disabling apiserver insecure port · 0c7e1889
      Chad Swenson authored
      This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:
      
      1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
      2. Add apiserver client cert/key to the "Wait for apiserver up" checks
      3. Update apiserver liveness probe to use the HTTPS port
      4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
      5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.
      
      Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.
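      
      To make the coupling in (3) and (4) concrete, here is a minimal sketch (not the exact Kubespray template) of a liveness probe that checks /healthz over the secure port; the port value is illustrative:
      
      ```yaml
      # Fragment of a kube-apiserver static pod manifest (illustrative values).
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 6443      # kube_apiserver_port, the HTTPS port
          scheme: HTTPS   # previously probed the insecure HTTP port
        initialDelaySeconds: 30
        timeoutSeconds: 10
      ```
      
      Because the kubelet's probe sends no client certificate, the request arrives anonymously, which is why anonymous auth and an authorizer that admits it must both be enabled.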
      
      Options:
      
      1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
      2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
      3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)
      
      Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
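      
      For reference, an RBAC rule equivalent to the default binding described in point 5 would look roughly like this (names are illustrative; the built-in ClusterRoleBindings vary by Kubernetes version):
      
      ```yaml
      # Grants unauthenticated clients read access to /healthz only.
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1beta1  # v1beta1 in the k8s 1.8 era
      metadata:
        name: healthz-reader
      rules:
        - nonResourceURLs: ["/healthz"]
          verbs: ["get"]
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1beta1
      metadata:
        name: healthz-reader
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: healthz-reader
      subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:unauthenticated
      ```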
  5. Oct 31, 2017
  6. Oct 27, 2017
  7. Oct 26, 2017
  8. Oct 24, 2017
  9. Oct 20, 2017
  10. Oct 19, 2017
  11. Oct 18, 2017
  12. Oct 17, 2017
  13. Oct 15, 2017
  14. Oct 13, 2017
  15. Oct 12, 2017
  16. Oct 05, 2017
  17. Oct 04, 2017
  18. Oct 03, 2017
  19. Oct 01, 2017
  20. Sep 27, 2017
  21. Sep 26, 2017
      when and run_once are reduplicative (#1694) · 477afa87
      tanshanshan authored
      Upgrade to kubeadm (#1667) · bd272e0b
      Matthew Mosesohn authored
      * Enable upgrade to kubeadm
      
      * fix kubedns upgrade
      
      * try upgrade route
      
      * use init/upgrade strategy for kubeadm and ignore kubedns svc (strategy sketched after this list)
      
      * Use bin_dir for kubeadm
      
      * delete more secrets
      
      * fix waiting for terminating pods
      
      * Manually enforce kube-proxy for kubeadm deploy
      
      * remove proxy. update to kubeadm 1.8.0rc1
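      
      A hypothetical Ansible sketch of the init/upgrade strategy referenced above (task names, file paths, and all variables other than `bin_dir` are assumptions, not the actual Kubespray tasks):
      
      ```yaml
      # Run `kubeadm init` on a fresh deploy; `kubeadm upgrade apply` when
      # the cluster already exists. kubeadm_already_run is a hypothetical
      # stat result for a marker file such as the admin kubeconfig.
      - name: kubeadm | Initialize first master
        command: "{{ bin_dir }}/kubeadm init --config={{ kube_config_dir }}/kubeadm-config.yaml"
        when: not kubeadm_already_run.stat.exists
      
      - name: kubeadm | Upgrade first master
        command: "{{ bin_dir }}/kubeadm upgrade apply -y v{{ kube_version }}"
        when: kubeadm_already_run.stat.exists
      ```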
  22. Sep 25, 2017
  23. Sep 20, 2017
  24. Sep 16, 2017
  25. Sep 15, 2017
  26. Sep 13, 2017
      kubeadm support (#1631) · 67447260
      Matthew Mosesohn authored
      * kubeadm support
      
      * move k8s master to a subtask
      * disable k8s secrets when using kubeadm
      * fix etcd cert serial var
      * move simple auth users to master role
      * make a kubeadm-specific env file for kubelet
      * add non-ha CI job
      
      * change ci boolean vars to json format
      
      * fixup
      
      * Update create-gce.yml
      
      * Update create-gce.yml
      
      * Update create-gce.yml
  27. Sep 10, 2017
  28. Sep 01, 2017
  29. Aug 31, 2017
  30. Aug 25, 2017
      Initial version of Flannel using CNI (#1486) · a39e78d4
      Chad Swenson authored
      * Updates Controller Manager/Kubelet with Flannel's required configuration for CNI (sketched after this list)
      * Removes old Flannel installation
      * Installs CNI-enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with the portmap plugin) on the host
      * Uses RBAC if enabled
      * Fixes an issue that could occur if br_netfilter is not a module and the net.bridge.bridge-nf-call-iptables sysctl was not set
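      
      A sketch of the configuration shift described in the first bullet above (the flags are real kubelet/controller-manager flags; the variable names are illustrative, not the actual Kubespray ones):
      
      ```yaml
      # Kubelet delegates pod networking to the CNI plugins on disk.
      kubelet_network_args: >-
        --network-plugin=cni
        --cni-conf-dir=/etc/cni/net.d
        --cni-bin-dir=/opt/cni/bin
      # Controller manager hands out per-node pod CIDRs, which the Flannel
      # DaemonSet consumes in place of the old host flanneld installation.
      controller_manager_network_args: >-
        --allocate-node-cidrs=true
        --cluster-cidr={{ kube_pods_subnet }}
      ```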