  1. Jun 07, 2018
  2. May 16, 2018
    • assert that number of pods on node does not exceed CIDR address range · c1bc4615
      Christopher J. Ruwe authored
      The number of pods on a given node is capped by the --max-pods=k
      directive. When the node's pod CIDR address space is exhausted, no
      more pods can be scheduled even if, from the --max-pods perspective,
      the node still has capacity.
      
      The special case of a pod that is scheduled with the node IP in the
      host network namespace is too "soft" to derive a guarantee from.
      
      Comparing kubelet_max_pods with kube_network_node_prefix, when the
      latter is given, makes it possible to assert that the pod limit fits
      the CIDR address space (a sketch follows below).
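      A minimal sketch of such an assertion as a kubespray-style Ansible
      task; the variable names kubelet_max_pods and kube_network_node_prefix
      come from the commit message, but the task itself and the exact bound
      (an IPv4 /prefix yields 2^(32 - prefix) addresses, minus the two
      conventionally unusable ones) are assumptions, not the commit's
      verbatim check:

        # Hypothetical task; not the verbatim check from the commit.
        - name: Assert that kubelet_max_pods fits the per-node pod CIDR
          assert:
            that:
              - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
            msg: >-
              kubelet_max_pods ({{ kubelet_max_pods }}) exceeds the usable
              addresses in a /{{ kube_network_node_prefix }} pod CIDR
          when: kube_network_node_prefix is defined

      For example, with a node prefix of 24 there are 2^8 - 2 = 254 usable
      pod addresses, so kubelet_max_pods should not exceed 254.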
  3. May 15, 2018
  4. May 14, 2018
  5. May 11, 2018
    • refactor vault role (#2733) · 07cc9819
      Matthew Mosesohn authored
      * Move front-proxy-client certs back to kube mount
      
      We want the same CA for all k8s certs
      
      * Refactor vault to use a third party module
      
      The module adds idempotency and reduces some of the repetitive
      logic in the vault role
      
      Requires ansible-modules-hashivault on the Ansible control node and
      hvac on the vault hosts themselves (see the usage sketch at the end
      of this message).
      
      Add upgrade test scenario
      Remove bootstrap-os tags from tasks
      
      * fix upgrade issues
      
      * improve unseal logic
      
      * specify ca and fix etcd check
      
      * Fix initialization check
      
      bump machine size
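      A minimal usage sketch of the third-party module flow described
      above, assuming ansible-modules-hashivault is installed on the
      control node (pip install ansible-modules-hashivault) and hvac on the
      vault hosts (pip install hvac). hashivault_status, hashivault_init
      and hashivault_unseal are modules from that package, but these tasks,
      the vault_unseal_keys variable, and the status fields referenced are
      hypothetical, not the refactored role's verbatim logic:

        - name: Check Vault status
          hashivault_status:
          register: vault_status
          # An uninitialized or sealed Vault is expected on first run.
          failed_when: false

        - name: Initialize Vault exactly once (idempotent via the status check)
          hashivault_init:
            secret_shares: 5
            secret_threshold: 3
          when: not (vault_status.status.initialized | default(false))

        - name: Unseal Vault
          hashivault_unseal:
            # Hypothetical variable holding the stored unseal keys.
            keys: "{{ vault_unseal_keys | join(' ') }}"
          when: vault_status.status.sealed | default(true)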
  6. May 08, 2018
  7. May 01, 2018
  8. Apr 30, 2018
  9. Apr 29, 2018
  10. Apr 27, 2018
  11. Apr 26, 2018
  12. Apr 24, 2018
  13. Apr 23, 2018
  14. Apr 22, 2018
  15. Apr 19, 2018
  16. Apr 12, 2018
  17. Apr 11, 2018
  18. Apr 10, 2018
  19. Apr 09, 2018
  20. Apr 07, 2018
  21. Apr 06, 2018
  22. Apr 04, 2018
  23. Apr 02, 2018
  24. Apr 01, 2018
    • Etcd cluster setup makeover · 86e3506a
      woopstar authored
      The current way to set up the etcd cluster is messy and buggy.
      
      - It checks whether the cluster is healthy before the cluster has even been created.
      - The unit files are started from handlers rather than in the tasks, which forces awkward "flush handlers" workarounds (see the sketch after this list).
      - join_member.yml is never used.
      - The etcd events cluster is not configured for kubeadm.
      - Duplicate runs between applying the role on etcd nodes and on k8s nodes are removed.
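      A minimal sketch of the handler problem called out above, under the
      assumption that later tasks need etcd running: a service started from
      a handler only starts when pending handlers run, so the play must
      force a flush mid-run, whereas starting the unit directly in a task
      avoids that. Illustrative only, not the commit's verbatim change:

        # Handler-based start: pending handlers normally run at the end of
        # the play, so a flush must be forced before anything talks to etcd.
        - meta: flush_handlers

        # Task-based start, which the makeover moves toward: no flush needed.
        - name: Ensure etcd unit is started
          systemd:
            name: etcd
            state: started
            enabled: true
            daemon_reload: true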
  25. Mar 31, 2018