  1. Jun 06, 2022
  2. Apr 07, 2022
  3. Feb 15, 2022
  4. Feb 02, 2022
    • Fix kubelet_kubelet_cgroups_cgroupfs (#8500) · aed187e5
      Ilya Margolin authored
      If kubelet is run with systemd (as it always is when using kubespray),
      it starts in systemd's /system.slice/kubelet.service cgroup.
      
      This commit prevents the creation and use of a second, unrelated cgroup.
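      A minimal sketch of the resulting configuration, assuming a
      kubespray-style group_vars entry (the variable name is inferred from
      the commit title; the actual template wiring is not shown):

        # Keep kubelet in the cgroup systemd already created for the unit,
        # instead of letting it create a second, unrelated cgroup.
        kubelet_kubelet_cgroups: "/system.slice/kubelet.service"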
  5. Jan 24, 2022
  6. Nov 05, 2021
  7. Nov 26, 2020
  8. Sep 23, 2020
  9. Sep 04, 2020
  10. Aug 13, 2020
  11. Jul 13, 2020
  12. Jun 26, 2020
  13. Apr 07, 2020
  14. Mar 31, 2020
    • Azure vmss - kubelet: failed to get instance ID from cloud provider: instance not found #5824 (#5855) · f8ad44a9
      Vinayaka V Ladwa authored
      
      * kubernetes-sigs-kubespray #5824
      
      Added support for nodes which are part of Virtual Machine Scale Sets (VMSS).
      
      * kubernetes-sigs-kubespray #5824
      
      * kubernetes-sigs-kubespray #5824
      
      Added comments and updated Azure docs.
      
      * kubernetes-sigs-kubespray #5824
      
      Added supported values comments for "azure_vmtype" in azure.yml
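      A hedged sketch of the resulting azure.yml setting (the variable name
      comes from the commit message; the value set is an assumption that
      mirrors the Azure cloud provider's vmType field):

        # assumed supported values: standard (default), vmss
        azure_vmtype: vmss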
  15. Feb 15, 2020
  16. Dec 03, 2019
  17. Oct 30, 2019
  18. Sep 10, 2019
    • Add support for k8s v1.16.0-beta.2 (#5148) · 27ec548b
      Matthew Mosesohn authored
      Cleaned up deprecated APIs:
      apps/v1beta1
      apps/v1beta2
      extensions/v1beta1 for ds, deploy, rs
      
      Add a workaround for deploying helm using an incompatible
      deployment manifest.
      Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
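      For illustration, the kind of manifest change this cleanup implies
      (a generic example, not a kubespray template): a Deployment moves off
      extensions/v1beta1 to apps/v1, where spec.selector becomes mandatory.

        apiVersion: apps/v1        # was: extensions/v1beta1
        kind: Deployment
        metadata:
          name: example
        spec:
          replicas: 1
          selector:                # required in apps/v1
            matchLabels:
              app: example
          template:
            metadata:
              labels:
                app: example
            spec:
              containers:
                - name: example
                  image: nginx:1.16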
  19. Sep 05, 2019
  20. Jul 30, 2019
  21. Jul 02, 2019
    • Use K8s 1.15 (#4905) · f2b8a361
      okamototk authored
      * Use K8s 1.15
      
      * Use Kubernetes 1.15 and use kubeadm.k8s.io/v1beta2 for
        InitConfiguration.
      * bump to v1.15.0
      
      * Remove k8s 1.13 checksums.
      
      * Update README kubernetes version 1.15.0.
      
      * Update metrics server 0.3.3 for k8s 1.15
      
      * Remove less than k8s 1.14 related code
      
      * Use kubeadm with --upload-certs instead of --experimental-upload-certs due to deprecation
      
      * Update dnsautoscaler 1.6.0
      
      * Skip certificateKey if it's not defined
      
      * Add kubeadm-controlplane.v2beta2 for k8s 1.15 or later
      
      * Support kubeadm control plane for k8s 1.15
      
      * Update sonobuoy version 0.15.0 for k8s 1.15
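      A minimal sketch of the kubeadm.k8s.io/v1beta2 configuration this
      series moves to (values are illustrative; certificateKey is the field
      that is skipped when undefined):

        apiVersion: kubeadm.k8s.io/v1beta2
        kind: InitConfiguration
        certificateKey: "<hex key>"   # omitted entirely when not defined
        ---
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: ClusterConfiguration
        kubernetesVersion: v1.15.0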
  22. May 19, 2019
  23. Apr 24, 2019
    • support azure loadbalancer standard sku (#4150) (#4476) · f47a6662
      Vincent Gramer authored
      Add support for the following properties in azure-credential-check.yml:
        - azure_loadbalancer_sku: SKU of the Load Balancer and Public IP. Candidate values are: basic and standard.
        - azure_exclude_master_from_standard_lb: excludes master nodes from the standard load balancer.
        - azure_disable_outbound_snat: disables outbound SNAT for public load balancer rules.
        - useInstanceMetadata: use the instance metadata service where possible.
        - azure_primary_availability_set: (optional) the name of the availability set that should be used as the load balancer backend.
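      A hedged example of how these might look in the inventory vars
      (names from the list above; values purely illustrative):

        azure_loadbalancer_sku: standard
        azure_exclude_master_from_standard_lb: true
        azure_disable_outbound_snat: false
        useInstanceMetadata: true
        azure_primary_availability_set: "my-availability-set"  # optional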
  24. Apr 10, 2019
  25. Apr 08, 2019
  26. Mar 05, 2019
  27. Feb 25, 2019
  28. Jan 03, 2019
    • Fix kube-proxy configuration for kubeadm (#3958) · 80379f6c
      Chad Swenson authored
      - Creates and defaults an Ansible variable for every configuration option in the `kubeproxy.config.k8s.io/v1alpha1` type spec
        - Fixes vars that were orphaned by the removal of non-kubeadm support
        - Fixes previously hardcoded kubeadm values
      - Introduces a `main` directory for role default files per component (requires ansible 2.6.0+)
        - Split out just `kube-proxy.yml` in this first effort
      - Removes the kube-proxy server field patch task
      
      We should continue to pull out other components from `main.yml` into their own defaults files, as I did here for `defaults/main/kube-proxy.yml`. I hope for, and will need, others to join me in this refactoring across the project until each component config template has a matching role defaults file, with shared defaults in `kubespray-defaults` or `downloads`.
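      As a sketch of the pattern (variable names are assumptions modeled on
      kubeproxy.config.k8s.io/v1alpha1 fields, not the verbatim defaults
      file), defaults/main/kube-proxy.yml carries one var per config option:

        # roles/.../defaults/main/kube-proxy.yml (sketch)
        kube_proxy_mode: iptables                          # KubeProxyConfiguration.mode
        kube_proxy_metrics_bind_address: 127.0.0.1:10249   # .metricsBindAddress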
  29. Dec 07, 2018
  30. Dec 06, 2018
    • Remove non-kubeadm deployment (#3811) · ddffdb63
      Andreas Krüger authored
      * Remove non-kubeadm deployment
      
      * More cleanup
      
      * More cleanup
      
      * More cleanup
      
      * More cleanup
      
      * Fix gitlab
      
      * Try stopping GCE first before absent to make the delete process work
      
      * More cleanup
      
      * Fix bug with checking if kubeadm has already run
      
      * Fix bug with checking if kubeadm has already run
      
      * More fixes
      
      * Fix test
      
      * fix
      
      * Fix gitlab checkout until kubespray 2.8 is on quay
      
      * Fixed
      
      * Add upgrade path from non-kubeadm to kubeadm. Revert ssl path
      
      * Re-add secret checking
      
      * Do gitlab checks from v2.7.0 test upgrade path to 2.8.0
      
      * fix typo
      
      * Fix CI jobs to use kubeadm again. Fix broken hyperkube path
      
      * Fix gitlab
      
      * Fix rotate tokens
      
      * More fixes
      
      * More fixes
      
      * Fix tokens
  31. Nov 27, 2018
  32. Nov 10, 2018
  33. Oct 12, 2018
  34. Oct 11, 2018
  35. Sep 19, 2018
  36. Sep 03, 2018
  37. Aug 28, 2018
  38. Jul 19, 2018
  39. Jun 28, 2018
  40. May 16, 2018
    • assert that number of pods on node does not exceed CIDR address range · c1bc4615
      Christopher J. Ruwe authored
      The number of pods on a given node is determined by the --max-pods=k
      directive. When the address space is exhausted, no more pods can be
      scheduled even if, from the --max-pods perspective, the node still
      has capacity.
      
      The special case of a pod that is scheduled and uses the node IP in
      the host network namespace is too "soft" to derive a guarantee from.
      
      Comparing kubelet_max_pods with kube_network_node_prefix, when the
      latter is given, allows asserting that the pod limit matches the CIDR
      address space.
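      A minimal sketch of such an assertion as an Ansible task (variable
      names from the message; the exact task in the role may differ):

        - name: Ensure kubelet_max_pods fits the per-node pod CIDR
          assert:
            that:
              # a /prefix subnet has 2^(32 - prefix) addresses; reserve
              # two for the network and broadcast addresses
              - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
          when: kube_network_node_prefix is defined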