- Oct 13, 2020
yelhouti authored
- Aug 27, 2020
Barry Melbourne authored
- May 29, 2020
jeanfabrice authored
- Apr 25, 2020
Joel Seguillon authored
* Bump to dashboard 2.0 rc6 with metrics scraper
* Fix a missing YAML separator that made the ReplicaSet complain about a missing ServiceAccount
* Remove an unwanted legacy gross hack forgotten earlier
* No namespace needed on the ClusterRoleBinding
* Bump to the 2.0.0 release
* Remove dashboard_metrics_scrapper_enabled
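For context on the separator fix: when several Kubernetes objects live in one manifest file, they must be separated by `---` lines, otherwise the documents merge and objects such as the ServiceAccount silently never get created. A minimal sketch with hypothetical names and an illustrative image tag:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-metrics-scraper   # hypothetical name
  namespace: kube-system
---                                 # the separator whose absence broke the ReplicaSet
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        app: dashboard-metrics-scraper
    spec:
      serviceAccountName: dashboard-metrics-scraper
      containers:
        - name: scraper
          image: kubernetesui/metrics-scraper:v1.0.0   # illustrative tag
```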
- Sep 10, 2019
Matthew Mosesohn authored
Cleaned up deprecated APIs: apps/v1beta1, apps/v1beta2, and extensions/v1beta1 for DaemonSets, Deployments, and ReplicaSets. Added a workaround for deploying helm using an incompatible deployment manifest.

Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
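For reference, this is the shape of the manifest change such a cleanup implies: a Deployment moves from the deprecated group to `apps/v1`, which also makes `spec.selector` mandatory. A sketch with hypothetical names:

```yaml
# Before (deprecated): apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example                # hypothetical name
spec:
  replicas: 1
  selector:                    # required under apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.17    # placeholder image
```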
- Aug 09, 2019
Danilo Riecken P. de Morais authored
- May 08, 2019
Andreas Krüger authored
* Minor cleanups
* Add comment in docs that nodelocaldns cache is enabled by default
- May 03, 2019
MarkusTeufelberger authored
- Apr 24, 2019
Matthew Mosesohn authored
We don't need to support upgrades from two-year-old installs, just from the last major version. Also changed most retried tasks to a 1s delay instead of longer.
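As a sketch of the retry knobs this touches (the task and command are hypothetical, not Kubespray's actual tasks):

```yaml
- name: Wait for the apiserver to answer    # hypothetical task
  command: kubectl get nodes
  register: result
  until: result.rc == 0
  retries: 10
  delay: 1    # one second between attempts instead of a longer pause
```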
- Apr 23, 2019
Maxime Guyot authored
This reverts commit f8fdc0cd.
andreyshestakov authored
- Apr 17, 2019
Maxime Guyot authored
- Apr 01, 2019
Matthew Mosesohn authored
Both kubedns and dnsmasq modes have long been unmaintained. We should run the dns_late steps at the end because sshd performs DNS lookups during the Ansible run and has a 2s timeout for each failed lookup when trying to reach coredns before it is ready.
- Mar 28, 2019
Stefan Prietl authored
This commit adapts the "Lay Down KubeDNS Template" task to use the static files moved by pull request [1].

[1] https://github.com/kubernetes-sigs/kubespray/pull/4341
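The switch from a rendered template to a shipped static file looks roughly like this in Ansible (filenames are hypothetical):

```yaml
# Before: re-rendered on every run even though nothing was dynamic
# - name: Lay Down KubeDNS Template
#   template:
#     src: kubedns-deploy.yml.j2
#     dest: "{{ kube_config_dir }}/kubedns-deploy.yml"

# After: the manifest ships verbatim from the role's files/ directory
- name: Lay Down KubeDNS Template
  copy:
    src: kubedns-deploy.yml    # hypothetical filename
    dest: "{{ kube_config_dir }}/kubedns-deploy.yml"
```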
- Mar 13, 2019
Matthew Mosesohn authored
* Move most coredns templates to static files. This should speed up the task slightly.
* yaml lint fixes
- Jan 29, 2019
Thomas Nys authored
* Set cluster DNS correctly in case of nodelocal dns cache
* Pass in cluster_ip based on dns mode
* Disable nodelocaldns by default
* Fix syntax error
* Fix syntax issue
* Add nodelocaldns ip to vars of node installation
* Change location of nodelocaldns_ip
* Try to remove newlines from jinja template
* Add debug for config file
* Move parameter logic outside of template
* Adapt templates after feedback
* Remove debugging
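A hedged sketch of the wiring described above: with the cache enabled, kubelet's cluster DNS should point at the node-local IP rather than the CoreDNS service IP (variable names match Kubespray's of that era as far as I can tell; treat them as assumptions):

```yaml
# group_vars sketch; values are illustrative
enable_nodelocaldns: false       # disabled by default per this change
nodelocaldns_ip: 169.254.25.10   # link-local address the per-node cache listens on

# kubelet's clusterDNS then follows the active mode:
# nodelocaldns_ip when the cache is enabled, otherwise the CoreDNS ClusterIP
```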
- Jan 28, 2019
Danny Kulchinsky authored
* Mount host /run/xtables.lock in nodelocaldns container
* Fix typo in nodelocaldns daemonset manifest yml
* Add prometheus scrape annotation, updateStrategy and reduce termination grace period
* Fix indentation
* Actually fix it...
* Bump k8s-dns-node-cache tag to 1.15.1 (fixes https://github.com/kubernetes/dns/issues/282)
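The xtables.lock mount in the first bullet serializes iptables access between the node cache and the host; in the DaemonSet it looks roughly like this (an excerpt-style sketch, not the exact Kubespray manifest):

```yaml
# nodelocaldns DaemonSet pod spec excerpt (sketch)
spec:
  containers:
    - name: node-cache
      image: k8s.gcr.io/k8s-dns-node-cache:1.15.1
      volumeMounts:
        - name: xtables-lock
          mountPath: /run/xtables.lock
  volumes:
    - name: xtables-lock
      hostPath:
        path: /run/xtables.lock
        type: FileOrCreate    # create the lock file on the host if missing
```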
- Dec 11, 2018
Thomas Nys authored
* Add support for running a nodelocal dns cache

  After encountering DNS issues in a cluster I was recently working on, I noticed Kubernetes 1.13 introduced support for running a nodelocal dns cache. I believe this can be useful to more people.

  https://github.com/kubernetes/kubernetes/commit/73b548db06c5e293533344c5b6171e955eac9ff1
  https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md

* Add requested changes
* Add additional requested changes + documentation
* Add requested changes after review
* Replace incorrect variable
- Dec 06, 2018
Andreas Krüger authored
* Remove non-kubeadm deployment
* More cleanup
* More cleanup
* More cleanup
* More cleanup
* Fix gitlab
* Try stopping gce first before absent to make the delete process work
* More cleanup
* Fix bug with checking if kubeadm has already run
* Fix bug with checking if kubeadm has already run
* More fixes
* Fix test
* Fix
* Fix gitlab checkout until kubespray 2.8 is on quay
* Fixed
* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path
* Re-add secret checking
* Do gitlab checks from v2.7.0 test upgrade path to 2.8.0
* Fix typo
* Fix CI jobs to kubeadm again. Fix broken hyperkube path
* Fix gitlab
* Fix rotate tokens
* More fixes
* More fixes
* Fix tokens
- Dec 04, 2018
Chad Swenson authored
Added a loop_control label to a few tasks that flood our logs.
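For context, the Ansible feature referred to: `loop_control`'s `label` replaces the full item (often a large dict) with a short string in log output. A sketch with hypothetical items:

```yaml
- name: Lay down addon manifests    # hypothetical task
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.file }}"
  loop:
    - { name: coredns, file: coredns-deployment.yml }
    - { name: dns-autoscaler, file: dns-autoscaler.yml }
  loop_control:
    label: "{{ item.name }}"    # log only the short name, not the whole dict
```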
- Nov 15, 2018
Andreas Krüger authored
* Update DNS Autoscaler to latest
* Update CoreDNS to latest
* Update KubeDNS to latest
* Add KubeDNS config map
* Fix filename
* Add missing selector to DNS Autoscaler
* Add missing tolerations
Andreas Krüger authored
* Enable AutoScaler for CoreDNS
* Only use one template for dns autoscaler
* Rename a few variables for replicas and minimum pods
* Rename a few variables for replicas and minimum pods
* Remove replicas to make autoscale work
* Cleanup kubedns-autoscaler as it has been renamed
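The autoscaler in question is the cluster-proportional-autoscaler, whose scaling behavior is driven by a ConfigMap like the following sketch (values illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 2,
      "preventSinglePointFailure": true
    }
```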
- Nov 14, 2018
Ryler Hockenbury authored
* Revert netchecker image and version
* Create namespace for netchecker
* Remove extra slashes
- Oct 17, 2018
Erwan Miran authored
* failed
* version_compare
* succeeded
* skipped
* success
* version_compare becomes version since ansible 2.5
* ansible minimal version updated in doc and spec
* last version_compare
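The rename being tracked here: Ansible 2.5 deprecated `version_compare` (and the bare `failed`/`succeeded`/`skipped`/`success` filter spellings) in favor of the `version` test. A sketch:

```yaml
# Old spelling, deprecated since Ansible 2.5:
#   when: ansible_distribution_version | version_compare('18.04', '>=')
- name: Run only on new enough distros    # hypothetical task
  debug:
    msg: "new enough"
  when: ansible_distribution_version is version('18.04', '>=')
```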
- Oct 08, 2018
Matthew Mosesohn authored
- Aug 22, 2018
Erwan Miran authored
Wong Hoi Sing Edison authored
- Aug 09, 2018
Erwan Miran authored
- Aug 07, 2018
Mathieu Herbert authored
- Mar 30, 2018
Matthew Mosesohn authored
Kubernetes creates this namespace automatically, so there is no need for kubespray to manage it.
- Mar 29, 2018
Kuldip Madnani authored
* Added retries in pre-upgrade.yml and retries while applying kube-dns.yml
* Removed trailing spaces
- Mar 17, 2018
woopstar authored
* Added CoreDNS to downloads
* Updated with labels. Should now work without RBAC too
* Fix DNS settings on hosts
* Rename CoreDNS service from kube-dns to coredns
* Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html
* Updated docs with CoreDNS info
* Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed
* Added a secondary deployment and secondary service IP. This is to mitigate DNS timeouts and create high resiliency against failures. See discussion at https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806
* Set dns list correct. Thanks to @whereismyjetpack
* Only download KubeDNS or CoreDNS if selected
* Move dns cleanup to its own file and import tasks based on dns mode
* Fix install of KubeDNS when dnsmasq_kubedns mode is selected
* Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment.
* Removed rotate option in resolv.conf
* Run DNS manifests for CoreDNS and KubeDNS
* Set skydns servers on dual stack deployment
* Use only one template for CoreDNS dual deployment
* Set correct cluster ip for the dns server
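A sketch of the mode switch this message describes; `dns_mode` and the listed values match Kubespray's options of that era, while the replica variable name is hypothetical:

```yaml
# group_vars sketch: choose the in-cluster DNS flavor
dns_mode: coredns       # alternatives at the time: coredns_dual, kubedns, dnsmasq_kubedns
coredns_replicas: 2     # hypothetical name for the replica-count variable added here
```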
- Feb 05, 2018
Wong Hoi Sing Edison authored
Wong Hoi Sing Edison authored
- Jan 29, 2018
Matthew Mosesohn authored
import_tasks will consume far less memory, so it should be used whenever it is compatible.
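The distinction behind this: `import_tasks` is resolved statically at parse time, while `include_tasks` creates task objects per host at runtime, which costs memory on large inventories. A sketch:

```yaml
# Static import: parsed once up front, cheaper in memory
- import_tasks: coredns.yml            # hypothetical filename

# Dynamic include: only needed when the filename or loop
# must be evaluated at runtime
- include_tasks: "{{ dns_mode }}.yml"
```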
- Dec 05, 2017
Chad Swenson authored
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). Rework of #1937 with kubeadm support.

Also fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key.
- Nov 15, 2017
Chad Swenson authored
Chad Swenson authored
This version required changing the previous access model for dashboard completely, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL
* Can access from https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login where you will be prompted for credentials
* Or you can run `kubectl proxy` from your local machine to access dashboard in your browser from: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token; details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
- Nov 14, 2017
Matthew Mosesohn authored
- Nov 07, 2017
Chad Swenson authored
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow the liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key, unfortunately)
5. Enable RBAC. Anonymous requests are in the `system:unauthenticated` group, which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check. Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go, since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this; it adds a new dependency on basic auth, which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
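A sketch of the probe change from point 3 above, under the stated settings (`kube_api_anonymous_auth: true`, RBAC enabled); the manifest details are illustrative, not Kubespray's verbatim:

```yaml
# kube-apiserver static pod excerpt (sketch)
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443       # kube_apiserver_port (HTTPS) instead of the insecure port
    scheme: HTTPS    # the probe is anonymous, so anonymous-auth plus an RBAC
                     # rule for system:unauthenticated must allow /healthz
  initialDelaySeconds: 15
  timeoutSeconds: 15
```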