- Mar 29, 2018
Kuldip Madnani authored
* Added retries in pre-upgrade.yml and while applying kube-dns.yml (sketched below)
* Removed trailing spaces
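For context, Ansible retries follow the `retries`/`until` pattern; a minimal sketch of such a retry loop, assuming a kubectl apply task (the task name, command, and counts are illustrative, not the actual pre-upgrade.yml content):

```yaml
# Hypothetical sketch of an Ansible retry loop; the real task in
# pre-upgrade.yml may differ in name, command, and counts.
- name: Apply kube-dns manifest, retrying on transient apiserver errors
  command: "{{ bin_dir }}/kubectl apply -f /etc/kubernetes/kube-dns.yml"
  register: apply_result
  until: apply_result.rc == 0
  retries: 4
  delay: 5
```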
- Mar 17, 2018
woopstar authored
* Added CoreDNS to downloads
* Updated with labels. Should now work without RBAC too
* Fix DNS settings on hosts
* Rename CoreDNS service from kube-dns to coredns
* Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html
* Updated docs with CoreDNS info
* Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed
* Added a secondary deployment and secondary service IP to mitigate DNS timeouts and create high resiliency against failures (see the sketch after this list). See discussion at https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806
* Set dns list correct. Thanks to @whereismyjetpack
* Only download KubeDNS or CoreDNS if selected
* Move dns cleanup to its own file and import tasks based on dns mode
* Fix install of KubeDNS when dnsmasq_kubedns mode is selected
* Add new dns option coredns_dual for dual-stack deployment. Added variable to configure replicas deployed. Updated docs for dual-stack deployment.
* Removed rotate option in resolv.conf
* Run DNS manifests for CoreDNS and KubeDNS
* Set skydns servers on dual-stack deployment
* Use only one template for CoreDNS dual deployment
* Set correct cluster IP for the dns server
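A minimal sketch of the dual-deployment idea described above: two Services backed by two CoreDNS deployments on distinct cluster IPs, so resolv.conf can list both nameservers and survive one failing. Names and IPs here are illustrative assumptions, not the template's actual values:

```yaml
# Illustrative only: two DNS Services on separate cluster IPs.
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
spec:
  clusterIP: 10.233.0.3        # assumed primary DNS service IP
  selector:
    k8s-app: coredns
  ports:
    - { name: dns, port: 53, protocol: UDP }
---
apiVersion: v1
kind: Service
metadata:
  name: coredns-secondary
  namespace: kube-system
spec:
  clusterIP: 10.233.0.4        # assumed secondary DNS service IP
  selector:
    k8s-app: coredns-secondary
  ports:
    - { name: dns, port: 53, protocol: UDP }
```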
- Feb 28, 2018
Dmitry Vlasov authored
- Feb 05, 2018
Wong Hoi Sing Edison authored
Wong Hoi Sing Edison authored
- Jan 30, 2018
RongZhang authored
Bump kube-dns to 1.14.8
- Jan 29, 2018
Matthew Mosesohn authored
import_tasks is preprocessed statically when the playbook is parsed, so it consumes far less memory than the dynamic include_tasks; it should be used whenever it is compatible, i.e. whenever the inclusion does not depend on runtime facts.
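A minimal illustration of the two forms (file names are hypothetical):

```yaml
# Static: resolved once at playbook parse time; cheap on memory.
- import_tasks: configure.yml

# Dynamic: evaluated per host at runtime; needed only when the file
# name or the decision to include depends on runtime facts.
- include_tasks: "{{ ansible_os_family | lower }}.yml"
```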
- Jan 10, 2018
rong.zhang authored
- Jan 04, 2018
rong.zhang authored
- Dec 22, 2017
rong.zhang authored
- Dec 18, 2017
rong.zhang authored
- Dec 13, 2017
rong.zhang authored
rong.zhang authored
Update dependencies to be compatible with Kubernetes v1.8
- Dec 05, 2017
Chad Swenson authored
* This allows `kube_apiserver_insecure_port` to be set to 0 (disabled); see the sketch below
* Rework of #1937 with kubeadm support
* Also fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
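A hedged sketch of what disabling the insecure port looks like in inventory variables (variable names from the commit text; the port values are assumptions):

```yaml
# Disable the apiserver's plain-HTTP listener entirely.
kube_apiserver_insecure_port: 0
# All access, including health checks, then goes over HTTPS.
kube_apiserver_port: 6443
```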
- Nov 15, 2017
Chad Swenson authored
Chad Swenson authored
This version required changing the previous access model for dashboard completely, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL:
  * Can access from https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login where you will be prompted for credentials
  * Or you can run `kubectl proxy` from your local machine to access dashboard in your browser at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token; details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
- Nov 14, 2017
Matthew Mosesohn authored
- Nov 07, 2017
Chad Swenson authored
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update the apiserver liveness probe to use HTTPS ports (see the sketch below)
4. Set `kube_api_anonymous_auth` to true to allow the liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use a client cert/key, unfortunately)
5. Enable RBAC. Anonymous requests are in the `system:unauthenticated` group, which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check. Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go, since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (it won't be possible to cover every case without a guaranteed authorizer for the secure port).
3. Use basic auth headers for the liveness probe (I really don't like this: it adds a new dependency on basic auth, which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest).

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
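For reference, change 3 above amounts to pointing the probe at the secure port; a minimal sketch of an HTTPS livenessProbe (port and path per the commit text, other fields illustrative):

```yaml
# Sketch of an apiserver livenessProbe over HTTPS. Anonymous auth plus
# RBAC's default ClusterRoleBinding is what lets this unauthenticated
# request reach /healthz.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    port: 6443        # kube_apiserver_port (HTTPS)
    path: /healthz
    scheme: HTTPS
  initialDelaySeconds: 15
  timeoutSeconds: 15
```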
- Oct 30, 2017
Andrew Greenwood authored
- Oct 26, 2017
Matthew Mosesohn authored
This should be done after the admin kubeconfig is set and before network plugins are up.
- Oct 24, 2017
Matthew Mosesohn authored
This is to work around #1856, which can occur when the kubelet hostname and the resolvable hostname (or cloud instance name) do not match.
- Oct 23, 2017
pmontanari authored
Match kubedns_version with roles/download/defaults/main.yml:kubedns_version: 1.14.5
- Oct 18, 2017
Matthew Mosesohn authored
- Oct 17, 2017
Aivars Sterns authored
刘旭 authored
- Oct 05, 2017
Aivars Sterns authored
- Sep 26, 2017
Matthew Mosesohn authored
* Enable upgrade to kubeadm
* fix kubedns upgrade
* try upgrade route
* use init/upgrade strategy for kubeadm and ignore kubedns svc
* Use bin_dir for kubeadm
* delete more secrets
* fix waiting for terminating pods
* Manually enforce kube-proxy for kubeadm deploy
* remove proxy; update to kubeadm 1.8.0rc1
- Sep 15, 2017
Matthew Mosesohn authored
* fix apply for netchecker upgrade and graceful upgrade
* Speed up daemonset upgrades. Make check wait for ds upgrades.
- Sep 13, 2017
Matthew Mosesohn authored
* kubeadm support
* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
* change ci boolean vars to json format
* fixup
* Update create-gce.yml
- Sep 10, 2017
Matthew Mosesohn authored
Matthew Mosesohn authored
* Fix netchecker update side effect

  kubectl apply should only be used on resources created with kubectl apply. To work around this, we apply the old manifest before upgrading it (see the sketch below).

* Update 030_check-network.yml
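The workaround reads roughly like this as Ansible tasks (a sketch; the task names and manifest paths are assumptions, not the playbook's actual contents):

```yaml
# Re-own the resource for `kubectl apply` by applying the old manifest
# first, then apply the updated one on top.
- name: Apply old netchecker manifest so kubectl apply owns the resource
  command: "{{ bin_dir }}/kubectl apply -f /etc/kubernetes/netchecker-old.yml"

- name: Apply updated netchecker manifest
  command: "{{ bin_dir }}/kubectl apply -f /etc/kubernetes/netchecker.yml"
```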
Matthew Mosesohn authored
* Add kube dashboard, enabled by default. Also add rbac role for kube user
* Update main.yml
- Sep 05, 2017
Matthew Mosesohn authored
* Use kubectl apply instead of create/replace. Disable checks for existing resources to speed up execution.
* Fix non-rbac deployment of resources as a list
* Fix autoscaler tolerations field
* set all kube resources to state=latest (see the sketch below)
* Update netchecker and weave
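Assuming kubespray's bundled `kube` Ansible module, a resource task with `state=latest` (which, per this commit, renders as `kubectl apply`) might look like this; the names, variables, and paths are illustrative:

```yaml
# A minimal sketch of a resource task using state=latest.
- name: Create netchecker server deployment
  kube:
    name: "netchecker-server"                 # assumed resource name
    namespace: "{{ system_namespace }}"       # assumed variable
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "deploy"
    filename: "/etc/kubernetes/netchecker-server-deployment.yml"
    state: latest                             # apply instead of create/replace
```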
- Sep 04, 2017
Matthew Mosesohn authored
Canal will be covered by a separate PR
Matthew Mosesohn authored
* Refactored how rbac_enabled is set
* Added RBAC to ubuntu-canal-ha CI job
* Added rbac for calico policy controller
- Aug 24, 2017
Matthew Mosesohn authored
Added toleration to DNS, netchecker, fluentd, canal, and calico policy. Also small fixes to make yamllint pass.
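The toleration in question is typically for the master-node taint; a hedged sketch of the pod-spec fragment these add-on manifests gained (the exact key and effect in kubespray's templates may differ):

```yaml
tolerations:
  # Allow scheduling onto master nodes despite their NoSchedule taint.
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```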
Brad Beam authored
* Adding yaml linter to ci check
* Minor linting fixes from yamllint
* Changing CI to install python pkgs from requirements.txt
  - adding in a secondary requirements.txt for tests
  - moving yamllint to tests requirements
- Aug 18, 2017
Matthew Mosesohn authored
* Bump tag for upgrade CI, fix netchecker upgrade

  netchecker-server was changed from a pod to a deployment, so we need an upgrade hook for it. CI now uses v2.1.1 as a basis for upgrade.

* Fix upgrades for certs from non-rbac to rbac
- Aug 14, 2017
Brad Beam authored
- Jul 17, 2017
jwfang authored