- Nov 29, 2017
riverzhang authored
Delete helm home
- Nov 23, 2017
Bogdan Dobrelya authored
* Defaults for apiserver_loadbalancer_domain_name: when loadbalancer_apiserver is defined, use apiserver_loadbalancer_domain_name with a given default value. Fixes inconsistencies where apiserver_loadbalancer_domain_name was both checked for being defined and given a default value at once.
* Define defaults for LB modes in common defaults: adjust the defaults for apiserver_loadbalancer_domain_name and loadbalancer_apiserver_localhost so they come from a single source, kubespray-defaults. Removes some confusion and simplifies the code. (Sketched below.)
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
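A minimal sketch of what such a single-source default can look like in kubespray-defaults; the fallback hostname and the localhost toggle shown here are illustrative assumptions, not the verbatim change:

```yaml
# Hypothetical kubespray-defaults excerpt; roles read these names instead
# of re-deriving them locally.
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
# Use the localhost LB mode only when no external LB is configured.
loadbalancer_apiserver_localhost: "{{ loadbalancer_apiserver is not defined }}"
```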
- Nov 16, 2017
Spencer Smith authored
- Nov 15, 2017
Matthew Mosesohn authored
Chad Swenson authored
- Nov 14, 2017
Matthew Mosesohn authored
chenhonggc authored
- Nov 13, 2017
Aivars Sterns authored
neith00 authored
* Add a mount to the kubelet container to enable rbd mounts (see the sketch below)
* Fix a conditional variable name
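A heavily hedged sketch of that kind of change: the variable name and exact host paths below are assumptions; the point is that a containerized kubelet needs host device and kernel-module paths visible in order to map Ceph RBD volumes:

```yaml
# Illustrative extra bind mounts for a containerized kubelet; kubespray's
# actual mount list and variable name may differ.
kubelet_extra_mounts:
  - "/dev:/dev"                     # rbd device nodes created by the kernel
  - "/lib/modules:/lib/modules:ro"  # lets the kubelet load the rbd module
```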
Stanislav Makar authored
Closes: #1967
Hyunsun Moon authored
Günther Grill authored
- Nov 12, 2017
Maxim Krasilnikov authored
- Nov 09, 2017
Spencer Smith authored
Master component and kubelet container upgrade fixes
Brad Beam authored
Support for disabling apiserver insecure port
- Nov 08, 2017
Spencer Smith authored
provide environment for rkt trust and run with etcd
Spencer Smith authored
Chad Swenson authored
* Fixes an issue where the apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets changed. This occurred when a replaced kubelet didn't reconcile the new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 to k8s 1.8 to fail.
* Improves transitions from the kubelet container to the host kubelet by preventing the kubelet container from reappearing during the deployment.
- Nov 06, 2017
Chad Swenson authored
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS).
2. Add the apiserver client cert/key to the "Wait for apiserver up" checks.
3. Update the apiserver liveness probe to use the HTTPS port.
4. Set `kube_api_anonymous_auth` to true to allow the liveness probe to hit the apiserver's /healthz over HTTPS (livenessProbes can't use a client cert/key, unfortunately).
5. Enable RBAC. Anonymous requests are in the `system:unauthenticated` group, which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings, so I also added a new settings check. Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it, and it may be the best way to go, since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for the possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (it won't be possible to cover every case without a guaranteed authorizer for the secure port).
3. Use basic auth headers for the liveness probe (I really don't like this: it adds a new dependency on basic auth, which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest).

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
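For reference, a minimal sketch of the kind of HTTPS liveness probe item 3 describes; the port and timing values are illustrative, not kubespray's exact manifest, and it only works because anonymous /healthz access is authorized per items 4 and 5:

```yaml
# Illustrative kube-apiserver static-pod excerpt.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443      # kube_apiserver_port (HTTPS); insecure port is 0
    scheme: HTTPS
  initialDelaySeconds: 15
  timeoutSeconds: 15
```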
Aivars Sterns authored
Amit Kumar Jaiswal authored
Günther Grill authored
Haiwei Liu authored
Signed-off-by: Haiwei Liu <carllhw@gmail.com>
Rob Hirschfeld authored
- Nov 05, 2017
Stanislav Makar authored
- Nov 04, 2017
Spencer Smith authored
Fix bad handler directory name in kubeadm role
Spencer Smith authored
Remove proxy settings from etcd and kubernetes/master roles
Spencer Smith authored
Flannel RBAC Fix
Spencer Smith authored
Docker Version Update
- Nov 03, 2017
Chad Swenson authored
Update default docker version to 17.03.1
Matthew Mosesohn authored
* Always set the host IP for the kubelet; use the Ansible default IP if the ip var is not set (see the sketch below)
* Update main.yml
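The fallback can be expressed with a Jinja2 default filter; the left-hand variable name here is an illustrative assumption, the `ip | default(...)` pattern is the point:

```yaml
# Prefer the inventory-supplied `ip`, otherwise fall back to the address
# Ansible discovered on the default-route interface.
kubelet_host_ip: "{{ ip | default(ansible_default_ipv4['address']) }}"
```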
Kevin Lefevre authored
* Update helm to v2.7.0
* Update main.yml
Günther Grill authored
* Change the deprecated Vagrant ansible flag 'sudo' to 'become'
* Emphasize that the name pip_python_modules is only considered in CoreOS
* Remove a useless unused variable
* Fix a warning when jinja2 template delimiters are used in a when statement: there is no need for delimiters like {{ }} or {% %} any more; they can simply be omitted, as described in https://github.com/ansible/ansible/issues/22397 (example below)
* Fix a broken link in the getting-started guide
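A small illustration of the when-statement fix; the task and group names are made up:

```yaml
# Ansible evaluates `when:` as a raw Jinja2 expression, so the delimiters
# are redundant and trigger a warning.
- name: Run only on master nodes            # hypothetical task
  debug:
    msg: "running on a master"
  when: "'kube-master' in group_names"      # correct: no {{ }}
# when: "{{ 'kube-master' in group_names }}"  # deprecated form that warns
```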
Günther Grill authored
* Change the deprecated Vagrant ansible flag 'sudo' to 'become'
* Work around an Ansible bug where accessing a variable via the vars dict doesn't resolve its value: when a variable is accessed by its name, "{{ foo }}", its value is retrieved, but when it is retrieved via the vars dict, "{{ vars['foo'] }}", the variable's expression is no longer resolved. So e.g. an expression foo="{{ 1 == 1 }}" is no longer resolved but just returned as the string "1 == 1". (Demonstrated below.)
* Make the file yamllint compliant
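A self-contained playbook demonstrating the behavior the commit describes on Ansible releases of that era; the file name is arbitrary:

```yaml
# demo.yml -- run with: ansible-playbook demo.yml
- hosts: localhost
  gather_facts: false
  vars:
    foo: "{{ 1 == 1 }}"
  tasks:
    - debug:
        msg: "{{ foo }}"           # resolves the expression: True
    - debug:
        msg: "{{ vars['foo'] }}"   # per the bug, yields the raw "1 == 1"
```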
Spencer Smith authored
Matthew Mosesohn authored
Chad Swenson authored
When proxy vars are set, `uri` module tasks will attempt to route traffic through the proxy. This causes the "Wait for" tasks in the `etcd` and `kubernetes/master` roles to hang, as localhost connections struggle with a proxy. As far as I know these roles only need local/cluster networking, so a proxy doesn't apply here anyway.
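The fix itself removes the proxy environment from those roles, but for comparison, a hedged sketch of a wait task that stays off the proxy via the `uri` module's use_proxy flag (URL and variable names are illustrative):

```yaml
# Illustrative "wait for apiserver" check that never routes via a proxy.
- name: Wait for apiserver up
  uri:
    url: "https://localhost:{{ kube_apiserver_port }}/healthz"
    validate_certs: no
    use_proxy: no       # ignore http_proxy/https_proxy on the target host
  register: result
  until: result.status == 200
  retries: 10
  delay: 6
```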