Commit e1600188 authored by AtzeDeVries

Fixed conflicts, ipip:true as default and added ipip_mode

parents f5ef02d4 99202328
Showing 162 additions and 139 deletions
@@ -24,7 +24,7 @@ explain why.
 - **Version of Ansible** (`ansible --version`):

-**Kargo version (commit) (`git rev-parse --short HEAD`):**
+**Kubespray version (commit) (`git rev-parse --short HEAD`):**

 **Network plugin used**:
...
@@ -90,8 +90,9 @@ before_script:
   - pwd
   - ls
   - echo ${PWD}
+  - echo "${STARTUP_SCRIPT}"
   - >
     ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local
     ${LOG_LEVEL}
     -e cloud_image=${CLOUD_IMAGE}
     -e cloud_region=${CLOUD_REGION}
@@ -103,6 +104,7 @@ before_script:
     -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
     -e mode=${CLUSTER_MODE}
     -e test_id=${TEST_ID}
+    -e startup_script="'${STARTUP_SCRIPT}'"
   # Check out latest tag if testing upgrade
   # Uncomment when gitlab kargo repo has tags
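The doubled quoting in `-e startup_script="'${STARTUP_SCRIPT}'"` keeps the multi-word script as one extra-var value after the outer shell quotes are stripped; a quick sketch of the effect:

```
STARTUP_SCRIPT='systemctl disable locksmithd && systemctl stop locksmithd'
# Prints the argument exactly as ansible-playbook receives it: one word, inner quotes intact
printf '%s\n' startup_script="'${STARTUP_SCRIPT}'"
```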
@@ -116,7 +118,7 @@ before_script:
     ${SSH_ARGS}
     ${LOG_LEVEL}
     -e ansible_python_interpreter=${PYPATH}
     -e ansible_ssh_user=${SSH_USER}
     -e bootstrap_os=${BOOTSTRAP_OS}
     -e cert_management=${CERT_MGMT:-script}
     -e cloud_provider=gce
@@ -125,6 +127,7 @@ before_script:
     -e download_run_once=${DOWNLOAD_RUN_ONCE}
     -e etcd_deployment_type=${ETCD_DEPLOYMENT}
     -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
+    -e kubedns_min_replicas=1
     -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
     -e local_release_dir=${PWD}/downloads
     -e resolvconf_mode=${RESOLVCONF_MODE}
@@ -134,30 +137,31 @@ before_script:
   # Repeat deployment if testing upgrade
   - >
     if [ "${UPGRADE_TEST}" != "false" ]; then
     test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
    test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
     pip install ansible==2.3.0;
     git checkout "${CI_BUILD_REF}";
     ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
     ${SSH_ARGS}
     ${LOG_LEVEL}
     -e ansible_python_interpreter=${PYPATH}
     -e ansible_ssh_user=${SSH_USER}
     -e bootstrap_os=${BOOTSTRAP_OS}
     -e cloud_provider=gce
     -e deploy_netchecker=true
     -e download_localhost=${DOWNLOAD_LOCALHOST}
     -e download_run_once=${DOWNLOAD_RUN_ONCE}
     -e etcd_deployment_type=${ETCD_DEPLOYMENT}
     -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
+    -e kubedns_min_replicas=1
     -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
     -e local_release_dir=${PWD}/downloads
     -e resolvconf_mode=${RESOLVCONF_MODE}
     -e weave_cpu_requests=${WEAVE_CPU_LIMIT}
     -e weave_cpu_limit=${WEAVE_CPU_LIMIT}
     --limit "all:!fake_hosts"
     $PLAYBOOK;
     fi

   # Tests Cases
@@ -173,40 +177,41 @@ before_script:
   ## Idempotency checks 1/5 (repeat deployment)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ]; then
     ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
     -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
     --private-key=${HOME}/.ssh/id_rsa
     -e bootstrap_os=${BOOTSTRAP_OS}
     -e ansible_python_interpreter=${PYPATH}
     -e download_localhost=${DOWNLOAD_LOCALHOST}
     -e download_run_once=${DOWNLOAD_RUN_ONCE}
     -e deploy_netchecker=true
     -e resolvconf_mode=${RESOLVCONF_MODE}
     -e local_release_dir=${PWD}/downloads
     -e etcd_deployment_type=${ETCD_DEPLOYMENT}
+    -e kubedns_min_replicas=1
     -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
     --limit "all:!fake_hosts"
     cluster.yml;
     fi

   ## Idempotency checks 2/5 (Advanced DNS checks)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ]; then
     ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
     -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
     --limit "all:!fake_hosts"
     tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
     fi

   ## Idempotency checks 3/5 (reset deployment)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ] && [ "${RESET_CHECK}" = "true" ]; then
     ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
     -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
     --private-key=${HOME}/.ssh/id_rsa
     -e bootstrap_os=${BOOTSTRAP_OS}
     -e ansible_python_interpreter=${PYPATH}
     -e reset_confirmation=yes
     --limit "all:!fake_hosts"
     reset.yml;
     fi
@@ -214,28 +219,29 @@ before_script:
   ## Idempotency checks 4/5 (redeploy after reset)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ] && [ "${RESET_CHECK}" = "true" ]; then
     ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
     -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
     --private-key=${HOME}/.ssh/id_rsa
     -e bootstrap_os=${BOOTSTRAP_OS}
     -e ansible_python_interpreter=${PYPATH}
     -e download_localhost=${DOWNLOAD_LOCALHOST}
     -e download_run_once=${DOWNLOAD_RUN_ONCE}
     -e deploy_netchecker=true
     -e resolvconf_mode=${RESOLVCONF_MODE}
     -e local_release_dir=${PWD}/downloads
     -e etcd_deployment_type=${ETCD_DEPLOYMENT}
+    -e kubedns_min_replicas=1
     -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
     --limit "all:!fake_hosts"
     cluster.yml;
     fi

   ## Idempotency checks 5/5 (Advanced DNS checks)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ] && [ "${RESET_CHECK}" = "true" ]; then
     ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
     -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
     --limit "all:!fake_hosts"
     tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
     fi
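POSIX `test` has no `AND` keyword, which is why the conjunctions above chain two bracketed tests with `&&`; a minimal sanity check of the construct:

```
IDEMPOT_CHECK=true RESET_CHECK=true
if [ "${IDEMPOT_CHECK}" = "true" ] && [ "${RESET_CHECK}" = "true" ]; then
  echo "both checks enabled"
fi
```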
@@ -261,6 +267,8 @@ before_script:
   CLUSTER_MODE: separate
   BOOTSTRAP_OS: coreos
   RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
+  ##User-data to simply turn off coreos upgrades
+  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
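The same user-data can be attached to a GCE instance directly; a hedged sketch with the gcloud CLI (the instance name is hypothetical; `startup-script` is GCE's documented metadata key):

```
gcloud compute instances create ci-test-node \
  --metadata startup-script='#!/bin/bash
systemctl disable locksmithd && systemctl stop locksmithd'
```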
 .ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
 # stage: deploy-gce-part1

@@ -271,6 +279,7 @@ before_script:
   UPGRADE_TEST: "basic"
   CLUSTER_MODE: ha
   UPGRADE_TEST: "graceful"
+  STARTUP_SCRIPT: ""
 .rhel7_weave_variables: &rhel7_weave_variables
 # stage: deploy-gce-part1

@@ -278,6 +287,7 @@ before_script:
   CLOUD_IMAGE: rhel-7
   CLOUD_REGION: europe-west1-b
   CLUSTER_MODE: default
+  STARTUP_SCRIPT: ""
 .centos7_flannel_variables: &centos7_flannel_variables
 # stage: deploy-gce-part2

@@ -285,13 +295,15 @@ before_script:
   CLOUD_IMAGE: centos-7
   CLOUD_REGION: us-west1-a
   CLUSTER_MODE: default
+  STARTUP_SCRIPT: ""

 .debian8_calico_variables: &debian8_calico_variables
 # stage: deploy-gce-part2
   KUBE_NETWORK_PLUGIN: calico
   CLOUD_IMAGE: debian-8-kubespray
   CLOUD_REGION: us-central1-b
   CLUSTER_MODE: default
+  STARTUP_SCRIPT: ""
 .coreos_canal_variables: &coreos_canal_variables
 # stage: deploy-gce-part2

@@ -302,6 +314,7 @@ before_script:
   BOOTSTRAP_OS: coreos
   IDEMPOT_CHECK: "true"
   RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
+  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
 .rhel7_canal_sep_variables: &rhel7_canal_sep_variables
 # stage: deploy-gce-special

@@ -309,6 +322,7 @@ before_script:
   CLOUD_IMAGE: rhel-7
   CLOUD_REGION: us-east1-b
   CLUSTER_MODE: separate
+  STARTUP_SCRIPT: ""
 .ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
 # stage: deploy-gce-special

@@ -317,6 +331,7 @@ before_script:
   CLOUD_REGION: us-central1-b
   CLUSTER_MODE: separate
   IDEMPOT_CHECK: "false"
+  STARTUP_SCRIPT: ""
 .centos7_calico_ha_variables: &centos7_calico_ha_variables
 # stage: deploy-gce-special

@@ -327,6 +342,7 @@ before_script:
   CLOUD_REGION: europe-west1-b
   CLUSTER_MODE: ha-scale
   IDEMPOT_CHECK: "true"
+  STARTUP_SCRIPT: ""
 .coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
 # stage: deploy-gce-special

@@ -336,6 +352,7 @@ before_script:
   CLUSTER_MODE: ha-scale
   BOOTSTRAP_OS: coreos
   RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
+  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
 .ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
 # stage: deploy-gce-part1

@@ -345,6 +362,7 @@ before_script:
   CLUSTER_MODE: separate
   ETCD_DEPLOYMENT: rkt
   KUBELET_DEPLOYMENT: rkt
+  STARTUP_SCRIPT: ""
 .ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
 # stage: deploy-gce-part1

@@ -353,6 +371,7 @@ before_script:
   CLOUD_IMAGE: ubuntu-1604-xenial
   CLOUD_REGION: us-central1-b
   CLUSTER_MODE: separate
+  STARTUP_SCRIPT: ""
 # Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
 coreos-calico-sep:

@@ -588,7 +607,7 @@ ci-authorized:
   script:
     - /bin/sh scripts/premoderator.sh
   except: ['triggers', 'master']

 syntax-check:
   <<: *job
   stage: unit-tests
...
@@ -2,7 +2,7 @@
 ## Deploy a production ready kubernetes cluster

-If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kargo**.
+If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kubespray**.

 - Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
 - **High available** cluster
@@ -13,13 +13,13 @@ If you have questions, join us on the [kubernetes slack](https://slack.k8s.io),
 To deploy the cluster you can use:

-[**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
+[**kubespray-cli**](https://github.com/kubespray/kubespray-cli) <br>
-**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py) <br>
+**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py) <br>
 **vagrant** by simply running `vagrant up` (for test purposes) <br>

 * [Requirements](#requirements)
-* [Kargo vs ...](docs/comparisons.md)
+* [Kubespray vs ...](docs/comparisons.md)
 * [Getting started](docs/getting-started.md)
 * [Ansible inventory and tags](docs/ansible.md)
 * [Integration with existing ansible repo](docs/integration.md)
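Of the three deployment methods listed above, the plain-Ansible route reduces to a single playbook run; a minimal sketch (inventory path and SSH user are assumptions):

```
ansible-playbook -i inventory/inventory.cfg -b -u ubuntu cluster.yml
```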
@@ -98,22 +98,22 @@ option to leverage built-in cloud provider networking instead.
 See also [Network checker](docs/netcheck.md).

 ## Community docs and resources
-- [kubernetes.io/docs/getting-started-guides/kargo/](https://kubernetes.io/docs/getting-started-guides/kargo/)
+- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
-- [kargo, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
+- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
 - [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
-- [Deploy a Kubernets Cluster with Kargo (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
+- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)

-## Tools and projects on top of Kargo
+## Tools and projects on top of Kubespray
 - [Digital Rebar](https://github.com/digitalrebar/digitalrebar)
-- [Kargo-cli](https://github.com/kubespray/kargo-cli)
+- [Kubespray-cli](https://github.com/kubespray/kubespray-cli)
 - [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
-- [Terraform Contrib](https://github.com/kubernetes-incubator/kargo/tree/master/contrib/terraform)
+- [Terraform Contrib](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform)

 ## CI Tests
 ![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
-[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/pipelines) </br>
+[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines) </br>
 CI/end-to-end tests sponsored by Google (GCE), DigitalOcean, [teuto.net](https://teuto.net/) (openstack).
 See the [test matrix](docs/test_cases.md) for details.
 # Release Process

-The Kargo Project is released on an as-needed basis. The process is as follows:
+The Kubespray Project is released on an as-needed basis. The process is as follows:

 1. An issue proposing a new release with a changelog since the last release is opened
 2. At least one of the [OWNERS](OWNERS) must LGTM this release
 3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
 4. The release issue is closed
-5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kargo $VERSION is released`
+5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`

 ## Major/minor releases, merge freezes and milestones

-* Kargo does not maintain stable branches for releases. Releases are tags, not
+* Kubespray does not maintain stable branches for releases. Releases are tags, not
   branches, and there are no backports. Therefore, there is no need for merge
   freezes either.
@@ -20,21 +20,21 @@ The Kargo Project is released on an as-needed basis. The process is as follows:
   support lifetime, which ends once the milestone is closed. Then only a next major
   or minor release can be done.

-* Kargo major and minor releases are bound to the given ``kube_version`` major/minor
+* Kubespray major and minor releases are bound to the given ``kube_version`` major/minor
   version numbers and other components' arbitrary versions, like etcd or network plugins.
   Older or newer versions are not supported and not tested for the given release.

-* There is no unstable releases and no APIs, thus Kargo doesn't follow
+* There are no unstable releases and no APIs, thus Kubespray doesn't follow
   [semver](http://semver.org/). Every version describes only a stable release.
   Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
   playbooks, shall be described in the release notes. Other breaking changes, if any in
   the contributed addons or bound versions of Kubernetes and other components, are
-  considered out of Kargo scope and are up to the components' teams to deal with and
+  considered out of Kubespray scope and are up to the components' teams to deal with and
   document.

 * Minor releases can change components' versions, but not the major ``kube_version``.
-  Greater ``kube_version`` requires a new major or minor release. For example, if Kargo v2.0.0
+  Greater ``kube_version`` requires a new major or minor release. For example, if Kubespray v2.0.0
   is bound to ``kube_version: 1.4.x``, ``calico_version: 0.22.0``, ``etcd_version: v3.0.6``,
-  then Kargo v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
+  then Kubespray v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
   and *any* changes to other components, like etcd v4, or calico 1.2.3.
-  And Kargo v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.
+  And Kubespray v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.
@@ -13,7 +13,7 @@ SUPPORTED_OS = {
   "coreos-stable" => {box: "coreos-stable", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
   "coreos-alpha" => {box: "coreos-alpha", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
   "coreos-beta" => {box: "coreos-beta", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
-  "ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "ubuntu"},
+  "ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "vagrant"},
 }

 # Defaults for config options defined in CONFIG

@@ -100,6 +100,10 @@ Vagrant.configure("2") do |config|
     end
   end

+  $shared_folders.each do |src, dst|
+    config.vm.synced_folder src, dst
+  end

   config.vm.provider :virtualbox do |vb|
     vb.gui = $vm_gui
     vb.memory = $vm_memory
...
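The new `$shared_folders` loop mounts arbitrary host paths into each VM; a sketch of driving it, assuming the Vagrantfile reads user overrides from `vagrant/config.rb`:

```
# Write an override consumed by the $shared_folders loop above (file path assumed)
cat > vagrant/config.rb <<'EOF'
$shared_folders = { 'data' => '/data' }
EOF
vagrant up
```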
@@ -2,7 +2,7 @@
 - hosts: localhost
   gather_facts: False
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}

 - hosts: k8s-cluster:etcd:calico-rr

@@ -13,7 +13,7 @@
   # fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
   ansible_ssh_pipelining: false
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: bootstrap-os, tags: bootstrap-os}

 - hosts: k8s-cluster:etcd:calico-rr

@@ -25,7 +25,7 @@
 - hosts: k8s-cluster:etcd:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: kernel-upgrade, tags: kernel-upgrade, when: kernel_upgrade is defined and kernel_upgrade }
     - { role: kubernetes/preinstall, tags: preinstall }
     - { role: docker, tags: docker }

@@ -36,38 +36,38 @@
 - hosts: etcd:k8s-cluster:vault
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults, when: "cert_management == 'vault'" }
+    - { role: kubespray-defaults, when: "cert_management == 'vault'" }
     - { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }

 - hosts: etcd
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: true }

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: false }

 - hosts: etcd:k8s-cluster:vault
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: vault, tags: vault, when: "cert_management == 'vault'"}

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: kubernetes/node, tags: node }
     - { role: network_plugin, tags: network }

 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: kubernetes/master, tags: master }
     - { role: kubernetes-apps/network_plugin, tags: network }
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }

@@ -75,18 +75,18 @@
 - hosts: calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: network_plugin/calico/rr, tags: network }

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
     - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }

 - hosts: kube-master[0]
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
-    - { role: kargo-defaults}
+    - { role: kubespray-defaults}
     - { role: kubernetes-apps, tags: apps }
@@ -33,10 +33,10 @@ class SearchEC2Tags(object):
     hosts = {}
     hosts['_meta'] = { 'hostvars': {} }

-    ##Search ec2 three times to find nodes of each group type. Relies on kargo-role key/value.
+    ##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
     for group in ["kube-master", "kube-node", "etcd"]:
       hosts[group] = []
-      tag_key = "kargo-role"
+      tag_key = "kubespray-role"
       tag_value = ["*"+group+"*"]
       region = os.environ['REGION']
...
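The tag lookup this script performs can be reproduced with the AWS CLI for debugging; an illustrative equivalent of one iteration (the script uses its own EC2 client internally, so this is an analogy rather than its code):

```
# List instances tagged kubespray-role=*kube-master* in the configured region
aws ec2 describe-instances --region "$REGION" \
  --filters "Name=tag:kubespray-role,Values=*kube-master*" \
  --query "Reservations[].Instances[].PrivateDnsName"
```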
@@ -5,7 +5,7 @@ Provision the base infrastructure for a Kubernetes cluster by using [Azure Resou
 ## Status

 This will provision the base infrastructure (vnet, vms, nics, ips, ...) needed for Kubernetes in Azure into the specified
-Resource Group. It will not install Kubernetes itself, this has to be done in a later step by yourself (using kargo of course).
+Resource Group. It will not install Kubernetes itself; this has to be done in a later step by yourself (using kubespray of course).

 ## Requirements

@@ -47,7 +47,7 @@ $ ./clear-rg.sh <resource_group_name>
 **WARNING** this really deletes everything from your resource group, including everything that was later created by you!

-## Generating an inventory for kargo
+## Generating an inventory for kubespray

 After you have applied the templates, you can generate an inventory with this call:

@@ -55,10 +55,10 @@ After you have applied the templates, you can generate an inventory with this call:
 $ ./generate-inventory.sh <resource_group_name>
 ```

-It will create the file ./inventory which can then be used with kargo, e.g.:
+It will create the file ./inventory which can then be used with kubespray, e.g.:

 ```shell
-$ cd kargo-root-dir
+$ cd kubespray-root-dir
 $ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/group_vars/all.yml" cluster.yml
 ```
...
@@ -65,7 +65,7 @@ HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
 # Configurable as shell vars end

-class KargoInventory(object):
+class KubesprayInventory(object):

     def __init__(self, changed_hosts=None, config_file=None):
         self.config = configparser.ConfigParser(allow_no_value=True,

@@ -337,7 +337,7 @@ MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
 def main(argv=None):
     if not argv:
         argv = sys.argv[1:]
-    KargoInventory(argv, CONFIG_FILE)
+    KubesprayInventory(argv, CONFIG_FILE)

 if __name__ == "__main__":
     sys.exit(main())

 [metadata]
-name = kargo-inventory-builder
+name = kubespray-inventory-builder
 version = 0.1

@@ -31,7 +31,7 @@ class TestInventory(unittest.TestCase):
         sys_mock.exit = mock.Mock()
         super(TestInventory, self).setUp()
         self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
-        self.inv = inventory.KargoInventory()
+        self.inv = inventory.KubesprayInventory()

     def test_get_ip_from_opts(self):
         optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
...
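A usage sketch of the renamed class via its CLI entry point (host IPs are borrowed from the test fixture above; `CONFIG_FILE` pointing at the target inventory file is an assumption):

```
CONFIG_FILE=inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py 10.90.3.2 10.90.3.3 10.90.3.4
```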
-# Kargo on KVM Virtual Machines hypervisor preparation
+# Kubespray on KVM Virtual Machines hypervisor preparation

-A simple playbook to ensure your system has the right settings to enable Kargo
+A simple playbook to ensure your system has the right settings to enable Kubespray
 deployment on VMs.

-This playbook does not create Virtual Machines, nor does it run Kargo itself.
+This playbook does not create Virtual Machines, nor does it run Kubespray itself.

 ### User creation

-If you want to create a user for running Kargo deployment, you should specify
+If you want to create a user for running Kubespray deployment, you should specify
 both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.

-#k8s_deployment_user: kargo
+#k8s_deployment_user: kubespray
 #k8s_deployment_user_pkey_path: /tmp/ssh_rsa
@@ -12,9 +12,9 @@
     line: 'br_netfilter'
   when: br_netfilter is defined and ansible_os_family == 'Debian'

-- name: Add br_netfilter into /etc/modules-load.d/kargo.conf
+- name: Add br_netfilter into /etc/modules-load.d/kubespray.conf
   copy:
-    dest: /etc/modules-load.d/kargo.conf
+    dest: /etc/modules-load.d/kubespray.conf
     content: |-
       ### This file is managed by Ansible
       br-netfilter
...
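After the rename, the module list lands in `/etc/modules-load.d/kubespray.conf`; a quick manual check on a target host (not part of the playbook):

```
cat /etc/modules-load.d/kubespray.conf
lsmod | grep br_netfilter || sudo modprobe br_netfilter
```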
-# Deploying a Kargo Kubernetes Cluster with GlusterFS
+# Deploying a Kubespray Kubernetes Cluster with GlusterFS

 You can either deploy using Ansible on its own by supplying your own inventory file or by using Terraform to create the VMs and then providing a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. So, if you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built ansible inventory, you can skip the **Using Terraform and Ansible** section.
@@ -6,7 +6,7 @@ You can either deploy using Ansible on its own by supplying your own inventory f
 In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.

-Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings on `inventory/group_vars/all.yml` make sense with your deployment. Then execute change to the kargo root folder, and execute (supposing that the machines are all using ubuntu):
+Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings in `inventory/group_vars/all.yml` make sense for your deployment. Then change to the kubespray root folder and execute (supposing that the machines are all using ubuntu):

 ```
 ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml
 ```

@@ -28,7 +28,7 @@ k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_us
 ## Using Terraform and Ansible

-First step is to fill in a `my-kargo-gluster-cluster.tfvars` file with the specification desired for your cluster. An example with all required variables would look like:
+The first step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the specification desired for your cluster. An example with all required variables would look like:

 ```
 cluster_name = "cluster1"

@@ -65,15 +65,15 @@ $ echo Setting up Terraform creds && \
   export TF_VAR_auth_url=${OS_AUTH_URL}
 ```

-Then, standing on the kargo directory (root base of the Git checkout), issue the following terraform command to create the VMs for the cluster:
+Then, from the kubespray directory (root base of the Git checkout), issue the following terraform command to create the VMs for the cluster:

 ```
-terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
+terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
 ```
 This will create both your Kubernetes and Gluster VMs. Make sure that the ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any ansible variable that you want to set up (like, for instance, the type of machine for bootstrapping).

-Then, provision your Kubernetes (Kargo) cluster with the following ansible call:
+Then, provision your Kubernetes (kubespray) cluster with the following ansible call:

 ```
 ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
 ```

@@ -88,5 +88,5 @@ ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
 If you need to destroy the cluster, you can run:

 ```
-terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
+terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
 ```
@@ -33,7 +33,7 @@ export AWS_DEFAULT_REGION="zzz"
 - Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`

-- Once the infrastructure is created, you can run the kargo playbooks and supply inventory/hosts with the `-i` flag.
+- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.

 **Troubleshooting**

@@ -54,4 +54,4 @@ It could happen that Terraform doesnt create an Ansible Inventory file automatic
 Pictured is an AWS Infrastructure created with this Terraform project distributed over two Availability Zones.

-![AWS Infrastructure with Terraform ](docs/aws_kargo.png)
+![AWS Infrastructure with Terraform ](docs/aws_kubespray.png)

@@ -157,7 +157,7 @@ resource "aws_instance" "k8s-worker" {
 /*
-* Create Kargo Inventory File
+* Create Kubespray Inventory File
 *
 */

 data "template_file" "inventory" {
...
@@ -75,25 +75,25 @@ According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variab
 those cannot be overridden from the group vars. In order to override, one should use
 the `-e` runtime flags (most simple way) or other layers described in the docs.

-Kargo uses only a few layers to override things (or expect them to
+Kubespray uses only a few layers to override things (or expect them to
 be overridden for roles):

 Layer | Comment
 ------|--------
-**role defaults** | provides best UX to override things for Kargo deployments
+**role defaults** | provides best UX to override things for Kubespray deployments
 inventory vars | Unused
 **inventory group_vars** | Expects users to use ``all.yml``, ``k8s-cluster.yml`` etc. to override things
 inventory host_vars | Unused
-playbook group_vars | Unuses
+playbook group_vars | Unused
 playbook host_vars | Unused
-**host facts** | Kargo overrides for internal roles' logic, like state flags
+**host facts** | Kubespray overrides for internal roles' logic, like state flags
 play vars | Unused
 play vars_prompt | Unused
 play vars_files | Unused
 registered vars | Unused
-set_facts | Kargo overrides those, for some places
+set_facts | Kubespray overrides those, for some places
 **role and include vars** | Provides bad UX to override things! Use extra vars to enforce
-block vars (only for tasks in block) | Kargo overrides for internal roles' logic
+block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
 task vars (only for the task) | Unused for roles, but only for helper scripts
 **extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
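Per the table, extra vars are the dependable override layer; a minimal sketch (`kube_network_plugin` is a real role default, the file name is arbitrary):

```
cat > override.yml <<'EOF'
kube_network_plugin: calico
EOF
ansible-playbook -i inventory/inventory.cfg -b -e @override.yml cluster.yml
```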
@@ -124,12 +124,12 @@ The following tags are defined in playbooks:
 | k8s-pre-upgrade | Upgrading K8s cluster
 | k8s-secrets | Configuring K8s certs/keys
 | kpm | Installing K8s apps definitions with KPM
-| kube-apiserver | Configuring self-hosted kube-apiserver
+| kube-apiserver | Configuring static pod kube-apiserver
-| kube-controller-manager | Configuring self-hosted kube-controller-manager
+| kube-controller-manager | Configuring static pod kube-controller-manager
 | kubectl | Installing kubectl and bash completion
 | kubelet | Configuring kubelet service
-| kube-proxy | Configuring self-hosted kube-proxy
+| kube-proxy | Configuring static pod kube-proxy
-| kube-scheduler | Configuring self-hosted kube-scheduler
+| kube-scheduler | Configuring static pod kube-scheduler
 | localhost | Special steps for the localhost (ansible runner)
 | master | Configuring K8s master node role
 | netchecker | Installing netchecker K8s app
...
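Tags from this table can be combined to rerun only part of a deployment; an illustrative run limited to the kubelet and kube-proxy tasks (inventory path assumed):

```
ansible-playbook -i inventory/inventory.cfg -b cluster.yml --tags kubelet,kube-proxy
```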
@@ -3,7 +3,7 @@ AWS

 To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.

-Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kargo/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
+Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.

 The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.

@@ -45,12 +45,12 @@ This will produce an inventory that is passed into Ansible that looks like the following:
 Guide:

 - Create instances in AWS as needed.
-- Either during or after creation, add tags to the instances with a key of `kargo-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
+- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
-- Copy the `kargo-aws-inventory.py` script from `kargo/contrib/aws_inventory` to the `kargo/inventory` directory.
+- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
 - Set the following AWS credentials and info as environment variables in your terminal:

 ```
 export AWS_ACCESS_KEY_ID="xxxxx"
 export AWS_SECRET_ACCESS_KEY="yyyyy"
 export REGION="us-east-2"
 ```

-- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kargo-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
+- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
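Putting the guide together, the final invocation would look roughly like this (the SSH user is an assumption; the inventory script path comes from the step above):

```
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py -u centos -b cluster.yml
```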