
Setting up your first cluster with Kubespray

This tutorial walks you through the detailed steps for setting up Kubernetes with Kubespray.

The guide is inspired by the tutorial Kubernetes The Hard Way, with the difference that here we showcase how to spin up a Kubernetes cluster in a more managed fashion with Kubespray.

Target Audience

The target audience for this tutorial is someone looking for a hands-on guide to get started with Kubespray.

Cluster Details

Prerequisites

  • Google Cloud Platform: This tutorial leverages the Google Cloud Platform to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. Sign up for $300 in free credits.
  • Google Cloud Platform SDK: Follow the Google Cloud SDK documentation to install and configure the gcloud command line utility. Make sure to set a default compute region and compute zone, as shown in the example after this list.
  • The kubectl command line utility is used to interact with the Kubernetes API Server.
  • A Linux or macOS environment with Python 3
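
For example, the defaults can be set with gcloud config. The us-west1 region and us-west1-c zone below simply match the instance listing shown later in this tutorial; substitute your preferred region and zone:

gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-c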

Provisioning Compute Resources

Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single compute zone.

Networking

The Kubernetes networking model assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired, network policies can limit how groups of containers are allowed to communicate with each other and with external network endpoints.

Setting up network policies is out of scope for this tutorial.

Virtual Private Cloud Network

In this section a dedicated Virtual Private Cloud (VPC) network will be set up to host the Kubernetes cluster.

Create the kubernetes-the-kubespray-way custom VPC network:

gcloud compute networks create kubernetes-the-kubespray-way --subnet-mode custom
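
If you want to confirm the network was created, it can be listed with a name filter:

gcloud compute networks list --filter="name=kubernetes-the-kubespray-way"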

A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.

Create the kubernetes subnet in the kubernetes-the-kubespray-way VPC network:

gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-kubespray-way \
  --range 10.240.0.0/24

The 10.240.0.0/24 IP address range spans 2^8 = 256 addresses and can host up to 254 compute instances (the network and broadcast addresses are reserved).

Firewall Rules

Create a firewall rule that allows internal communication across all protocols. It is important to note that the ipip protocol has to be allowed in order for the Calico networking plugin (see later) to work, as Calico encapsulates pod traffic in IP-in-IP tunnels.

gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-internal \
  --allow tcp,udp,icmp,ipip \
  --network kubernetes-the-kubespray-way \
  --source-ranges 10.240.0.0/24

Create a firewall rule that allows external SSH (22), HTTP (80), HTTPS (443), Kubernetes API server (6443), and ICMP traffic:

gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-external \
  --allow tcp:80,tcp:6443,tcp:443,tcp:22,icmp \
  --network kubernetes-the-kubespray-way \
  --source-ranges 0.0.0.0/0
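
Optionally, list the firewall rules in the kubernetes-the-kubespray-way VPC network to confirm that both rules exist:

gcloud compute firewall-rules list --filter="network:kubernetes-the-kubespray-way"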

It is not feasible to restrict this firewall rule to only the IP address from which you access the cluster, as the nodes also communicate with each other over their public IP addresses and would otherwise be blocked by this rule. Technically you could limit the rule to the (fixed) IP addresses of the cluster nodes plus the remote IP addresses from which you access the cluster.
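
As a sketch of that stricter variant, assuming all node public IP addresses are static and known (the CIDR ranges below are documentation placeholders, not real addresses), the source ranges of the rule could be tightened with:

gcloud compute firewall-rules update kubernetes-the-kubespray-way-allow-external \
  --source-ranges "203.0.113.0/29,198.51.100.10/32"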

Compute Instances

The compute instances in this lab will be provisioned using Ubuntu Server 18.04. Each compute instance will be provisioned with a fixed private IP address and a public IP address (which can be made static - see guide). Using fixed public IP addresses has the advantage that the cluster node configuration does not need to be updated with new public IP addresses every time the machines are shut down and later restarted.
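
For example, an instance's ephemeral external IP address can be promoted to a static one with gcloud compute addresses create; the address name, IP, and region below are placeholders for your own values:

gcloud compute addresses create controller-0-external-ip \
  --addresses 203.0.113.10 \
  --region us-west1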

Create three compute instances which will host the Kubernetes control plane:

for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-kubespray-way,controller
done

Do not forget to make the public IP addresses static if you plan on reusing the cluster after temporarily shutting down the VMs - see guide.

Create three compute instances which will host the Kubernetes worker nodes:

for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-kubespray-way,worker
done

Do not forget to make the public IP addresses static if you plan on reusing the cluster after temporarily shutting down the VMs - see guide.

List the compute instances in your default compute zone:

gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"

Output

NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
controller-0  us-west1-c  e2-standard-2               10.240.0.10  XX.XX.XX.XXX   RUNNING
controller-1  us-west1-c  e2-standard-2               10.240.0.11  XX.XXX.XXX.XX  RUNNING
controller-2  us-west1-c  e2-standard-2               10.240.0.12  XX.XXX.XX.XXX  RUNNING
worker-0      us-west1-c  e2-standard-2               10.240.0.20  XX.XX.XXX.XXX  RUNNING
worker-1      us-west1-c  e2-standard-2               10.240.0.21  XX.XX.XX.XXX   RUNNING
worker-2      us-west1-c  e2-standard-2               10.240.0.22  XX.XXX.XX.XX   RUNNING

Configuring SSH Access

Kubespray relies on SSH to configure the controller and worker instances.

Test SSH access to the controller-0 compute instance:

IP_CONTROLLER_0=$(gcloud compute instances list  --filter="tags.items=kubernetes-the-kubespray-way AND name:controller-0" --format="value(EXTERNAL_IP)")
USERNAME=$(whoami)
ssh $USERNAME@$IP_CONTROLLER_0

If this is your first time connecting to a compute instance, SSH keys will be generated for you. In this case you will need to enter a passphrase at the prompt to continue.

If you get a 'Remote host identification has changed!' warning, you probably connected to that IP address in the past with another host key. You can remove the old host key by running ssh-keygen -R $IP_CONTROLLER_0.

Repeat this procedure for all the controller and worker nodes to ensure that SSH access works on every node.
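
One way to check all six nodes in a single pass is a small loop like the sketch below; it assumes the instance names used above and reuses the USERNAME variable, printing each remote hostname:

for instance in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
  IP=$(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way AND name:${instance}" --format="value(EXTERNAL_IP)")
  ssh ${USERNAME}@${IP} hostname
done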

Set-up Kubespray

The following set of instructions is based on the Kubespray Quick Start, slightly altered for our set-up.

As Ansible is a Python application, we will create a fresh virtual environment in which to install the dependencies for the Kubespray playbook:
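
A minimal sketch of that step, assuming Kubespray is cloned into the current directory (you may want to check out a release tag rather than the default branch for reproducibility):

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt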