
    Setting up your first cluster with Kubespray

    This tutorial walks you through the detailed steps for setting up Kubernetes with Kubespray.

    The guide is inspired by the tutorial Kubernetes The Hard Way, with the difference that here we showcase how to spin up a Kubernetes cluster in a more managed fashion with Kubespray.

    Target Audience

    The target audience for this tutorial is someone looking for a hands-on guide to get started with Kubespray.

    Cluster Details

    Prerequisites

    • Google Cloud Platform: This tutorial leverages the Google Cloud Platform to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. Sign up for $300 in free credits.
    • Google Cloud Platform SDK: Follow the Google Cloud SDK documentation to install and configure the gcloud command line utility. Make sure to set a default compute region and compute zone (a short verification sketch follows this list).
    • The kubectl command line utility is used to interact with the Kubernetes API Server.
    • Linux or Mac environment with Python 3
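
    Before moving on, you can sanity-check these prerequisites from a terminal. The region and zone below are only examples matching this guide's sample output; substitute whatever is closest to you:

    gcloud config set compute/region us-west1
    gcloud config set compute/zone us-west1-c
    gcloud config list
    kubectl version --client
    python3 --version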

    Provisioning Compute Resources

    Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single compute zone.

    Networking

    The Kubernetes networking model assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired, network policies can limit how groups of containers are allowed to communicate with each other and with external network endpoints.

    Setting up network policies is out of scope for this tutorial.

    Virtual Private Cloud Network

    In this section a dedicated Virtual Private Cloud (VPC) network will be set up to host the Kubernetes cluster.

    Create the kubernetes-the-kubespray-way custom VPC network:

    gcloud compute networks create kubernetes-the-kubespray-way --subnet-mode custom

    A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.

    Create the kubernetes subnet in the kubernetes-the-kubespray-way VPC network:

    gcloud compute networks subnets create kubernetes \
      --network kubernetes-the-kubespray-way \
      --range 10.240.0.0/24

    The 10.240.0.0/24 IP address range can host up to 254 compute instances.
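
    To double-check the subnet after creating it, you can describe it (assuming us-west1 as the default region; adjust to yours):

    gcloud compute networks subnets describe kubernetes --region us-west1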

    Firewall Rules

    Create a firewall rule that allows internal communication across all protocols. Note that the ipip protocol has to be allowed for the Calico networking plugin (used later in this guide) to work.

    gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-internal \
      --allow tcp,udp,icmp,ipip \
      --network kubernetes-the-kubespray-way \
      --source-ranges 10.240.0.0/24

    Create a firewall rule that allows external SSH, ICMP, HTTP, HTTPS, and access to the Kubernetes API server (port 6443):

    gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-external \
      --allow tcp:80,tcp:6443,tcp:443,tcp:22,icmp \
      --network kubernetes-the-kubespray-way \
      --source-ranges 0.0.0.0/0

    It is not practical to restrict this firewall rule to the specific IP address from which you access the cluster, because the nodes also communicate with each other over their public IP addresses and would otherwise be blocked by the rule. Technically you could limit the rule to the (fixed) public IP addresses of the cluster nodes plus the remote IP addresses used to access the cluster.
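
    One way to confirm that both firewall rules exist is to list them, filtered on the VPC network:

    gcloud compute firewall-rules list --filter="network:kubernetes-the-kubespray-way"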

    Compute Instances

    The compute instances in this lab will be provisioned using Ubuntu Server 18.04. Each compute instance will be provisioned with a fixed private IP address and a public IP address (that can be made static - see guide). Using fixed public IP addresses has the advantage that the cluster node configuration does not need to be updated with new public IP addresses every time the machines are shut down and later restarted.

    Create three compute instances which will host the Kubernetes control plane:

    for i in 0 1 2; do
      gcloud compute instances create controller-${i} \
        --async \
        --boot-disk-size 200GB \
        --can-ip-forward \
        --image-family ubuntu-1804-lts \
        --image-project ubuntu-os-cloud \
        --machine-type e2-standard-2 \
        --private-network-ip 10.240.0.1${i} \
        --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
        --subnet kubernetes \
        --tags kubernetes-the-kubespray-way,controller
    done

    Do not forget to fix the IP addresses if you plan on re-using the cluster after temporarily shutting down the VMs - see guide
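
    As a sketch of what fixing an address can look like, an instance's current ephemeral external IP can be promoted to a static one; the address below is a placeholder for the instance's actual external IP, and the region must match the one your instances run in:

    gcloud compute addresses create controller-0-ip \
      --addresses XX.XX.XX.XXX \
      --region us-west1

    Repeat this for every controller and worker whose public IP should survive a shutdown.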

    Create three compute instances which will host the Kubernetes worker nodes:

    for i in 0 1 2; do
      gcloud compute instances create worker-${i} \
        --async \
        --boot-disk-size 200GB \
        --can-ip-forward \
        --image-family ubuntu-1804-lts \
        --image-project ubuntu-os-cloud \
        --machine-type e2-standard-2 \
        --private-network-ip 10.240.0.2${i} \
        --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
        --subnet kubernetes \
        --tags kubernetes-the-kubespray-way,worker
    done

    Do not forget to fix the IP addresses if you plan on re-using the cluster after temporarily shutting down the VMs - see guide

    List the compute instances in your default compute zone:

    gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"

    Output

    NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
    controller-0  us-west1-c  e2-standard-2               10.240.0.10  XX.XX.XX.XXX   RUNNING
    controller-1  us-west1-c  e2-standard-2               10.240.0.11  XX.XXX.XXX.XX  RUNNING
    controller-2  us-west1-c  e2-standard-2               10.240.0.12  XX.XXX.XX.XXX  RUNNING
    worker-0      us-west1-c  e2-standard-2               10.240.0.20  XX.XX.XXX.XXX  RUNNING
    worker-1      us-west1-c  e2-standard-2               10.240.0.21  XX.XX.XX.XXX   RUNNING
    worker-2      us-west1-c  e2-standard-2               10.240.0.22  XX.XXX.XX.XX   RUNNING
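
    If you later need the external IP address of a single instance (for example when building the Ansible inventory), one way to retrieve it is from the instance description; the format string assumes the first network interface:

    gcloud compute instances describe controller-0 \
      --format='get(networkInterfaces[0].accessConfigs[0].natIP)'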