# vSphere CSI Driver

The vSphere CSI driver allows you to provision volumes on a vSphere deployment. The historic Kubernetes in-tree vSphere cloud provider is deprecated and will be removed in a future version.

## Prerequisites

The vSphere user for the CSI driver requires a set of privileges to perform Cloud Native Storage operations. Follow the official guide to configure them.

## Kubespray configuration

To enable the vSphere CSI driver, uncomment the `vsphere_csi_enabled` option in `group_vars/all/vsphere.yml` and set it to `true`.

To set the number of replicas for the vSphere CSI controller, change the `vsphere_csi_controller_replicas` option in `group_vars/all/vsphere.yml`.
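
For example, in `group_vars/all/vsphere.yml` (the replica count of `2` here is only an illustrative value):

```yaml
vsphere_csi_enabled: true
vsphere_csi_controller_replicas: 2  # illustrative; the default is 1
```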

You need to provide the vSphere credentials used to deploy the machines that will host Kubernetes. The variables below control the deployment; a sketch of a complete configuration follows the table.

| Variable | Required | Type | Choices | Default | Comment |
|---|---|---|---|---|---|
| `external_vsphere_vcenter_ip` | TRUE | string | | | IP/URL of the vCenter |
| `external_vsphere_vcenter_port` | TRUE | string | | "443" | Port of the vCenter API |
| `external_vsphere_insecure` | TRUE | string | "true", "false" | "true" | Set to "true" if the host above uses a self-signed certificate |
| `external_vsphere_user` | TRUE | string | | | Username for vCenter with the required privileges (can also be specified with the `VSPHERE_USER` environment variable) |
| `external_vsphere_password` | TRUE | string | | | Password for vCenter (can also be specified with the `VSPHERE_PASSWORD` environment variable) |
| `external_vsphere_datacenter` | TRUE | string | | | Datacenter name to use |
| `external_vsphere_kubernetes_cluster_id` | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use |
| `external_vsphere_version` | TRUE | string | | "7.0u1" | vSphere version where all the VMs are located |
| `external_vsphere_cloud_controller_image_tag` | TRUE | string | | "v1.31.0" | CPI manager image tag to use |
| `vsphere_syncer_image_tag` | TRUE | string | | "v3.3.1" | Syncer image tag to use |
| `vsphere_csi_attacher_image_tag` | TRUE | string | | "v4.3.0" | CSI attacher image tag to use |
| `vsphere_csi_controller` | TRUE | string | | "v3.3.1" | CSI controller image tag to use |
| `vsphere_csi_controller_replicas` | TRUE | integer | | 1 | Number of pods Kubernetes should deploy for the CSI controller |
| `vsphere_csi_liveness_probe_image_tag` | TRUE | string | | "v2.10.0" | CSI liveness probe image tag to use |
| `vsphere_csi_provisioner_image_tag` | TRUE | string | | "v2.1.0" | CSI provisioner image tag to use |
| `vsphere_csi_node_driver_registrar_image_tag` | TRUE | string | | "v3.5.0" | CSI node driver registrar image tag to use |
| `vsphere_csi_driver_image_tag` | TRUE | string | | "v3.3.1" | CSI driver image tag to use |
| `vsphere_csi_resizer_tag` | TRUE | string | | "v1.8.0" | CSI resizer image tag to use |
| `vsphere_csi_aggressive_node_drain` | FALSE | boolean | | false | Enable aggressive node drain strategy |
| `vsphere_csi_aggressive_node_unreachable_timeout` | FALSE | int | | 300 | Timeout until the node is drained when it is in an unreachable state |
| `vsphere_csi_aggressive_node_not_ready_timeout` | FALSE | int | | 300 | Timeout until the node is drained when it is in a not-ready state |
| `vsphere_csi_namespace` | TRUE | string | | "kube-system" | vSphere CSI namespace to use; `kube-system` is kept for backward compatibility but should be changed to `vmware-system-csi` in the long run |
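
As a minimal sketch, the credentials section of `group_vars/all/vsphere.yml` combining the required variables above might look like this (every value is a placeholder for your own environment):

```yaml
external_vsphere_vcenter_ip: "vcenter.example.com"  # placeholder
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_user: "administrator@vsphere.local"  # or set the VSPHERE_USER environment variable
external_vsphere_password: "ChangeMe"  # or set the VSPHERE_PASSWORD environment variable
external_vsphere_datacenter: "DC-01"  # placeholder
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
external_vsphere_version: "7.0u1"
```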

## Usage example

To test dynamic provisioning using the vSphere CSI driver, make sure to create a storage policy and a matching storage class first.
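
As a sketch, a storage class named `mongodb-sc` (to match the claim below) could bind to a vCenter storage policy through the driver's `storagepolicyname` parameter; the policy name used here is a placeholder for one from your own environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"  # placeholder: substitute your own policy
```

With the storage policy and storage class in place, apply the following manifest: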

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-vsphere
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mongodb-sc

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
      - mountPath: /usr/share/nginx/html
        name: csi-data-vsphere
  volumes:
  - name: csi-data-vsphere
    persistentVolumeClaim:
      claimName: csi-pvc-vsphere
      readOnly: false
```

Apply this configuration to your cluster: `kubectl apply -f nginx.yml`

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc-vsphere   Bound    pvc-dc7b1d21-ee41-45e1-98d9-e877cc1533ac   1Gi        RWO            mongodb-sc     10s
```

And the volume is mounted into the nginx Pod (wait until the Pod is Running):

```ShellSession
kubectl exec -it nginx -- df -h | grep /usr/share/nginx/html
/dev/sdb         976M  2.6M  907M   1% /usr/share/nginx/html
```
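
To inspect the backing volume further, the usual kubectl commands apply, for example:

```ShellSession
kubectl get pv
kubectl describe pvc csi-pvc-vsphere
```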

## More info

For further information about the vSphere CSI Driver, you can refer to the official vSphere Cloud Provider documentation.