rbd_provisioner.md

Blame
# RBD Volume Provisioner for Kubernetes 1.5+

rbd-provisioner is an out-of-tree dynamic provisioner for Kubernetes 1.5+. You can use it to quickly and easily deploy Ceph RBD storage that works almost anywhere.

It works just like the in-tree dynamic provisioner. For more information on how dynamic provisioning works, see the docs or this blog post.

## Development

Compile the provisioner:

```bash
make
```

Make the container image and push it to the registry:

```bash
make push
```

## Test instruction

- Start Kubernetes local cluster

  See Kubernetes.

- Create a Ceph admin secret

  ```bash
  ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
  kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
  ```

- Create a Ceph pool and a user secret

  ```bash
  ceph osd pool create kube 8 8
  ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
  ceph auth get-key client.kube > /tmp/secret
  kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
  ```
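The key-extraction pipeline used for the admin secret above can be sanity-checked without a Ceph cluster by feeding it sample output (the key below is fabricated; real `ceph auth get` output has the same shape):

```shell
# Sample output resembling `ceph auth get client.admin` (key is made up)
sample='[client.admin]
	key = AQBTestKeyOnly1234567890abcdefghijklmn==
	caps mon = "allow *"'

# Same pipeline as in the instructions: select the key line, take the
# third whitespace-separated field, and write it without a trailing
# newline so the secret file contains only the key bytes
printf '%s\n' "$sample" | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret

# The file's byte count should equal the key length (no trailing newline)
wc -c < /tmp/secret
```

The `echo -n` matters: a stray trailing newline in the secret file would become part of the key Kubernetes hands to the provisioner.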
- Start RBD provisioner

  The following example uses rbd-provisioner-1 as the identity for the instance and assumes the kubeconfig is at /root/.kube. The identity should remain the same across restarts of the provisioner. If there are multiple provisioners, each should have a different identity.

  ```bash
  docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1
  ```

  Alternatively, deploy it in Kubernetes; see deployment.

- Create an RBD storage class

  Replace the Ceph monitor's IP in examples/class.yaml with your own, then create the storage class:

  ```bash
  kubectl create -f examples/class.yaml
  ```

- Create a claim

  ```bash
  kubectl create -f examples/claim.yaml
  ```

- Create a Pod using the claim

  ```bash
  kubectl create -f examples/test-pod.yaml
  ```
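For orientation, the storage class in examples/class.yaml typically looks like the sketch below. The provisioner name `ceph.com/rbd`, the monitor address, and the parameter values are illustrative assumptions — consult the actual examples/class.yaml in the repository for the authoritative version:

```shell
# Write a sketch of the storage class manifest; 192.168.0.1:6789 is a
# placeholder for your Ceph monitor, and the secret names match the
# ceph-admin-secret and ceph-secret created in the steps above
cat > /tmp/class.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.0.1:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
EOF

# Then, against a running cluster:
# kubectl create -f /tmp/class.yaml
```

The `provisioner` field is what routes provisioning requests to this out-of-tree provisioner instead of the in-tree `kubernetes.io/rbd` plugin.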

## Acknowledgements

- This provisioner is extracted from Kubernetes core with some modifications for this project.