Unverified commit 01eaa8c4 authored by David Ko, committed by GitHub

Fix #172 support e2e regression (#173)


* Fix #172 support e2e regression

Signed-off-by: David Ko <dko@suse.com>
parent a18426d3
Showing 326 additions and 294 deletions
@@ -11,6 +11,7 @@ steps:
image: rancher/dapper:v0.4.1
commands:
- dapper ci
- dapper e2e-test
volumes:
- name: docker
path: /var/run/docker.sock
......
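The `.drone.yml` change above appends `dapper e2e-test` after `dapper ci`, so every pipeline run now includes the e2e regression suite. A minimal dry-run sketch of the sequence (this only prints the commands; it assumes you would swap `echo` for direct invocation with `dapper` on your PATH):

```shell
# Dry-run sketch of the CI steps the pipeline now executes in order.
# 'echo' stands in for running each step, so no dapper install is needed here.
for step in "dapper ci" "dapper e2e-test"; do
  echo "would run: $step"
done
```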
@@ -5,3 +5,4 @@
*.swp
.idea
.vscode/
Dockerfile.dapper[0-9]*
\ No newline at end of file
FROM golang:1.12.1-alpine3.9
FROM golang:1.15-alpine3.12
ARG DAPPER_HOST_ARCH
ENV ARCH $DAPPER_HOST_ARCH
@@ -14,13 +14,17 @@ RUN mkdir -p /go/src/golang.org/x && \
go install golang.org/x/tools/cmd/goimports
RUN rm -rf /go/src /go/pkg
RUN if [ "${ARCH}" == "amd64" ]; then \
curl -sL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s v1.15.0; \
fi
curl -sL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s v1.36.0; \
fi; \
curl -sL "https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-${ARCH}" -o kind && install kind /usr/local/bin; \
curl -sLO "https://dl.k8s.io/release/v1.20.2/bin/linux/${ARCH}/kubectl" && install kubectl /usr/local/bin; \
curl -sL "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.9.2/kustomize_v3.9.2_linux_${ARCH}.tar.gz" | tar -zxv -C /usr/local/bin;
ENV DAPPER_ENV REPO TAG DRONE_TAG
ENV DAPPER_SOURCE /go/src/github.com/rancher/local-path-provisioner/
ENV DAPPER_OUTPUT ./bin ./dist
ENV DAPPER_DOCKER_SOCKET true
ENV DAPPER_RUN_ARGS --network=host
ENV HOME ${DAPPER_SOURCE}
WORKDIR ${DAPPER_SOURCE}
......
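The Dockerfile hunk above pins kind v0.10.0, kubectl v1.20.2, and kustomize v3.9.2, with each download URL parameterized on the build arch. A sketch that reconstructs those URLs outside the Dockerfile (here `ARCH` is set by hand as an assumption; in the Dockerfile it comes from the `DAPPER_HOST_ARCH` build arg):

```shell
# Rebuild the pinned tool URLs from the Dockerfile for a single arch.
# ARCH is hard-coded here for illustration; the Dockerfile derives it
# from the DAPPER_HOST_ARCH build argument.
ARCH=amd64
KIND_URL="https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-${ARCH}"
KUBECTL_URL="https://dl.k8s.io/release/v1.20.2/bin/linux/${ARCH}/kubectl"
KUSTOMIZE_URL="https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.9.2/kustomize_v3.9.2_linux_${ARCH}.tar.gz"
echo "$KIND_URL"
echo "$KUBECTL_URL"
echo "$KUSTOMIZE_URL"
```

Pinning exact versions keeps the dapper build image reproducible, so CI results do not drift when upstream cuts a new release.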
@@ -28,6 +28,11 @@ In this setup, the directory `/opt/local-path-provisioner` will be used across a
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```
Or, use `kustomize` to deploy.
```
kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=master" | kubectl apply -f -
```
After installation, you should see something like the following:
```
$ kubectl -n local-path-storage get pod
@@ -43,10 +48,14 @@ $ kubectl -n local-path-storage logs -f -l app=local-path-provisioner
## Usage
Create a `hostPath`-backed Persistent Volume and a pod that uses it:
```
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
```
Or, use `kustomize` to deploy them.
```
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod.yaml
kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl apply -f -
```
You should see that the PV has been created:
@@ -77,12 +86,12 @@ kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"
Now delete the pod using
```
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
```
After confirming that the pod is gone, recreate the pod using
```
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
```
Check the volume content:
@@ -93,8 +102,13 @@ local-path-test
Delete the pod and pvc
```
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
```
Or, use `kustomize` to delete them.
```
kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl delete -f -
```
The volume content stored on the node will be automatically cleaned up. You can check the log of `local-path-provisioner-xxx` for details.
......
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- local-path-storage.yaml
@@ -2,12 +2,14 @@ apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -26,6 +28,7 @@ rules:
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -39,6 +42,7 @@ subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
@@ -78,6 +82,7 @@ spec:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -86,6 +91,7 @@ metadata:
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
......
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../pvc
- pod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: local-path-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
......
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../pvc
- pod.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../pvc
- pod.yaml
@@ -2,7 +2,6 @@ apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
......
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../pvc
- pod.yaml
@@ -2,7 +2,6 @@ apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
......
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pvc.yaml
@@ -2,11 +2,10 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: local-path-pvc
namespace: default
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 2Gi
storage: 128Mi
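The example PVC request above drops from 2Gi to 128Mi, a lighter footprint for the e2e environment. Kubernetes `Mi`/`Gi` suffixes are binary units (powers of two), which a quick shell check confirms:

```shell
# Kubernetes 'Mi' is a binary unit: 1Mi = 2^20 bytes, 1Gi = 1024Mi.
echo $((128 * 1024 * 1024))   # 128Mi in bytes -> 134217728
echo $((2 * 1024))            # 2Gi expressed in Mi -> 2048
```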
@@ -2,8 +2,8 @@ apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
@@ -13,7 +13,6 @@ volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
......