...
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy pvc.yaml (static) |
|---|
|
$ kubectl create ns mynsstatic
namespace/mynsstatic created
$ kubectl apply -f pvc.yaml -n mynsstatic
persistentvolumeclaim/mypvc created
$ kubectl get pvc -n mynsstatic
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    mypv     100Mi      RWX                           2m
# Display the PV again: now that it is bound to the PVC, its STATUS has changed to Bound and the CLAIM column references mynsstatic/mypvc
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
mypv   100Mi      RWX            Retain           Bound    mynsstatic/mypvc                           5m |
All that remains is to deploy an application that takes advantage of the infrastructure we have built. For testing purposes a simple Pod would suffice, but here we'll use a StatefulSet: the workload API object designed to manage stateful applications, which is exactly our case. We therefore copy
| Code Block |
|---|
| language | yml |
|---|
| title | application.yaml |
|---|
| collapse | true |
|---|
|
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate
spec:
  serviceName: mysvc
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: mydata
          mountPath: /usr/share/nginx/html # location of basic static site in Nginx
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: mypvc # Note the reference to the Claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80 |
Deploy the application in the same namespace as the PVC and verify that everything works correctly. To check, run curl against the IP of the Service or of a Pod, or inspect the folder specified in the container's mountPath (in this case /usr/share/nginx/html).
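For completeness, the deploy step itself can look like the sketch below (assuming the manifest above was saved as application.yaml; the file name is our choice):
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy application.yaml (static) |
|---|
|
$ kubectl apply -f application.yaml -n mynsstatic
statefulset.apps/mystate created
service/mysvc created |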
| Code Block |
|---|
| language | bash |
|---|
| title | Verify storage (static) |
|---|
|
$ kubectl get all -n mynsstatic -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE
pod/mystate-0   1/1     Running   0          13m   172.16.231.239   mycentos-1.novalocal
pod/mystate-1   1/1     Running   0          13m   172.16.94.103    mycentos-2.novalocal
pod/mystate-2   1/1     Running   0          13m   172.16.141.62    mycentos-ing.novalocal

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/mysvc   ClusterIP   10.98.99.55   <none>        80/TCP    13m   app=myapp

NAME                       READY   AGE   CONTAINERS    IMAGES
statefulset.apps/mystate   3/3     13m   mycontainer   nginx
# We use, for example, the IP of the service and of the pod/mystate-2
$ curl 10.98.99.55
Hello from Kubernetes Storage!
$ curl 172.16.141.62
Hello from Kubernetes Storage!
# Enter pod/mystate-1, then run "curl localhost" or inspect the path indicated in the StatefulSet manifest
$ kubectl exec -it pod/mystate-1 -n mynsstatic -- bash
root@mystate-1:/usr/share/nginx/html# curl localhost
Hello from Kubernetes Storage!
root@mystate-1:/usr/share/nginx/html# cd /usr/share/nginx/html/; cat index.html
Hello from Kubernetes Storage! |
Finally, try modifying the index.html file from inside the Pod and verify that the change is reflected in the file on the node (and vice versa).
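A minimal sketch of this test, assuming the PV points to a directory such as /home/share on the node that exposes the share (adapt the path to your own PV definition):
| Code Block |
|---|
| language | bash |
|---|
| title | Test write-through (static) |
|---|
|
# Overwrite the file from inside the Pod...
$ kubectl exec -it pod/mystate-0 -n mynsstatic -- bash -c 'echo "Changed from the Pod" > /usr/share/nginx/html/index.html'
# ...then read it back on the node that exposes the shared folder
$ cat /home/share/index.html
Changed from the Pod |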
Dynamic provisioning
The major difference from static provisioning is that the manually created PV is replaced by the SC, from which PVs are then created on demand. So let's copy the .yaml file
| Code Block |
|---|
| language | yml |
|---|
| title | sc.yaml (dynamic) |
|---|
| collapse | true |
|---|
|
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: false |
Deploy and view the newly created component.
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy sc.yaml (dynamic) |
|---|
|
$ kubectl apply -f sc.yaml
storageclass.storage.k8s.io/mysc created
$ kubectl get sc
NAME             PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
mysc (default)   nfs           Retain          WaitForFirstConsumer   false                  2s |
Unfortunately, NFS doesn't provide an internal provisioner, but an external one can be used. For this purpose, we will apply the following two .yaml files. The first one creates the ServiceAccount, together with the ClusterRole, Role, and related bindings that the provisioner needs inside the Kubernetes cluster.
| Code Block |
|---|
| language | yml |
|---|
| title | rbac.yaml (dynamic) |
|---|
| collapse | true |
|---|
|
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: dynamic # Customize the namespace
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: dynamic # Customize the namespace
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io |
The second file deploys the automatic provisioner itself, which uses your existing, already configured NFS server to support dynamic provisioning in Kubernetes.
| Code Block |
|---|
| language | yml |
|---|
| title | clientProvisioner.yaml (dynamic) |
|---|
| collapse | true |
|---|
|
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: myvol
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs
        - name: NFS_SERVER
          value: <Node_IP> # Enter the IP of the node that shares the folder (usually the Master)
        - name: NFS_PATH
          value: /home/share
      volumes:
      - name: myvol
        nfs:
          server: <Node_IP> # Enter the IP of the node that shares the folder (usually the Master)
          path: /home/share |
Now we can deploy but, as already seen with static provisioning, we must first create a new namespace
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy rbac.yaml & clientProvisioner.yaml |
|---|
|
$ kubectl create ns dynamic
namespace/dynamic created
$ kubectl apply -f rbac.yaml -f clientProvisioner.yaml -n dynamic
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner configured
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created |
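Before creating any PVC against the new StorageClass, it's worth checking that the provisioner Pod is actually running; a quick sketch (the Pod name suffix below is a placeholder generated by the Deployment):
| Code Block |
|---|
| language | bash |
|---|
| title | Verify the provisioner (dynamic) |
|---|
|
$ kubectl get pods -n dynamic
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-xxxxxxxxxx-xxxxx   1/1     Running   0          1m |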