...

Let's start with the simplest case, namely the static one. For convenience, we create a folder containing just three .yaml files: one for the PV, one for the PVC and, finally, one for the application that will make use of the persisted data. After sharing the folder /home/share between the cluster nodes, copy the following .yaml file

Code Block
languageyml
titlepv.yaml (static)
collapsetrue
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /home/share
    server: <Node_IP>	# Enter the IP of the node that shares the folder (usually the Master)

Next we will use an Nginx image to carry out our tests. Create a file in the shared folder, called index.html, containing a simple string such as "Hello from Kubernetes Storage!". At this point, let's deploy the PV. Note that the component's status is Available for now, since no claim is bound to it yet.
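For instance, assuming the manifest above was saved as pv.yaml, the PV can be created and inspected as follows (the AGE value in the output is illustrative):

```shell
# Create the PV defined above
$ kubectl apply -f pv.yaml
persistentvolume/mypv created

# No claim is bound to it yet, so its status is Available
$ kubectl get pv mypv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   100Mi      RWX            Retain           Available                                   5s
```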

...

Deploy the application, in the same namespace as the PVC, and verify that everything works correctly. To check it, you can run the curl command, followed by the IP of the service or Pod, or go directly to the folder specified in the container's mountPath (in this case /usr/share/nginx/html). So

Code Block
languagebash
titleVerify storage (static)
$ kubectl get all -n static -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE
pod/mystate-0   1/1     Running   0          13m   172.16.231.239   mycentos-1.novalocal
pod/mystate-1   1/1     Running   0          13m   172.16.94.103    mycentos-2.novalocal
pod/mystate-2   1/1     Running   0          13m   172.16.141.62    mycentos-ing.novalocal

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/mysvc   ClusterIP   10.98.99.55   <none>        80/TCP    13m   app=myapp

NAME                       READY   AGE   CONTAINERS    IMAGES
statefulset.apps/mystate   3/3     13m   mycontainer   nginx

# We use, for example, the IP of the service and of the pod/mystate-2
$ curl 10.98.99.55
Hello from Kubernetes Storage!
$ curl 172.16.141.62
Hello from Kubernetes Storage!

# Let's enter the pod/mystate-1 and go to the path indicated in the StatefulSet manifest, or run the "curl localhost" command
$ kubectl exec -it pod/mystate-1 -n static -- bash
root@mystate-1:/$ curl localhost
Hello from Kubernetes Storage!
root@mystate-1:/$ cat /usr/share/nginx/html/index.html
Hello from Kubernetes Storage!

Finally, try to modify the index.html file from inside the Pod and verify that the changes are reflected in the file on the node (and vice versa).
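A possible check, reusing the Pod names from the output above (the strings written to the file are of course arbitrary):

```shell
# Overwrite the file from inside a Pod (path taken from the container's mountPath)
$ kubectl exec pod/mystate-0 -n static -- \
    bash -c 'echo "Hello from the Pod!" > /usr/share/nginx/html/index.html'

# On the node that exports /home/share, the change is immediately visible
$ cat /home/share/index.html
Hello from the Pod!

# And the other way around: edit the file on the node...
$ echo "Hello from the node!" > /home/share/index.html
# ...and read it back from any Pod
$ kubectl exec pod/mystate-1 -n static -- cat /usr/share/nginx/html/index.html
Hello from the node!
```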

...

The major difference between dynamic provisioning and the static case is that the PV is replaced by the SC. So let's copy the .yaml file

Code Block
languageyml
titlesc.yaml (dynamic)
collapsetrue
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: false

...

Unfortunately, NFS doesn't provide an internal provisioner, but an external provisioner can be used. For this purpose, we will implement the following two .yaml files. The first one creates the service account, together with the Role, ClusterRole, and the corresponding bindings within the Kubernetes cluster.
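As an alternative to maintaining these manifests by hand, a widely used external NFS provisioner is the kubernetes-sigs nfs-subdir-external-provisioner, which can be installed with Helm; the `nfs.server` and `nfs.path` values below are placeholders to be replaced with your own NFS export:

```shell
# Add the chart repository of the kubernetes-sigs NFS provisioner
$ helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

# Install it, pointing it at the node exporting /home/share
$ helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=<Node_IP> \
    --set nfs.path=/home/share
```

Note that, whichever route you take, the `provisioner` field of the StorageClass must match the provisioner name used by the deployed provisioner.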

...

Then, after the application is deployed, a PV is automatically generated (the system appends a hash code to the component name) with exactly the requested capacity. The PVC is now in the Bound status and reports, in the adjacent column, the PV to which it is connected. If we check the /home/share folder, we will find a new directory with the composite name <namespace>-<pvc_name>-pvc-<hash_code>. Insert a simple index.html file in this directory and perform the same checks carried out in the static case.
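These checks can be sketched as follows (the namespace name `dynamic` is an assumption; adapt it to the one used for your PVC):

```shell
# The PVC should be Bound, and a PV named pvc-<hash_code> should exist
$ kubectl get pv,pvc -n dynamic

# On the NFS node, the provisioner created one directory per claim,
# named <namespace>-<pvc_name>-pvc-<hash_code>
$ ls /home/share
```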

...

  • ACCESSMODES: the option chosen is irrelevant, because the data access mode is actually controlled by the NFS server (in particular by the /etc/exports file);

  • RECLAIMPOLICY: the Retain policy is applied, even if you have chosen Delete;

  • VOLUMEBINDINGMODE: regardless of the choice, the creation of the first Pod is awaited before a PV is created (only in the dynamic case);

  • ALLOWVOLUMEEXPANSION: volume expansion is not allowed, even if you set it to true (only in the dynamic case).
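As an illustration of the first point, the permissions that actually apply are the ones granted by the NFS export; the export options shown here are just a common example, not a recommendation:

```shell
# Example line in /etc/exports on the NFS server: the rw flag is what really
# grants write access, regardless of the accessModes declared in the PV/PVC
#
#   /home/share  *(rw,sync,no_root_squash)

# Reload the exports after editing the file
$ exportfs -ra
```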

...