...
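Before starting, the folder /home/share must be exported via NFS from one of the nodes (usually the Master). A minimal sketch of that setup follows; the subnet and the export options are assumptions to adapt to your own cluster network.

```shell
# On the node that will act as NFS server (subnet is hypothetical — adjust it):
sudo mkdir -p /home/share
echo '/home/share 192.168.1.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra                        # re-export every entry in /etc/exports
sudo systemctl enable --now nfs-server   # make sure the NFS server is running
```

The clients (the other cluster nodes) only need the NFS client packages installed, since the kubelet performs the mount itself.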
Let's start with the simplest case, namely the static one. For convenience, we create a folder in which we will place only three .yaml files: one for the PV, one for the PVC and, finally, one for the application that will exploit the persistence of the data. After sharing the folder /home/share between the cluster nodes, we copy the following .yaml file
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
apiVersion: v1
kind: PersistentVolume
metadata:
name: mypv
spec:
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /home/share
server: <Node_IP> # Enter the IP of the node that shares the folder (usually the Master) |
Next we will use an Nginx image to carry out our tests. We can then create a file in the shared folder, called index.html, containing a simple string like "Hello from Kubernetes Storage!". At this point, let's deploy the PV. Note that the PV's status is Available for now.
...
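The claim that will bind to the PV above can be sketched as follows; this is an assumed pvc.yaml, reconstructed so that its name, capacity and access mode match the transcript below (the empty storageClassName string prevents any dynamic provisioning from kicking in, so the claim can only bind to a pre-created PV).

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: ""   # empty string: bind only to a pre-created PV
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany      # must be compatible with the PV's access modes
  resources:
    requests:
      storage: 100Mi     # must not exceed the PV's capacity
```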
Before proceeding with the apply of the file, we need to create a namespace. PVs are not namespaced (like apiservices, nodes and the namespaces themselves), because they are created and made available at the cluster level. The PVC, on the other hand, is a request for storage made by a user, who certainly does not have administrator (backend) scope, and therefore needs a namespace. So
| Code Block | ||||
|---|---|---|---|---|
| ||||
$ kubectl create ns static
namespace/static created
$ kubectl apply -f pvc.yaml -n static
persistentvolumeclaim/mypvc created
$ kubectl get pvc -n static
NAMESPACE   NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
static      mypvc   Bound    mypv     100Mi      RWX                           2m
# Let's display the PV again. We note that, after binding, its status has changed to Bound and the CLAIM column reports the PVC
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
mypv   100Mi      RWX            Retain           Bound    static/mypvc                           5m |
Now all we have to do is implement an application that takes advantage of the infrastructure we built. For testing purposes we could also use a simple Pod, but here we'll make use of a StatefulSet: a workload API object used to manage, precisely, stateful applications (suited to our case). We therefore copy
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mystate
spec:
serviceName: mysvc
selector:
matchLabels:
app: myapp
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: mycontainer
image: nginx
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 80
volumeMounts:
- name: mydata
mountPath: /usr/share/nginx/html # location of basic static site in Nginx
volumes:
- name: mydata
persistentVolumeClaim:
claimName: mypvc # Note the reference to the Claim
---
apiVersion: v1
kind: Service
metadata:
name: mysvc
labels:
app: myapp
spec:
selector:
app: myapp
ports:
- protocol: TCP
name: http
port: 80
targetPort: 80 |
Deploy the application in the same namespace as the PVC and verify that everything works correctly. To check, you can run the curl command followed by the IP of the Service or of a Pod, or go directly to the folder specified in the container's mountPath (in this case /usr/share/nginx/html). So
| Code Block | ||||
|---|---|---|---|---|
| ||||
$ kubectl get all -n static -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE
pod/mystate-0   1/1     Running   0          13m   172.16.231.239   mycentos-1.novalocal
pod/mystate-1   1/1     Running   0          13m   172.16.94.103    mycentos-2.novalocal
pod/mystate-2   1/1     Running   0          13m   172.16.141.62    mycentos-ing.novalocal

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/mysvc   ClusterIP   10.98.99.55   <none>        80/TCP    13m   app=myapp

NAME                       READY   AGE   CONTAINERS    IMAGES
statefulset.apps/mystate   3/3     13m   mycontainer   nginx

# We use, for example, the IP of the service and of pod/mystate-2
$ curl 10.98.99.55
Hello from Kubernetes Storage!
$ curl 172.16.141.62
Hello from Kubernetes Storage!

# Let's enter pod/mystate-1 and go to the path indicated in the StatefulSet manifest, or run the "curl localhost" command
$ kubectl exec -it pod/mystate-1 -n static -- bash
root@mystate-1:/$ curl localhost
Hello from Kubernetes Storage!
root@mystate-1:/$ cat /usr/share/nginx/html/index.html
Hello from Kubernetes Storage! |
Finally, try modifying the index.html file from inside a Pod and verify that the changes are reflected in the file on the node (and vice versa).
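One possible sketch of this check, reusing the Pod name and paths from the listing above (the replacement string is of course arbitrary):

```shell
# Overwrite the page from inside one of the Pods...
kubectl exec -n static pod/mystate-0 -- \
  bash -c 'echo "Hello from inside the Pod!" > /usr/share/nginx/html/index.html'

# ...then read the file back on the node that exports /home/share:
cat /home/share/index.html

# The reverse direction works the same way: edit /home/share/index.html on
# the node and curl any of the Pods (or the Service) to see the new content.
```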
...
The major difference between dynamic provisioning and the previous case is that the PV is replaced by the SC (StorageClass). So let's copy the .yaml file
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysc
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: false |
...
Unfortunately, NFS doesn't provide an internal provisioner, but an external provisioner can be used. For this purpose, we will implement the following two .yaml files. The first (rbac.yaml) is related to the service account: it creates the service account itself, together with the cluster role, the role and the corresponding bindings within the Kubernetes cluster.
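The second file (provisioner.yaml) deploys the external provisioner itself. A sketch of what it might look like follows, modeled on the upstream nfs-subdir-external-provisioner example; the image tag and the container name are assumptions, while the PROVISIONER_NAME value must match the provisioner field of the StorageClass above, and the server/path must match the NFS share.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate            # avoid two provisioners running during updates
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # created by rbac.yaml
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs              # must match "provisioner" in the SC
            - name: NFS_SERVER
              value: <Node_IP>        # IP of the node sharing the folder
            - name: NFS_PATH
              value: /home/share
      volumes:
        - name: nfs-client-root
          nfs:
            server: <Node_IP>
            path: /home/share
```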
...
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
$ kubectl create ns dynamic
namespace/dynamic created
$ kubectl apply -f rbac.yaml -f provisioner.yaml -n dynamic
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner configured
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created |
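In the dynamic case, the PVC references the StorageClass instead of a pre-created PV. A sketch of such a claim is shown below; the name and the requested capacity are assumptions, while the storageClassName matches the SC defined earlier.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: mysc   # the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi       # the PV will be provisioned with this capacity
```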
...
Then, once the application is deployed, a PV is automatically generated (the system appends a hash to the end of the component name) with exactly the requested capacity. The PVC is now in the Bound status and reports, in the adjacent column, the PV to which it is bound. If we look in the /home/share folder, we will find a new directory with the composite name <namespace>-<pvc_name>-pvc-<hash_code>. We insert a simple index.html file in this directory and perform the same checks carried out in the static case.
...
In the "parametrization" paragraph of the Storage chapter, countless customization possibilities are listed, accompanied, however, by a warning: not all the parameters presented are supported by the various plugins. The implementations presented here have several limitations:

- `accessModes`: the option chosen is irrelevant, because the data access mode is actually controlled by NFS (in particular by the `/etc/exports` file);
- `reclaimPolicy`: the `Retain` policy is applied, even if you have chosen `Delete`;
- `volumeBindingMode`: regardless of the choice, the PV is created only after the first Pod is scheduled (only in the dynamic case);
- `allowVolumeExpansion`: volume expansion is not allowed, even if you set it to `true` (only in the dynamic case).
These limitations are the result of poor integration between the parts, due to the lack of a real storage provider: in practice, the cluster was reading and writing data directly on the nodes' hard drives. In the next sub-chapter we will put into practice a procedure that lets us work around the problem without having to pay for a managed storage provider (AWSElasticBlockStore, AzureFile, GCEPersistentDisk, etc.). We will introduce a layer between the VM hard drive and Kubernetes that acts as a mediator. In this way we will gain access to a wider range of parameters, better able to meet our storage needs, while still using the volumes of the VMs.