...
In this guide we will use one of the most popular volume types in Kubernetes, the Network File System (NFS). It is a distributed file system protocol that allows a user on a client machine to access files over a network much as if they were on local storage. Note that the NFS server must already exist: Kubernetes does not run NFS, the Pods simply mount it. If you are setting up an NFS server for the first time, you can consult one of the many guides available on the web.
Static provisioning
Let's start with the simplest case, the static one. For convenience we create a folder containing only 3 .yaml files: one for the PV, one for the PVC and, finally, one for the application that will make use of the persisted data. After sharing the folder /home/share among the cluster nodes, we copy the following .yaml file
...
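For reference, an NFS-backed PV and a matching PVC typically take a shape along these lines (a minimal sketch: the server address, the path /home/share and the 1Gi capacity are assumptions to adapt to your cluster):

```yaml
# PersistentVolume exposing the shared NFS folder to the cluster
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100   # IP of the NFS server (illustrative)
    path: /home/share
---
# PersistentVolumeClaim that binds to the PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # empty string: opt out of dynamic provisioning
  resources:
    requests:
      storage: 1Gi
```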
Finally, try to modify the index.html file from inside the Pod and verify that the changes are reflected in the file on the node (and vice versa).
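The application itself can be as small as a single Pod that mounts the claim as its web root, which is the kind of setup the test above assumes (image and names are illustrative):

```yaml
# Pod mounting the PVC as the web root of an nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # index.html is served from here
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: nfs-pvc                   # the claim defined above
```

With such a Pod, editing /usr/share/nginx/html/index.html inside the container and /home/share/index.html on the node should show the same change on both sides.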
Dynamic provisioning
The major difference between dynamic provisioning and the static case is that the PV is replaced by the SC. So let's copy the .yaml file
...
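What exactly goes into this file depends on the provisioner installed in the cluster; assuming, for example, an external NFS provisioner such as the nfs-subdir-external-provisioner, a StorageClass sketch could look like this (name and parameters are illustrative):

```yaml
# StorageClass served by an external NFS provisioner
# (assumes the nfs-subdir-external-provisioner is already deployed)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # default name used by that project
parameters:
  archiveOnDelete: "false"   # do not archive data when the claim is deleted
reclaimPolicy: Delete
```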
Then, once the application is deployed, a PV is automatically generated (the system appends a hash code to the component name) with exactly the requested capacity. The PVC is now in the Bound status and reports, in the adjacent column, the PV to which it is connected. If we look inside the /home/share folder, we will find a new directory with the composite name <namespace>-<pvc_name>-pvc-<hash_code>. We insert a simple index.html file in this directory and perform the same checks carried out in the static case.
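For reference, the claim that triggers this behaviour differs from the static one only in that it names the class instead of pointing at a pre-created PV (a sketch; names and size are illustrative):

```yaml
# PVC requesting storage from the StorageClass: the PV is created on demand
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-dynamic-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-dynamic   # the class sketched above
  resources:
    requests:
      storage: 1Gi
```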
Limitations
In the "parametrization" paragraph of the Storage chapter, countless customization possibilities are listed, surmounted, however, by a warning: not all the parameters presented are supported by the various plugins. The limitations in the implementations presented here are multiple:
...
These limitations are the result of poor integration between the parties, due to the lack of a real storage provider: in practice, the cluster was reading and writing data directly on the machines' hard drives. In the next sub-chapter we will put into practice a procedure that gets around the problem without having to pay for a storage provider (AWSElasticBlockStore, AzureFile, GCEPersistentDisk, etc.). We will introduce a layer between the VM hard drive and Kubernetes that will act as a mediator. In this way we will gain access to a wider range of parameters, better suited to our storage needs, while still using the volumes of the VMs.