Returning to the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs, exposed via NFS, and the Kubernetes cluster. As mentioned earlier, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. As a prerequisite, the NFS client packages (nfs-utils on CentOS) must be installed on every node where Kubernetes might schedule pods that mount NFS volumes. The official guide can be found here.
Deploy NFS Operator
First, deploy the Rook NFS operator using the following commands
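As a sketch, the operator can be deployed from the example manifests in the Rook NFS repository; the branch/tag shown here is an assumption, so substitute the release you are actually using:

```shell
# Clone the Rook NFS repository (branch v1.7.3 is an assumption; pick your release)
git clone --single-branch --branch v1.7.3 https://github.com/rook/nfs.git
cd nfs/cluster/examples/kubernetes/nfs

# Create the custom resource definitions and the operator deployment
kubectl create -f crds.yaml
kubectl create -f operator.yaml

# Verify that the operator pod is running
kubectl -n rook-nfs-system get pod
```

Once the operator pod reports `Running`, the cluster is ready for the preliminary steps below.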
Some preliminary steps
It is recommended that you create Pod Security Policies first. To do this, you can use the psp.yaml file already present in the folder, applied with the usual command
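For orientation, the psp.yaml shipped with the examples looks roughly like the following sketch; the exact field values may differ, so treat this as an illustration and consult the file in your checkout:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-nfs-policy   # name as an illustrative assumption
spec:
  privileged: true        # the NFS server pod needs elevated privileges
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - configMap
    - emptyDir
    - persistentVolumeClaim
    - secret
    - hostPath
```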
Before we create the NFS server, we need to create a ServiceAccount and RBAC rules
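A minimal sketch of the required objects is shown below, modeled on the rbac.yaml from the Rook NFS examples; the namespace and names follow the upstream examples but are assumptions here, and the rule list is abbreviated:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rook-nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-nfs-server
  namespace: rook-nfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rook-nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: ["nfs.rook.io"]
    resources: ["nfsservers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-nfs-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: rook-nfs-server
    namespace: rook-nfs
roleRef:
  kind: ClusterRole
  name: rook-nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```

The ServiceAccount is what the NFS server pods run as, so the ClusterRoleBinding must reference it from the same namespace where the server will be created.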
In this example we will walk through creating an NFS server instance that exports storage backed by the default StorageClass (SC) of the environment you happen to be running in. In some environments this could be a hostPath; in others, a cloud provider virtual disk. Either way, this example requires a default SC to exist.
So let's create a simple StorageClass (remember to enable the admission plugin with --enable-admission-plugins=DefaultStorageClass in kube-apiserver.yaml), which will act as the default
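A minimal sketch of such a default StorageClass follows; the name and the choice of a no-op provisioner (appropriate for manually created hostPath PVs) are assumptions for this walkthrough:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard   # hypothetical name
  annotations:
    # this annotation is what marks the class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
# no dynamic provisioner: PVs of this class are created by hand (hostPath)
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```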
Next, we create a "large" PV of type hostPath, based on the default SC
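A sketch of such a PV is shown below; the name, size, path, and the StorageClass name `standard` are assumptions (the storageClassName must match whatever your default SC is actually called):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-backing-pv           # hypothetical name
spec:
  storageClassName: standard     # assumption: the default SC created earlier
  capacity:
    storage: 50Gi                # "large" is relative; size as needed
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/nfs-backing       # assumption: an existing directory on the node
```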
Create and Initialize NFS Server
Now that the operator is running, we can set up an NFS server instance by creating an instance of the nfsservers.nfs.rook.io custom resource. The various fields and options of the NFS server resource can be used to configure the server and the volumes it exports.
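As a sketch, a PVC against the default StorageClass provides the backing storage, and the NFSServer resource exports it; the names and the export settings below follow the upstream Rook NFS examples but should be treated as assumptions:

```yaml
# Claim backing storage from the default StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-default-claim
  namespace: rook-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# The NFS server instance managed by the Rook NFS operator
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1                  # export name as seen by NFS clients
      server:
        accessMode: ReadWrite
        squash: "none"
      persistentVolumeClaim:
        claimName: nfs-default-claim  # must match the PVC above
```

Each entry under `exports` maps one PVC to one NFS export, so additional shares can be added by listing more export entries backed by their own claims.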