Going back to the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs, still accessed via NFS, and the Kubernetes cluster. As said previously, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This lets system administrators consolidate resources onto centralized servers on the network and, through the integration with Rook, gain greater control over Kubernetes storage-related parameters.
As a prerequisite, the NFS client packages must be installed on all nodes (nfs-utils on CentOS) where Kubernetes might run pods with NFS mounts. Apart from installing the packages, we don't have to go through the usual manual NFS steps: editing the /etc/exports file, mounting directories on client machines, and so on.
Deploy NFS Operator
First, deploy the Rook NFS operator using the following commands.
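An illustrative sketch of those commands, assuming the example manifests shipped in the Rook repository (the branch/tag is an assumption; pick the release you are actually using):

```
# Fetch the Rook example manifests (branch is an assumption; choose your release)
git clone --single-branch --branch release-1.7 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/nfs

# Register the NFS CRDs, then deploy the operator itself
kubectl create -f crds.yaml
kubectl create -f operator.yaml

# The operator pod should come up in the rook-nfs-system namespace
kubectl -n rook-nfs-system get pod
```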
Some preliminary steps
It is recommended that you create Pod Security Policies first. To do this, you can use the psp.yaml file already present in the folder with the usual command
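A sketch of that step, using the psp.yaml file mentioned above:

```
kubectl create -f psp.yaml
```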
Before we create the NFS Server, we need to create a ServiceAccount and RBAC rules.
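A hedged sketch of such a manifest, modeled on the rbac.yaml from the Rook NFS examples; the names and the exact rule list may differ slightly in your Rook version:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rook-nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-nfs-server
  namespace: rook-nfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rook-nfs-provisioner-runner
rules:
  # The provisioner creates/deletes PVs and watches PVCs and SCs
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: ["nfs.rook.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-nfs-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: rook-nfs-server
    namespace: rook-nfs
roleRef:
  kind: ClusterRole
  name: rook-nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```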
In this example we will walk through creating an NFS Server instance that exports storage backed by the default StorageClass (SC) of the environment you happen to be running in. In some environments this could be a hostPath; in others it could be a cloud provider virtual disk. Either way, this example requires a default SC to exist.
So let's create a simple SC (remember to activate the admission plugin --enable-admission-plugins=DefaultStorageClass in kube-apiserver.yaml), which will act as the default.
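A sketch of such a default SC, using the names that appear in the summary at the end of this section (sc/local-storage):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    # This annotation is what marks the SC as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
# No dynamic provisioner: PVs are created manually (see the PV below)
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```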
Next we create a reasonably large PV of type hostPath, based on the default SC.
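A sketch of that PV, with size and access mode taken from the summary at the end of this section; the hostPath directory is an assumption, adapt it to your VM:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi          # "large, without exaggerating"
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/local-pv    # assumed path on the node
```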
Create and Initialize NFS Server
Now that the operator is running, we can set up an instance of an NFS Server by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFS Server resource can be used to configure the server and the volumes it exports. With the nfs.yaml file, create the NFS Server as shown.
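A sketch of what nfs.yaml could look like, modeled on the NFSServer example from the Rook documentation; it contains both the PVC backed by the default SC and the server that exports it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-default-claim
  namespace: rook-nfs
spec:
  # No storageClassName: the default SC (local-storage) is used
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1                 # export name, referenced later by the SC
      server:
        accessMode: ReadWrite
        squash: "none"
      # The PVC that backs this export
      persistentVolumeClaim:
        claimName: nfs-default-claim
```

It is applied with the usual `kubectl create -f nfs.yaml`.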
We can verify that a Kubernetes object representing our new NFS Server and its export has been created with the command.
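For example (resource and namespace names as created above):

```
kubectl -n rook-nfs get nfsservers.nfs.rook.io
```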
Verify, afterwards, that the NFS Server pod is up and running. If the pod is in the Running state, we have successfully created an exported NFS share that clients can start to access over the network. In addition to the NFS Server part, the nfs.yaml file contains a few lines implementing a PVC, which hooks onto the default PV created previously. Verify that this PVC has been created in the rook-nfs namespace and that it is bound to the above PV.
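A quick sketch of those checks, assuming the namespace and claim names used above:

```
# NFS Server pod should be in the Running state
kubectl -n rook-nfs get pod

# The PVC created by nfs.yaml should be Bound to pv/local-pv
kubectl -n rook-nfs get pvc nfs-default-claim
```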
This paragraph closes the section on Rook. We now have storage, residing on one of the cluster machines, managed by the NFS server. From this point on we could pick up again from the Dynamic provisioning paragraph of the previous sub-chapter, forgetting (or almost) about Rook.
Accessing the Export
Since version v1.0, Rook supports dynamic provisioning of NFS. This example shows how the dynamic provisioning feature can be used with NFS. Once the NFS Operator and an instance of the NFS Server are deployed, an SC similar to sc.yaml has to be created to dynamically provision volumes.
Parameters necessary for the SC
The SC needs to have the following 3 parameters passed:
- exportName: tells the provisioner which export to use for provisioning the volumes;
- nfsServerName: name of the NFS Server instance;
- nfsServerNamespace: namespace where the NFS Server instance is running.
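Putting the three parameters together, a sketch of sc.yaml consistent with the names used so far (export share1 of the rook-nfs server in the rook-nfs namespace):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-nfs-share1
  labels:
    app: rook-nfs
parameters:
  exportName: share1             # the export defined in the NFSServer resource
  nfsServerName: rook-nfs        # name of the NFS Server instance
  nfsServerNamespace: rook-nfs   # namespace where it is running
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```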
Once you have created the SC above, you can create a PVC that references it. The PVC will automatically (dynamically) create the respective PV.
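A sketch of such a PVC, with the namespace and size taken from the summary below (myns is the user's namespace):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
  namespace: myns                  # the user's namespace
spec:
  storageClassName: rook-nfs-share1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi                # a small, dynamically provisioned volume
```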
Note that we have retraced the steps of the previous sub-chapter. The administrator (backend) creates an SC and makes it available to users (frontend). The user, through a PVC, exploits the SC to generate a PV for his purposes. If the user wants, he can of course generate new PVs with further PVCs and use them in his applications.
Summary
Let's try to summarize the steps carried out with the visual aid of the screen below. The operations carried out, in chronological order, are (use the AGE column as a reference):
- creation of a large default storage, through `sc/local-storage` and `pv/local-pv`;
- deployment of the NFS Server (`nfs.yaml`), which generates the `pvc/nfs-default-claim` linked to the `pv/local-pv`;
- the administrator creates `sc/rook-nfs-share1` with provisioner `rook-nfs-provisioner`;
- the user creates `pvc/rook-nfs-pv-claim`, which dynamically generates a small volume within its namespace;
- the user can create other volumes in the same or other namespaces.
```
# With this command you get SC, PV and PVC (of all namespaces)
$ kubectl get sc,pv,pvc -A
NAME                         PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
sc/local-storage (default)   kubernetes.io/no-provisioner       Delete          WaitForFirstConsumer   false                  60m
sc/rook-nfs-share1           nfs.rook.io/rook-nfs-provisioner   Delete          Immediate              false                  50m

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS      AGE
pv/local-pv                                   10Gi       RWX            Delete           Bound    rook-nfs/nfs-default-claim   local-storage     58m
pv/pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            Delete           Bound    myns/rook-nfs-pv-claim       rook-nfs-share1   40m
pv/pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            Retain           Bound    myns/rook-nfs-pv-claim2      rook-nfs-share1   30m

NAMESPACE   NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rook-nfs    pvc/nfs-default-claim    Bound    local-pv                                   10Gi       RWX            local-storage     56m
myns        pvc/rook-nfs-pv-claim    Bound    pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            rook-nfs-share1   40m
myns        pvc/rook-nfs-pv-claim2   Bound    pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            rook-nfs-share1   30m
```