Going back to the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs, which are always exposed via NFS, and the Kubernetes cluster. As said previously, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This lets system administrators consolidate resources onto centralized servers on the network and, through the integration with Rook, gain greater control over Kubernetes storage-related parameters.
As a prerequisite, the NFS client packages must be installed on all nodes where Kubernetes might run pods with NFS mounted (nfs-utils on CentOS). Apart from installing these packages, we don't have to go through the usual steps required when using NFS directly (i.e. editing the /etc/exports file, mounting directories on client machines, etc.).
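As a sketch, the client packages can be installed with the node's package manager (package names differ per distro: nfs-utils on CentOS/RHEL, nfs-common on Debian/Ubuntu; run as root):

```shell
# Install the NFS client on a node (run as root). Nothing needs to be
# exported or mounted manually -- Rook will drive the NFS server for us.
if command -v yum >/dev/null 2>&1; then
  yum install -y nfs-utils          # CentOS / RHEL
elif command -v apt-get >/dev/null 2>&1; then
  apt-get install -y nfs-common     # Debian / Ubuntu
fi

# Quick sanity check: the NFS mount helper should now be available
command -v mount.nfs && echo "NFS client ready"
```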
Deploy NFS Operator
First, deploy the Rook NFS operator using the following commands on the control-plane node:
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
# Clone the repository and enter the directory that we will use throughout the guide
$ git clone --single-branch --branch v1.7.3 https://github.com/rook/nfs.git
$ cd nfs/cluster/examples/kubernetes/nfs

# Then launch (the operator is created in the "rook-nfs-system" namespace)
$ kubectl create -f crds.yaml -f operator.yaml

# Check if the operator is up and running
$ kubectl get pod -n rook-nfs-system
NAME                                READY   STATUS    RESTARTS   AGE
rook-nfs-operator-f79889845-8r5kq   1/1     Running   0          11m |
...
It is recommended that you first create Pod Security Policies (reference). To do this, you can use the psp.yaml file already present in the folder, with the usual command:
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
$ kubectl create -f psp.yaml
podsecuritypolicy.policy/rook-nfs-policy created

# To get it
$ kubectl get psp
NAME              PRIV   CAPS                           SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
rook-nfs-policy   true   DAC_READ_SEARCH,SYS_RESOURCE   RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,hostPath |
Before we create the NFS Server, we need to create a ServiceAccount and RBAC rules:
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
$ kubectl create -f rbac.yaml
namespace/rook-nfs created
serviceaccount/rook-nfs-server created
clusterrole.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created |
In this example we will walk through creating an NFS Server instance that exports storage backed by the default StorageClass (SC) for the environment you happen to be running in. In some environments this could be a hostPath, in others a cloud provider virtual disk. Either way, this example requires a default SC to exist.
So let's create a simple SC (remember to activate the admission plugin --enable-admission-plugins=DefaultStorageClass in kube-apiserver.yaml), which will act as the default:
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer |
| Info | ||
|---|---|---|
| ||
Pod Security Policies (PSP) enable fine-grained authorization of Pod creation and updates. It is a cluster-level resource that controls security sensitive aspects of the Pod specification. The PSP objects define a set of conditions that a Pod must run with in order to be accepted into the system, as well as defaults for the related fields. |
Next we create a "large" PV (without exaggerating) of type hostPath (reference), based on the default SC
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage # Same as the name of the default SC created
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/data # If it does not exist, the "data" folder will be created automatically on the node where the NFS Server pod is up and running
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node_name> # Enter the node name (obtainable with "kubectl get node") |
As can be seen from the last lines of the previous file, it is possible to choose which node to draw the storage from, thanks to nodeAffinity. If this parameter is omitted, the cluster chooses. Kubernetes usually does not allow you to use the master for this purpose and, in any case, it is not good architectural practice. The ideal is to create a new VM, with a generous hard disk and modest RAM/CPU, and join it to the cluster. This node should only be used for data archiving, while at the same time preventing workloads from running on it. To obtain this result it is sufficient to add a taint on the node (reference).
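As a sketch of the tainting step (the node name storage-node is a hypothetical placeholder), the taint is applied once, and any pod that should still land on that node, such as the NFS Server pod, must carry a matching toleration:

```shell
# Repel ordinary workloads from the dedicated storage node
# ("storage-node" is a placeholder -- use your node's real name,
# obtainable with "kubectl get node")
kubectl taint nodes storage-node role=storage:NoSchedule

# Pods that SHOULD run there (e.g. the NFS Server) need a matching
# toleration in their spec:
#   tolerations:
#   - key: "role"
#     operator: "Equal"
#     value: "storage"
#     effect: "NoSchedule"
```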
Create and Initialize NFS Server
...
Afterwards, verify that the NFS Server pod is up and running. If the NFS Server pod is in the Running state, then we have successfully created an exported NFS share that clients can start to access over the network. Inside the nfs.yaml file there are, in addition to the NFS Server part, some lines implementing a PVC, which binds to the default PV created previously. Verify that the PVC has been created in the rook-nfs namespace and that it is bound to the above PV.
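The two checks just described boil down to a couple of commands (names as used throughout this guide):

```shell
# The NFS Server pod should be in the Running state in the rook-nfs namespace
kubectl get pod -n rook-nfs

# The PVC created by nfs.yaml should be Bound to the local-pv PV
kubectl get pvc -n rook-nfs
```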
...
| Code Block | ||||
|---|---|---|---|---|
| ||||
# With this command you get SC, PV and PVC (of all namespaces)
$ kubectl get sc,pv,pvc -A
NAME                         PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
sc/local-storage (default)   kubernetes.io/no-provisioner       Delete          WaitForFirstConsumer   false                  60m
sc/rook-nfs-share1           nfs.rook.io/rook-nfs-provisioner   Delete          Immediate              false                  50m

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS      AGE
pv/local-pv                                   10Gi       RWX            Delete           Bound    rook-nfs/nfs-default-claim   local-storage     58m
pv/pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            Delete           Bound    myns/rook-nfs-pv-claim       rook-nfs-share1   40m
pv/pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            Retain           Bound    myns/rook-nfs-pv-claim2      rook-nfs-share1   30m

NAMESPACE   NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rook-nfs    pvc/nfs-default-claim    Bound    local-pv                                   10Gi       RWX            local-storage     56m
myns        pvc/rook-nfs-pv-claim    Bound    pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            rook-nfs-share1   40m
myns        pvc/rook-nfs-pv-claim2   Bound    pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            rook-nfs-share1   30m |
The NFS Server provides two access modes (set in nfs.yaml): ReadWrite and ReadOnly. Here the first mode was used, but of course the second can also be used. You can even use both modes by deploying two NFS Servers, but to make them coexist you need to create different namespaces and service accounts, so you have to start again from the rbac.yaml file.
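For completeness, a claim against the exported share looks like the ones visible in the listing above (a minimal sketch; the myns namespace, the rook-nfs-share1 SC name and the 10Mi size are taken from this guide's own output):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
  namespace: myns
spec:
  storageClassName: rook-nfs-share1  # SC created by the NFS Server (nfs.yaml)
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
```

Once this PVC is Bound, any pod in myns can mount it like an ordinary volume, with the NFS details handled by the Rook provisioner.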