Going back to the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs and the Kubernetes cluster, still relying on NFS underneath. As said previously, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network, while the integration with Rook gives greater control over Kubernetes storage-related parameters.
As a prerequisite, NFS client packages must be installed on all nodes where Kubernetes might run pods with NFS mounted (nfs-utils on CentOS). Apart from installing the packages, we do not have to go through the usual steps required when using NFS (i.e. editing the /etc/exports file, mounting directories on client machines, etc.).
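For example, the client packages can be installed as follows (the exact package name depends on the distribution; adjust to your environment):

```
# CentOS / RHEL
$ sudo yum install -y nfs-utils

# Debian / Ubuntu (the equivalent package is called nfs-common)
$ sudo apt-get install -y nfs-common
```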
Deploy NFS Operator
First, deploy the Rook NFS operator by running the following commands on the control plane:
```
# Clone the repository and enter the directory that we will use throughout the guide
$ git clone --single-branch --branch v1.7.3 https://github.com/rook/nfs.git
$ cd nfs/cluster/examples/kubernetes/nfs

# Then launch (the operator is created in the "rook-nfs-system" namespace)
$ kubectl create -f crds.yaml -f operator.yaml

# Check if the operator is up and running
$ kubectl get pod -n rook-nfs-system
NAME                                READY   STATUS    RESTARTS   AGE
rook-nfs-operator-f79889845-8r5kq   1/1     Running   0          11m
```
The logs produced by the operator can be very useful for troubleshooting:

```
$ kubectl logs -l app=rook-nfs-operator -n rook-nfs-system
```
Some Preliminary Steps
It is recommended that you first create Pod Security Policies (reference). To do this, you can use the psp.yaml file already present in the folder, with the usual command:
```
$ kubectl create -f psp.yaml
podsecuritypolicy.policy/rook-nfs-policy created

# To get it
$ kubectl get psp
NAME              PRIV   CAPS                           SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
rook-nfs-policy   true   DAC_READ_SEARCH,SYS_RESOURCE   RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,hostPath
```
> **Info**: Pod Security Policies (PSP) enable fine-grained authorization of Pod creation and updates. A PSP is a cluster-level resource that controls security-sensitive aspects of the Pod specification. PSP objects define a set of conditions that a Pod must run with in order to be accepted into the system, as well as defaults for the related fields.
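For reference, a minimal policy consistent with the fields listed by kubectl get psp above might look like the sketch below; the psp.yaml shipped with the repository is the authoritative version:

```
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-nfs-policy
spec:
  privileged: true
  allowedCapabilities:
    - DAC_READ_SEARCH
    - SYS_RESOURCE
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  readOnlyRootFilesystem: false
  volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - secret
    - hostPath
```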
Before we create the NFS Server, we need to create a ServiceAccount and RBAC rules.
...
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage # Same as the name of the default SC created
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data" # If the "data" folder does not exist, it will be created automatically on the node where the NFS Server pod is up and running
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node_name> # Enter the node name (obtainable with "kubectl get node")
```
As can be seen from the last lines of the previous file, nodeAffinity makes it possible to choose which node the storage is drawn from. If this parameter is omitted, the cluster chooses. Kubernetes usually does not allow you to use the master for this purpose and, in any case, it is not good architectural practice. The ideal is to create a new VM with a generous hard disk and modest RAM/CPU, and join it to the cluster. This node should be used only for data storage, while at the same time preventing other workloads from running on it. To obtain this result, it is sufficient to add a taint on the node (reference).
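For example, a taint along these lines keeps ordinary workloads off the storage node (the key/value pair is an arbitrary illustrative choice; only pods with a matching toleration will be scheduled there):

```
# Replace <node_name> with the name of the dedicated storage node
$ kubectl taint nodes <node_name> dedicated=storage:NoSchedule
```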
Create and Initialize NFS Server
...
Afterwards, verify that the NFS Server pod is up and running. If the NFS Server pod is in the Running state, we have successfully created an exported NFS share that clients can start to access over the network. Inside the nfs.yaml file there are, in addition to the NFS Server part, some lines implementing a PVC, which binds to the default PV created previously. Verify that the PVC has been created in the rook-nfs namespace and that it is bound to the above PV.
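For example (assuming the default rook-nfs namespace used throughout this guide):

```
# The NFS Server pod should be in the Running state
$ kubectl get pod -n rook-nfs

# The PVC defined in nfs.yaml should be Bound to the local PV
$ kubectl get pvc -n rook-nfs
```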
...
Note that we have retraced the steps made in the previous sub-chapter. The administrator (backend) creates a SC and makes it available to users (frontend). The user, through a PVC, exploits the SC to generate a PV for their own purposes. Users can of course generate new PVs, through further PVCs, and use them in their applications.
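As an illustration, a user-side PVC along these lines would produce the pvc/rook-nfs-pv-claim shown in the summary below (the name, namespace and size are taken from that output; treat this as a sketch rather than the exact file shipped with the repository):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
  namespace: myns
spec:
  storageClassName: rook-nfs-share1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
```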
Summary
Let's try to summarize the steps carried out, with the visual aid of the output below. The operations carried out, in chronological order, are (use the AGE column as a reference):
- creation of a large default storage, through `sc/local-storage` and `pv/local-pv`;
- deployment of the NFS Server (`nfs.yaml`), which generates the `pvc/nfs-default-claim` linked to the `pv/local-pv`;
- the administrator creates `sc/rook-nfs-share1` with provisioner `rook-nfs-provisioner`;
- the user creates `pvc/rook-nfs-pv-claim`, which dynamically generates a small volume within its namespace;
- the user can create other volumes in the same or other namespaces.
```
# With this command you get SC, PV and PVC (of all namespaces)
$ kubectl get sc,pv,pvc -A
NAME                         PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
sc/local-storage (default)   kubernetes.io/no-provisioner       Delete          WaitForFirstConsumer   false                  60m
sc/rook-nfs-share1           nfs.rook.io/rook-nfs-provisioner   Delete          Immediate              false                  50m

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS      AGE
pv/local-pv                                   10Gi       RWX            Delete           Bound    rook-nfs/nfs-default-claim   local-storage     58m
pv/pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            Delete           Bound    myns/rook-nfs-pv-claim       rook-nfs-share1   40m
pv/pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            Retain           Bound    myns/rook-nfs-pv-claim2      rook-nfs-share1   30m

NAMESPACE   NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rook-nfs    pvc/nfs-default-claim    Bound    local-pv                                   10Gi       RWX            local-storage     56m
myns        pvc/rook-nfs-pv-claim    Bound    pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            rook-nfs-share1   40m
myns        pvc/rook-nfs-pv-claim2   Bound    pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            rook-nfs-share1   30m
```
The NFS Server provides two access modes (ACCESSMODE in nfs.yaml): ReadWrite (RWX) and ReadOnly (ROX). Here the first mode was used, but of course the second can be used as well. You can even use both modes, by implementing two NFS Servers; to make them coexist, however, you need to create different namespaces and service accounts, so you have to start again from the rbac.yaml file.
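For reference, the access mode is set in the server section of the export inside the NFSServer resource. The sketch below follows the structure of the nfs.yaml example shipped with the repository; double-check it against the file in your checked-out version:

```
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1
      server:
        accessMode: ReadWrite   # Set to ReadOnly for a read-only export
        squash: "none"
      # The PVC must exist before the NFSServer instance is created
      persistentVolumeClaim:
        claimName: nfs-default-claim
```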