Picking up from the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs, still exposed via NFS, and the Kubernetes cluster. As mentioned previously, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This lets system administrators consolidate resources onto centralized servers on the network, while the integration with Rook gives finer control over the storage-related parameters of Kubernetes.

As a prerequisite, the NFS client packages must be installed on every node where Kubernetes might schedule pods that mount NFS volumes (nfs-utils on CentOS). Apart from installing these packages, none of the usual NFS setup steps are required: there is no need to edit the /etc/exports file, mount directories on client machines, and so on.
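
The only thing that changes from node to node is the package name, which depends on the distribution. As a minimal sketch (the helper function and the distro list are our own, not part of Rook):

```shell
#!/usr/bin/env bash
# Hypothetical helper: map a node's distro family to the NFS client package
# that must be installed there before pods can mount NFS volumes.
nfs_client_pkg() {
  case "$1" in
    centos|rhel|fedora) echo "nfs-utils" ;;   # as noted above for CentOS
    debian|ubuntu)      echo "nfs-common" ;;
    *)                  echo "unknown"; return 1 ;;
  esac
}

# Example: on a CentOS node you would run:
#   sudo yum install -y "$(nfs_client_pkg centos)"
nfs_client_pkg centos
```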

Deploy NFS Operator

First, deploy the Rook NFS operator using the following commands on the control-plane node.

Code Block
languagebash
titleDeploy operator
collapsetrue
# Clone the repository and enter the directory that we will use throughout the guide
$ git clone --single-branch --branch v1.7.3 https://github.com/rook/nfs.git
$ cd nfs/cluster/examples/kubernetes/nfs

# Then launch (the operator is created in the "rook-nfs-system" namespace)
$ kubectl create -f crds.yaml -f operator.yaml

# Check if the operator is up and running
$ kubectl get pod -n rook-nfs-system
NAME                                READY   STATUS    RESTARTS   AGE
rook-nfs-operator-f79889845-8r5kq   1/1     Running   0          11m

...

It is recommended that you create Pod Security Policies first (reference). To do this, you can use the psp.yaml file already present in the folder, with the usual command

Code Block
languagebash
titlePod Security Policies
collapsetrue
$ kubectl create -f psp.yaml
podsecuritypolicy.policy/rook-nfs-policy created

# To get it
$ kubectl get psp
NAME              PRIV   CAPS                           SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
rook-nfs-policy   true   DAC_READ_SEARCH,SYS_RESOURCE   RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,hostPath


Pod Security Policies

Pod Security Policies (PSPs) enable fine-grained authorization of pod creation and updates. A PSP is a cluster-level resource that controls security-sensitive aspects of the pod specification: it defines a set of conditions that a pod must satisfy in order to be accepted into the system, as well as default values for the related fields.
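
For reference, a PSP matching the columns of the kubectl get psp output above would look roughly like this. This is a reconstructed sketch, not a verbatim copy of the shipped psp.yaml:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-nfs-policy
spec:
  privileged: true                  # PRIV column
  allowedCapabilities:              # CAPS column
    - DAC_READ_SEARCH
    - SYS_RESOURCE
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  readOnlyRootFilesystem: false     # READONLYROOTFS column
  volumes:                          # VOLUMES column
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - secret
    - hostPath
```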

Before we create the NFS Server, we need to create a ServiceAccount and RBAC rules

Code Block
languagebash
titleServiceAccount and RBAC
collapsetrue
$ kubectl create -f rbac.yaml
namespace/rook-nfs created
serviceaccount/rook-nfs-server created
clusterrole.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created

In this example we will walk through creating an NFS Server instance that exports storage backed by the default StorageClass (SC) of the environment you happen to be running in. In some environments this could be a hostPath, in others a cloud-provider virtual disk. Either way, this example requires a default SC to exist.

So let's create a simple SC, which will act as the default (remember to enable the admission plugin --enable-admission-plugins=DefaultStorageClass in kube-apiserver.yaml)
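
On a kubeadm-style cluster, the admission plugin is enabled in the static pod manifest of the API server; a sketch of the relevant fragment (the path and the surrounding flag list are assumptions, and in recent Kubernetes versions DefaultStorageClass is already enabled by default):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout assumed).
# DefaultStorageClass is appended to the comma-separated plugin list; the kubelet
# restarts the API server automatically when this file changes.
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
```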

Code Block
languageyml
titleDefault SC
collapsetrue
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer


Next we create a "large" PV (without exaggerating) of type hostPath (reference), based on the default SC

Code Block
languageyml
titleDefault PV
collapsetrue
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage	# Same as the name of the default SC created
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/data	# The "data" folder, if it does not exist, will be created automatically on the node where the NFS Server pod is up and running
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node_name>	# Enter the node name (obtainable with "kubectl get node")


As the last lines of the previous file show, nodeAffinity lets you choose which node the storage is drawn from. If this parameter is omitted, the cluster chooses. Kubernetes usually does not allow the master to be used for this purpose and, in any case, it is not good architectural practice. The ideal is to create a new VM with a generous hard disk and modest RAM/CPU, and join it to the cluster. This node should be used only for data storage, while ordinary workloads are prevented from running on it. To obtain this result, it is sufficient to add a taint to the node (reference).
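
A sketch of the taint approach (the node name "storage-node" and the key/value pair are hypothetical):

```yaml
# First taint the dedicated node so that ordinary workloads are repelled:
#   kubectl taint nodes storage-node role=storage:NoSchedule
# Then add a matching toleration only to the pods that ARE allowed to run there,
# such as the NFS Server pod; all other pods will be scheduled elsewhere.
tolerations:
  - key: "role"
    operator: "Equal"
    value: "storage"
    effect: "NoSchedule"
```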

Create and Initialize NFS Server

Now that the operator is running, we can set up an NFS Server instance by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFS Server resource can be used to configure the server and the volumes it exports. Using the nfs.yaml file, create the NFS Server as shown
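
The nfs.yaml used here follows the structure of the NFSServer examples in the Rook repository; a sketch of its main fields (the export name and exact values are assumptions):

```yaml
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1                  # Name of the export, referenced later by the SC
      server:
        accessMode: ReadWrite       # ReadWrite or ReadOnly (see the closing note)
        squash: "none"
      # The PVC backing this export; nfs.yaml also defines this claim,
      # which binds to the default PV created above
      persistentVolumeClaim:
        claimName: nfs-default-claim
```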

Code Block
languagebash
titleNFS server
collapsetrue
$ kubectl create -f nfs.yaml
persistentvolumeclaim/nfs-default-claim created
nfsserver.nfs.rook.io/rook-nfs created

We can verify that a Kubernetes object representing our new NFS Server and its export has been created, with the command

Code Block
languagebash
titleVerify NFS server
collapsetrue
$ kubectl get nfsservers.nfs.rook.io -n rook-nfs
NAME       AGE   STATE
rook-nfs   40m   Running

Afterwards, verify that the NFS Server pod is up and running. If it is in the Running state, then we have successfully created an exported NFS share that clients can start to access over the network. In addition to the NFS Server part, the nfs.yaml file contains some lines that implement a PVC, which binds to the default PV created previously. Verify that the PVC has been created in the rook-nfs namespace and that it is bound to the above PV.

Code Block
languagebash
titleVerify NFS server pod
collapsetrue
$ kubectl get pod -l app=rook-nfs -n rook-nfs
NAME         READY   STATUS    RESTARTS   AGE
rook-nfs-0   2/2     Running   0          43m

This paragraph closes the section on Rook. We now have storage, residing on one of the cluster machines, managed by the NFS Server. From this point on we can pick up again from the Dynamic provisioning paragraph of the previous sub-chapter, forgetting (almost) about Rook.

Accessing the Export

Since version v1.0, Rook supports dynamic provisioning of NFS. This example shows how the dynamic provisioning feature can be used with NFS. Once the NFS Operator and an instance of the NFS Server are deployed, a SC similar to sc.yaml has to be created to dynamically provision volumes.

Code Block
languagebash
titlesc.yaml
collapsetrue
$ k create -f sc.yaml
storageclass.storage.k8s.io/rook-nfs-share1 created



Info
titleParameters necessary for the SC

The SC needs the following three parameters:

  • exportName: it tells the provisioner which export to use for provisioning the volumes;
  • nfsServerName: name of the NFS Server instance;
  • nfsServerNamespace: namespace where the NFS Server instance is running.
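
Putting the three parameters together, sc.yaml looks roughly like this (the export name share1 is an assumption, consistent with the SC name; the provisioner matches the output of the Summary below):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: rook-nfs-share1
parameters:
  exportName: share1              # Which export to provision volumes from
  nfsServerName: rook-nfs         # Name of the NFS Server instance
  nfsServerNamespace: rook-nfs    # Namespace where the NFS Server instance runs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete
```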

Once you have created the SC above, you can create a PVC that references it. The PVC will automatically (dynamically) create the corresponding PV.

Code Block
languagebash
titlepvc.yaml
collapsetrue
$ k create -f pvc.yaml -n <namespace>
persistentvolumeclaim/rook-nfs-pv-claim created
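
The pvc.yaml referenced above is an ordinary PVC pointing at the new SC; a sketch (the 10Mi size mirrors the CAPACITY column of the summary below):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
spec:
  storageClassName: rook-nfs-share1   # The SC created from sc.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
```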

Note that we have retraced the steps of the previous sub-chapter. The administrator (backend) creates a SC and makes it available to users (frontend). The user, through a PVC, exploits the SC to generate a PV for his own purposes. If the user wants, he can of course generate new PVs through other PVCs and use them in his applications.

Summary

Let's summarize the steps carried out, with the visual aid of the screen below. The operations carried out, in chronological order, are (use the AGE column as a reference):

  • creation of a large default storage, through sc/local-storage and pv/local-pv;
  • deployment of the NFS Server (nfs.yaml), which generates the pvc/nfs-default-claim linked to the pv/local-pv;
  • administrator creates sc/rook-nfs-share1 with provisioner rook-nfs-provisioner;
  • the user creates pvc/rook-nfs-pv-claim, which dynamically generates a small volume, within its namespace;
  • the user can create other volumes in the same or other namespaces.
Code Block
languagebash
titleAll components implemented
# With this command you get SC, PV and PVC (of all namespaces)
$ kubectl get sc,pv,pvc -A

NAME                         PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
sc/local-storage (default)   kubernetes.io/no-provisioner       Delete          WaitForFirstConsumer   false                  60m
sc/rook-nfs-share1           nfs.rook.io/rook-nfs-provisioner   Delete          Immediate              false                  50m

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS      AGE
pv/local-pv                                   10Gi       RWX            Delete           Bound    rook-nfs/nfs-default-claim   local-storage     58m
pv/pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            Delete           Bound    myns/rook-nfs-pv-claim       rook-nfs-share1   40m
pv/pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            Retain           Bound    myns/rook-nfs-pv-claim2      rook-nfs-share1   30m

NAMESPACE   NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rook-nfs    pvc/nfs-default-claim    Bound    local-pv                                   10Gi       RWX            local-storage     56m
myns        pvc/rook-nfs-pv-claim    Bound    pvc-66761edb-0b68-4a6e-92c2-016c9ecf1255   10Mi       RWX            rook-nfs-share1   40m
myns        pvc/rook-nfs-pv-claim2   Bound    pvc-9cc3bb63-eb0b-4ded-bbb9-3d854e7c6b4b   15Mi       RWX            rook-nfs-share1   30m

The NFS Server provides two access modes (see nfs.yaml): ReadWrite (RWX) and ReadOnly (ROX). Here the first mode was used, but the second can of course be used as well. You can even use both modes by deploying two NFS Servers, but to make them coexist you need separate namespaces and service accounts, so you have to start again from the rbac.yaml file.