Going back to the end of the previous sub-chapter, we introduce the Rook storage provider. It sits between the hard disks of the VMs, which are again exposed via NFS, and the Kubernetes cluster. As said previously, NFS allows remote hosts to mount filesystems over a network and interact with them as though they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. As a prerequisite, the NFS client packages (nfs-utils on CentOS) must be installed on every node where Kubernetes might run pods with NFS mounts. The official guide can be found here.
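As a quick sketch of this prerequisite, the client packages can be installed with the distribution's package manager (package names vary by distribution; run this on every node):

```shell
# Install the NFS client packages on every node (CentOS/RHEL)
$ sudo yum install -y nfs-utils
# On Debian/Ubuntu nodes the equivalent package is nfs-common
$ sudo apt-get install -y nfs-common
```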
First, deploy the Rook NFS operator using the following commands:
# Clone the repository and enter the directory that we will use throughout the guide
$ git clone --single-branch --branch v1.5.3 https://github.com/rook/rook.git
$ cd rook/cluster/examples/kubernetes/nfs
# Then launch (the operator is created in the "rook-nfs-system" namespace)
$ kubectl create -f common.yaml -f operator.yaml
# Check that the operator is up and running
$ kubectl get pod -n rook-nfs-system
NAME                                READY   STATUS    RESTARTS   AGE
rook-nfs-operator-f79889845-8r5kq   1/1     Running   0          11m
It is recommended that you create Pod Security Policies first. To do this, you can use the psp.yaml file already present in the folder, with the usual command:
$ kubectl create -f psp.yaml
podsecuritypolicy.policy/rook-nfs-policy created
# To get it
$ kubectl get psp
NAME              PRIV   CAPS                           SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
rook-nfs-policy   true   DAC_READ_SEARCH,SYS_RESOURCE   RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,hostPath
Before we create the NFS server, we need to create a ServiceAccount and RBAC rules:
$ kubectl create -f rbac.yaml
namespace/rook-nfs created
serviceaccount/rook-nfs-server created
clusterrole.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-nfs-provisioner-runner created
This example walks through creating an NFS server instance that exports storage backed by the default StorageClass (SC) for the environment you happen to be running in. In some environments this could be a hostPath, in others a cloud provider virtual disk. Either way, this example requires a default SC to exist.
So let's create a simple SC, which will act as the default (remember to enable the DefaultStorageClass admission plugin via --enable-admission-plugins=DefaultStorageClass in kube-apiserver.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
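Once the manifest is applied, you can confirm that the class was picked up as the default; kubectl marks it with "(default)" next to its name (the file name sc.yaml is an assumption):

```shell
$ kubectl create -f sc.yaml
storageclass.storage.k8s.io/local-storage created
# The default class is flagged with "(default)"
$ kubectl get storageclass
NAME                      PROVISIONER                    ...
local-storage (default)   kubernetes.io/no-provisioner   ...
```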
Next, we create a "large" PV of type hostPath, based on the default SC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
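As a sketch, the PV can be applied and inspected like any other resource (the file name pv.yaml is an assumption); it will show as Available until a claim binds to it:

```shell
$ kubectl create -f pv.yaml
persistentvolume/local-pv created
$ kubectl get pv local-pv
NAME       CAPACITY   ACCESS MODES   ...   STATUS      ...
local-pv   10Gi       RWO            ...   Available   ...
```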
Now that the operator is running, we can set up an instance of an NFS server by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFSServer resource can be used to configure the server and the volumes it exports.
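As a sketch, an NFSServer manifest along the lines of the upstream nfs.yaml example might look like the following; the claimName refers to a PVC that must exist beforehand (backed here by the default SC), and the resource names are assumptions:

```yaml
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1
      server:
        accessMode: ReadWrite
        squash: "none"
      # The PVC must be created before the NFSServer instance
      persistentVolumeClaim:
        claimName: rook-nfs-default-claim
```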