...

Next we create a "large" PV (without exaggerating), of type hostPath, based on the default SC created earlier

Code Block
languageyml
titleDefault PV
collapsetrue
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage	# Same as the name of the default SC created
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"	# The "data" folder, if it does not exist, will be created automatically on the node where the NFS server pod is up and running.

...

Now that the operator is running, we can set up an instance of an NFS server by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFS server resource can be used to configure the server and the volumes it exports.
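
A possible nfs.yaml is sketched below, following the layout of the Rook NFS quickstart: it bundles the PVC that the server will export together with the NFSServer resource itself. The export name, the requested storage size, the squash setting and the API version are assumptions that should be checked against the Rook release in use.

Code Block
languageyml
titlenfs.yaml (sketch)
collapsetrue
---
# PVC exported by the NFS server; it is expected to bind to the local-pv defined above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-default-claim
  namespace: rook-nfs
spec:
  storageClassName: local-storage   # Assumed to match the default SC / PV created earlier
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# The NFS server instance managed by the Rook NFS operator
apiVersion: nfs.rook.io/v1alpha1    # Check this version against the Rook release in use
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1                  # Export name is an assumption
      server:
        accessMode: ReadWriteMany
        squash: "none"
      # The claim defined above backs this export
      persistentVolumeClaim:
        claimName: nfs-default-claim

With the nfs.yaml file in place, create the NFS server as shown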

Code Block
languagebash
titleNFS server
collapsetrue
$ kubectl create -f nfs.yaml
persistentvolumeclaim/nfs-default-claim created
nfsserver.nfs.rook.io/rook-nfs created

We can verify that a Kubernetes object representing our new NFS server and its export has been created with the following command

Code Block
languagebash
titleVerify NFS server
collapsetrue
$ kubectl get nfsservers.nfs.rook.io -n rook-nfs
NAME       AGE   STATE
rook-nfs   40m   Running

Afterwards, verify that the NFS server pod is up and running. If the pod is in the Running state, we have successfully created an exported NFS share that clients can start to access over the network.

Code Block
languagebash
titleVerify NFS server pod
collapsetrue
$ kubectl get pod -l app=rook-nfs -n rook-nfs
NAME         READY   STATUS    RESTARTS   AGE
rook-nfs-0   2/2     Running   0          43m