...
In this guide we will use one of the most popular volume types in Kubernetes, the Network File System (NFS). NFS is a distributed file system protocol that allows a user on a client machine to access files over a network much as if they were on local storage. Note that the NFS server itself must already exist: Kubernetes does not run NFS, the Pods merely mount it. If you are setting up an NFS server for the first time, you can consult one of the many guides available on the web.
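As a minimal sketch of the server side (assuming a typical Linux NFS server and that /home/share is the folder to be exported; the subnet below is a placeholder for your cluster's network), the export amounts to a single entry in /etc/exports:

```
# /etc/exports -- illustrative entry; replace 192.168.1.0/24 with your cluster's subnet
/home/share 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, running `exportfs -ra` on the server reloads the export table so the cluster nodes can mount the share.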
Static provisioning
Let's start with the simplest case, static provisioning. For convenience we create a folder containing just three .yaml files: one for the PV, one for the PVC and, finally, one for the application that will take advantage of the persistent data. After sharing the folder /home/share among the cluster nodes, we copy the following .yaml file
| Code Block |
|---|
| language | yml |
|---|
| title | pv.yaml (static) |
|---|
| collapse | true |
|---|
|
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /home/share
    server: <Node_IP> # Enter the IP of the node that shares the folder (usually the Master) |
At this point, let's deploy the PV. Note that its STATUS is Available for now, since no claim has been bound to it yet.
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy pv.yaml (static) |
|---|
|
$ kubectl apply -f pv.yaml
persistentvolume/mypv created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 100Mi RWX Retain Available 5s |
Let's move to the user side (frontend) and copy the following file
| Code Block |
|---|
| language | yml |
|---|
| title | pvc.yaml (static) |
|---|
| collapse | true |
|---|
|
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany # The value must be consistent with the PV to be hooked to
  resources:
    requests:
      storage: 100Mi # If this value exceeds the PV storage, the PVC will remain Pending |
Before applying this file, we need to create a namespace. PVs are not namespaced (like apiservices, nodes and the namespaces themselves), because they are created and made available at the cluster level. A PVC, on the other hand, is a user's request for storage, which certainly does not have administrator scope, and therefore lives in a namespace. So
| Code Block |
|---|
| language | bash |
|---|
| title | Deploy pvc.yaml (static) |
|---|
|
$ kubectl create ns myns
namespace/myns created
$ kubectl apply -f pvc.yaml -n myns
persistentvolumeclaim/mypvc created
$ kubectl get pvc -n myns
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    mypv     100Mi      RWX                           2m
# Let's display the PV again. After binding with the PVC, its STATUS has changed to Bound and the CLAIM column now shows myns/mypvc
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 100Mi RWX Retain Bound myns/mypvc 5m |
Now all we have to do is deploy an application that takes advantage of this infrastructure. For testing purposes a simple Pod would suffice, but here we'll use a StatefulSet, the workload API object designed precisely to manage stateful applications, which suits our case. We therefore copy
| Code Block |
|---|
| language | yml |
|---|
| title | application.yaml |
|---|
| collapse | true |
|---|
|
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate
spec:
  serviceName: mysvc
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: mydata
          mountPath: /usr/share/nginx/html
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: mypvc # Note the reference to the Claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80 |
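Once the StatefulSet is applied (in the myns namespace, so the Pods can find mypvc), a quick sanity check can confirm that all replicas really share the same NFS-backed volume. The session below is illustrative; the file contents are arbitrary, and the Pod names follow the StatefulSet pattern mystate-0, mystate-1, and so on.

```bash
$ kubectl apply -f application.yaml -n myns
statefulset.apps/mystate created
service/mysvc created
# Write a file through one replica...
$ kubectl exec -n myns mystate-0 -- sh -c 'echo "Hello from NFS" > /usr/share/nginx/html/index.html'
# ...and read it back from another: the volume is shared (ReadWriteMany)
$ kubectl exec -n myns mystate-1 -- cat /usr/share/nginx/html/index.html
Hello from NFS
# The data also survives Pod deletion and recreation
$ kubectl delete pod -n myns mystate-0
pod "mystate-0" deleted
$ kubectl exec -n myns mystate-0 -- cat /usr/share/nginx/html/index.html
Hello from NFS
```

Because the reclaim policy is Retain, the data would survive even deleting the PVC and PV themselves: the files remain in /home/share on the NFS server.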
Dynamic provisioning