
Let's try to apply the theoretical concepts to a Kubernetes cluster. The practical test will cover both static provisioning and dynamic provisioning.

In this guide we will use one of the most popular volume types in Kubernetes, namely the Network File System (NFS). It is a distributed file system protocol that allows a user on a client machine to access files over a network much like local storage. Of course, the NFS share must already exist: Kubernetes does not run the NFS server, the Pods just mount it. If you are implementing an NFS server for the first time, you can consult one of the many guides on the web.
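For reference, a minimal server-side sketch follows, assuming CentOS-like nodes (as in the examples below) and the /home/share path used throughout this guide; the export options and allowed hosts are placeholders to adapt to your environment:

NFS server setup (sketch)
# On the node that will export the folder (usually the Master)
$ sudo yum install -y nfs-utils
$ sudo mkdir -p /home/share
$ echo "/home/share *(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
$ sudo exportfs -ra
$ sudo systemctl enable --now nfs-server

# On every other node only the NFS client package is needed, since kubelet performs the mount
$ sudo yum install -y nfs-utils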

Static provisioning

Let's start with the simplest case, the static one. For convenience we create a folder in which we will place just 3 .yaml files: one for the PV, one for the PVC and, finally, one for the application that will take advantage of the persistent data. After sharing the folder /home/share between the cluster nodes, we copy the following .yaml file

pv.yaml (static)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /home/share
    server: <Node_IP>	# Enter the IP of the node that shares the folder (usually the Master)

Next we will use an Nginx image to carry out our tests. We can then create a file in the shared folder, called index.html, containing a simple string like "Hello from Kubernetes Storage!".
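For example, with a one-liner on the node that exports the share (assuming the /home/share path defined in the PV above):

Create index.html
$ echo 'Hello from Kubernetes Storage!' > /home/share/index.html

At this point, let's deploy the PV. Note that its status is Available for now, since no claim has been bound to it yet.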

Deploy pv.yaml (static)
$ kubectl apply -f pv.yaml
persistentvolume/mypv created

$ kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM        STORAGECLASS   REASON   AGE
mypv    100Mi      RWX            Retain           Available                                        5s

Let's move to the user side (frontend) and copy the following file

pvc.yaml (static)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany		# The value must be consistent with the PV to be hooked to
  resources:
    requests:
      storage: 100Mi	# If this value exceeds the PV storage, the PVC will remain pending

Before proceeding with the apply of the file, we need to create a namespace. PVs do not have namespaces (just like apiservices, nodes and the namespaces themselves), because they are created and made available at the cluster level. The PVC, on the other hand, is a storage request coming from a user, who certainly does not have administrator scope, and therefore it needs a namespace.
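Incidentally, kubectl can list all the cluster-scoped resource types directly, confirming that persistentvolumes is among them:

List cluster-scoped resources
$ kubectl api-resources --namespaced=false
# The output includes, among others, persistentvolumes, nodes and namespaces

So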

Deploy pvc.yaml (static)
$ kubectl create ns myns
namespace/myns created

$ kubectl apply -f pvc.yaml -n myns
persistentvolumeclaim/mypvc created

$ kubectl get pvc -n myns
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    mypv     100Mi      RWX                           2m

# Let's display the PV again. We note that after binding with the PVC, its status has changed to Bound and the CLAIM column now reports myns/mypvc
$ kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM        STORAGECLASS   REASON   AGE
mypv    100Mi      RWX            Retain           Bound       myns/mypvc                           5m

Now all we have to do is implement an application that takes advantage of the infrastructure we have built. For testing purposes a simple Pod would suffice, but here we'll make use of a StatefulSet, the workload API object used to manage precisely this kind of stateful application. We therefore copy

application.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate
spec:
  serviceName: mysvc
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: mycontainer
          image: nginx
          imagePullPolicy: "IfNotPresent"
          ports:
          - containerPort: 80
          volumeMounts:
          - name: mydata
            mountPath: /usr/share/nginx/html
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: mypvc		# Note the reference to the Claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 80
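Note that all three replicas mount the same PVC here, which is possible because the volume's access mode is ReadWriteMany. As an aside, StatefulSets can also give each replica its own dedicated volume through volumeClaimTemplates; a minimal sketch follows, where the StorageClass mysc is a placeholder that requires a dynamic provisioner (see the next section):

volumeClaimTemplates (sketch)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate-vct
spec:
  serviceName: mysvc
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: nginx
        volumeMounts:
        - name: mydata			# Matches the claim template name below
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:			# Generates one PVC per replica (mydata-mystate-vct-0, -1, -2)
  - metadata:
      name: mydata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: mysc	# Placeholder StorageClass
      resources:
        requests:
          storage: 100Mi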

Let's deploy the application and verify that everything works correctly. To check, you can run the curl command followed by the IP of the Service or of a Pod, or inspect the folder specified in the container's mountPath (in this case /usr/share/nginx/html).
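The deployment itself is the usual apply (assuming the manifest above was saved as application.yaml):

Deploy application.yaml
$ kubectl apply -f application.yaml -n myns
statefulset.apps/mystate created
service/mysvc created

So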

Verify storage (static)
$ kubectl get all -n myns -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE
pod/mystate-0   1/1     Running   0          13m   172.16.231.239   mycentos-1.novalocal
pod/mystate-1   1/1     Running   0          13m   172.16.94.103    mycentos-2.novalocal
pod/mystate-2   1/1     Running   0          13m   172.16.141.62    mycentos-ing.novalocal

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/mysvc   ClusterIP   10.98.99.55   <none>        80/TCP    13m   app=myapp

NAME                       READY   AGE   CONTAINERS    IMAGES
statefulset.apps/mystate   3/3     13m   mycontainer   nginx

# We use, for example, the IP of the service and of the pod/mystate-2
$ curl 10.98.99.55
Hello from Kubernetes Storage!
$ curl 172.16.141.62
Hello from Kubernetes Storage!

# Let's enter pod/mystate-1, run the "curl localhost" command, and read the file at the path indicated in the StatefulSet manifest
$ kubectl exec -it pod/mystate-1 -n myns -- bash
root@mystate-1:/usr/share/nginx/html# curl localhost
Hello from Kubernetes Storage!
root@mystate-1:/usr/share/nginx/html# cd /usr/share/nginx/html/; cat index.html
Hello from Kubernetes Storage!

Finally, try to modify the index.html file from inside the Pod and verify that the changes are reflected in the file on the node (and vice versa).
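A quick check along these lines, reusing the names from the session above:

Modify index.html from inside a Pod
# Overwrite the file from inside pod/mystate-0...
$ kubectl exec -it pod/mystate-0 -n myns -- bash -c "echo 'Hello from mystate-0!' > /usr/share/nginx/html/index.html"

# ...then read it back on the node that exports the share
$ cat /home/share/index.html
Hello from mystate-0!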

Dynamic provisioning



