...
Before applying the file, we need to create a namespace. PVs are not namespaced (nor are apiservices, nodes, and namespaces themselves), because they are created and made available at the cluster level. A PVC, on the other hand, is a storage request made by a user, who certainly does not operate at the administrator (backend) scope, and therefore needs a namespace. So:
...
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate
spec:
  serviceName: mysvc
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: mydata
          mountPath: /usr/share/nginx/html # location of Nginx's default static site
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: mypvc # note the reference to the Claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
```
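The StatefulSet references a claim named `mypvc`, which must exist in the same namespace before the Pods can start. A minimal sketch of such a claim, with an assumed size and access mode (`ReadWriteMany` is typical for NFS-backed volumes, but your storage class may differ):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc            # must match the claimName used by the StatefulSet
spec:
  accessModes:
    - ReadWriteMany      # assumption: NFS-backed volumes usually allow shared access
  resources:
    requests:
      storage: 1Gi       # assumed size, adjust to your needs
```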
...
```
$ kubectl create ns dynamic
namespace/dynamic created
$ kubectl apply -f rbac.yaml -f provisioner.yaml -n dynamic
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner configured
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created
```
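With the provisioner deployed, dynamic provisioning is enabled through a StorageClass that delegates volume creation to it. A sketch, assuming the provisioner was registered under the name `nfs-client` (the actual value is whatever `PROVISIONER_NAME` is set to in provisioner.yaml):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage   # assumed name, referenced by PVCs via storageClassName
provisioner: nfs-client       # assumption: must match PROVISIONER_NAME in provisioner.yaml
parameters:
  archiveOnDelete: "false"    # discard data when the backing PVC is deleted
```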
...
In the "parametrization" paragraph of the Storage chapter, numerous customization options are listed, accompanied, however, by a warning: not all of the parameters presented are supported by every plugin. The implementations presented here have several limitations:
...