Rook provides a growing number of storage providers to a Kubernetes cluster, each with its own operator to deploy and manage the resources for the storage provider. One of these is Ceph: a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. Here you can find the official Rook documentation.
Below is a small glossary that may be useful when reading this page:
In order to configure the Ceph storage cluster, the following prerequisites must be met:
The first two points are easy to check off: verify the version of your k8s cluster with kubectl version --short and install the required package (on CentOS) with sudo yum install -y lvm2. As for the third point, you can confirm whether your partitions or devices already have a filesystem with the following command. If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device. In the example below, you can use only vdb for Ceph and cannot use vda or its partitions.
$ lsblk -f
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb
From the last point of the list it is clear that 3 worker nodes, and therefore 3 storage volumes, are required. Attach one volume to each of your worker nodes (refer to the OpenStack guide). In the volume creation dialog, it is important to set the Volume Source field to "No source, empty volume". Regarding size, a few tens of GB are enough.
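If you prefer the command line to the dashboard, the same result can be obtained with the OpenStack CLI. A minimal sketch, assuming the CLI is configured for your project; the volume and server names are illustrative, not taken from this guide:

# Create an empty 50 GB volume (no source); repeat for each worker node
$ openstack volume create --size 50 ceph-volume-1
# Attach the volume to the corresponding worker node
$ openstack server add volume k8s-worker-1 ceph-volume-1

Whichever way you create them, verify with lsblk on each node that the new device shows up with an empty FSTYPE before proceeding.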
The first step is to deploy the Rook operator. To do this, clone the repository from GitHub, move to the indicated folder and run
git clone --single-branch --branch v1.5.8 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
The operator, like the other components we will see shortly, is deployed in the rook-ceph namespace. Verify that the rook-ceph-operator pod is in the Running state before proceeding
$ kubectl -n rook-ceph get all -l app=rook-ceph-operator
NAME                                      READY   STATUS    RESTARTS   AGE
pod/rook-ceph-operator-5ff4d5c446-4ldhx   1/1     Running   0          5h40m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-ceph-operator-5ff4d5c446   1         1         1       5h40m
Now that the Rook operator is running we can create the Ceph cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath property to a path that is valid on your hosts (the default is /var/lib/rook). A quick check is shown below; then create the cluster (the operation takes a few minutes)
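As a sanity check before applying the manifest, you can inspect the current value of the property (a minimal sketch; adjust the path if you are not already inside the examples folder):

# From rook/cluster/examples/kubernetes/ceph
$ grep dataDirHostPath cluster.yaml
    dataDirHostPath: /var/lib/rook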
$ kubectl create -f cluster.yaml
# List pods in the rook-ceph namespace. You should be able to see the following pods once they are all running
$ kubectl -n rook-ceph get pod
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-9r4hd                                            3/3     Running     0          5h51m
csi-cephfsplugin-dfffx                                            3/3     Running     0          5h51m
csi-cephfsplugin-mlr6c                                            3/3     Running     0          5h51m
csi-cephfsplugin-provisioner-8658f67749-bprh7                     6/6     Running     9          5h51m
csi-cephfsplugin-provisioner-8658f67749-lqlm6                     6/6     Running     24         5h51m
csi-rbdplugin-provisioner-6bc6766db-2m72j                         6/6     Running     21         5h51m
csi-rbdplugin-provisioner-6bc6766db-vfv6n                         6/6     Running     6          5h51m
csi-rbdplugin-r6kzg                                               3/3     Running     0          5h51m
csi-rbdplugin-slglp                                               3/3     Running     0          5h51m
csi-rbdplugin-xksk8                                               3/3     Running     0          5h51m
rook-ceph-crashcollector-k8s-worker-1.novalocal-685685cd4bfr2fp   1/1     Running     0          5h50m
rook-ceph-crashcollector-k8s-worker-2.novalocal-65799fd97cvbq78   1/1     Running     0          5h40m
rook-ceph-crashcollector-k8s-worker-3.novalocal-78499fc58dwhgwm   1/1     Running     0          5h51m
rook-ceph-mgr-a-774d799bc7-jfc9m                                  1/1     Running     0          5h50m
rook-ceph-mon-a-57498775bf-d9kjk                                  1/1     Running     0          5h51m
rook-ceph-mon-b-866d86c8ff-rj5g9                                  1/1     Running     0          5h51m
rook-ceph-mon-c-dbdc6994b-wtvfz                                   1/1     Running     0          5h50m
rook-ceph-operator-5ff4d5c446-4ldhx                               1/1     Running     0          5h53m
rook-ceph-osd-0-57bf74dc8-kj444                                   1/1     Running     0          5h40m
rook-ceph-osd-1-86dc8bf468-rsvld                                  1/1     Running     0          5h40m
rook-ceph-osd-2-5b87cf587d-9pqsx                                  1/1     Running     0          5h40m
rook-ceph-osd-prepare-k8s-worker-1.novalocal-9wzk2                0/1     Completed   0          9m10s
rook-ceph-osd-prepare-k8s-worker-2.novalocal-sgcdb                0/1     Completed   0          9m8s
rook-ceph-osd-prepare-k8s-worker-3.novalocal-lwpxz                0/1     Completed   0          9m5s
If you did not modify the cluster.yaml above, it is expected that one OSD will be created per node. The defaults are fine in most cases, but the file contains many parameters that can be changed; here you will find a detailed list.
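To confirm that the three OSDs have joined the cluster and that Ceph is healthy, one option is the Rook toolbox, which ships in the same examples folder. A minimal sketch, assuming the toolbox.yaml manifest and the app=rook-ceph-tools label of the v1.5 examples; verify both against your checkout:

# From rook/cluster/examples/kubernetes/ceph
$ kubectl create -f toolbox.yaml
# Query the cluster status from inside the toolbox pod
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status

For the setup described here, the output should report HEALTH_OK and three OSDs up and in.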
If you want to tear down the cluster and bring up a new one, be aware of the following resources that will need to be cleaned up:
- rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD)
- /var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds

If you changed the default namespaces or paths, such as dataDirHostPath, in the sample yaml files, you will need to adjust them in the instructions below.
A namespace cannot be removed until all of its resources are removed. Therefore, to delete it, execute the commands in the following order
$ kubectl -n rook-ceph delete cephcluster rook-ceph
# Verify that the cluster CRD has been deleted (kubectl -n rook-ceph get cephcluster) before continuing.
# Remember that the path of the following files is "rook/cluster/examples/kubernetes/ceph".
$ kubectl delete -f operator.yaml
$ kubectl delete -f common.yaml
$ kubectl delete -f crds.yaml
At this point, connect to each machine and delete /var/lib/rook, or the path specified by dataDirHostPath. If the cleanup instructions are not executed in the order above, or if you otherwise have difficulty cleaning up the cluster, here are a few things to try.
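A sketch of the per-host cleanup, including wiping the attached volume so leftover OSD metadata does not interfere with a future cluster. It assumes the data directory and the /dev/vdb device used as the example in this guide; double-check the device name before running it, as the wipe is destructive:

# Run on each worker node
$ sudo rm -rf /var/lib/rook                   # or your custom dataDirHostPath
$ DISK="/dev/vdb"                             # the volume that was given to Ceph
$ sudo sgdisk --zap-all "$DISK"               # wipe the partition table
$ sudo dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync   # clear leftover Ceph metadata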