Rook provides a growing number of storage providers to a Kubernetes cluster, each with its own operator to deploy and manage the resources for the storage provider. One of these is Ceph: a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments.
In order to configure the Ceph storage cluster, the following prerequisites must be met:

1. A Kubernetes cluster at a supported version.
2. The lvm2 package installed on the storage hosts.
3. Raw devices or partitions with no formatted filesystem on them.
The first two points are easy to check off. Check the version of your Kubernetes cluster with kubectl version --short and install the required package (on CentOS) with sudo yum install -y lvm2. As for the third point, you can confirm whether your partitions or devices already carry a filesystem with the following command. If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device; in the example below, only vdb can be used for Ceph, while vda and its partitions cannot.
$ lsblk -f
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb
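If you have many disks, scanning the tree output by eye gets tedious. As a quick filter, the sketch below (assuming the raw, headerless output format of lsblk from util-linux, plus awk on the host) prints only the device names whose FSTYPE column is empty, i.e. the candidates Ceph can consume:

```shell
# Print block devices that have no filesystem (empty FSTYPE).
# -n drops the header line and -r emits raw, space-separated output,
# so a line with a single field is a device without a filesystem.
lsblk -nro NAME,FSTYPE | awk 'NF == 1 { print $1 }'
```

On the example machine above, this would print vda and vdb, so you would still exclude vda manually since its vda1 partition is an LVM physical volume in use by the root filesystem.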