Rook provides a growing number of storage providers to a Kubernetes cluster, each with its own operator to deploy and manage the resources for the storage provider. One of these is Ceph: a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments.
Prerequisites
In order to configure the Ceph storage cluster, the following prerequisites must be met:
- Kubernetes cluster v1.11 or higher;
- LVM needs to be available on the hosts where OSDs will be running;
- Raw devices or raw partitions (no formatted filesystems);
- At least three worker nodes.
The first two points are easily checked off. Check the version of your Kubernetes cluster with kubectl version --short and install the LVM package (on CentOS) with sudo yum install -y lvm2. As for the third point, you can confirm whether your devices or partitions already carry a filesystem with the following command. If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device. In that case you can use only vdb for Ceph, and you can't use vda or its partitions.
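A minimal sketch of this check, assuming lsblk is available on the host (the FSTYPE column described above is lsblk output; the device names and sample listing below are illustrative, not from a real host):

```shell
# lsblk -f lists block devices with their filesystem type in the FSTYPE column:
#   lsblk -f
#
# Illustrative sample of such a listing (vda1 is formatted, vda and vdb are not):
sample_output='NAME   FSTYPE       MOUNTPOINT
vda
vda1   LVM2_member
vdb'

# Print top-level entries whose FSTYPE column is empty. Note that although vda
# itself shows an empty FSTYPE, it still cannot be given to Ceph because its
# partition vda1 carries a filesystem; only vdb is truly raw here.
printf '%s\n' "$sample_output" | awk 'NR > 1 && $2 == "" {print $1}'
```

Running the pipeline above prints vda and vdb; combined with the per-partition rows, that tells you vdb is the only device safe to hand to Ceph.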