Rook provides a growing number of storage providers to a Kubernetes cluster, each with its own operator to deploy and manage the resources for the storage provider. One of these is Ceph, a highly scalable distributed storage solution for block storage, object storage, and shared filesystems, with years of production deployments. Here is the official Rook documentation.

...

In order to configure the Ceph storage cluster, the following prerequisites must be met:

  1. Kubernetes cluster v1.16 or higher;
  2. LVM needs to be available on the hosts where OSDs will be running;
  3. Raw devices or raw partitions (no formatted filesystems);
  4. At least three worker nodes are required.
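The prerequisites above can be checked with a few commands. This is a hedged sketch: it assumes `kubectl` access to the cluster and shell access to the storage hosts; the device names shown by `lsblk` will differ per machine.

```shell
# Sketch of prerequisite checks (run kubectl commands from an admin
# machine; run lvm/lsblk on each host that will run OSDs).

# 1. Kubernetes server version
kubectl version

# 2. LVM availability on the host (provided by the lvm2 package)
command -v lvm && lvm version

# 3. Candidate devices must show an empty FSTYPE column (no filesystem)
lsblk -f

# 4. Count nodes (expect at least 3 workers)
kubectl get nodes --no-headers | wc -l
```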

...

Code Block
languagebash
titleRook Operator
collapsetrue
$ git clone --single-branch --branch v1.8.7 https://github.com/rook/rook.git
$ cd rook/deploy/examples
$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml

...

Code Block
languagebash
titleVerify Operator
collapsetrue
$ kubectl get all -l app=rook-ceph-operator -n rook-ceph
NAME                                      READY   STATUS    RESTARTS   AGE
pod/rook-ceph-operator-5ff4d5c446-4ldhx   1/1     Running   0          5h40m
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-ceph-operator-5ff4d5c446   1         1         1       5h40m

...

If you did not modify the cluster.yaml above, it is expected that one OSD will be created per node. The file contains many parameters that can be changed, though the defaults are fine in most cases. Here you will find a detailed list.
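Once the cluster manifest is applied, the expectation of one OSD per node can be verified directly. A hedged sketch, assuming cluster.yaml is applied from the same examples directory as the operator manifests and using the standard `app=rook-ceph-osd` pod label from the Rook samples:

```shell
# Create the Ceph cluster from the sample manifest.
kubectl create -f cluster.yaml

# After the operator has brought the cluster up, verify that one OSD
# pod per node is Running (-o wide shows which node each pod landed on).
kubectl -n rook-ceph get pods -l app=rook-ceph-osd -o wide
```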

Cleanup

If you want to tear down the cluster and bring up a new one, be aware of the following resources that will need to be cleaned up:

  • rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD);
  • /var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds.

...

Info
titleNote

If you changed the default namespaces or paths such as dataDirHostPath in the sample yaml files, you will need to adjust these namespaces and paths throughout these instructions. Moreover, first you will need to clean up the resources created on top of the Rook cluster.
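Cleaning up the resources created on top of the Rook cluster might look like the following. This is a sketch under the assumption that the sample block pool (`replicapool`) and storage class (`rook-ceph-block`) from the Rook examples were created; substitute the names of whatever you actually deployed.

```shell
# Delete consumers of the cluster before deleting the cluster itself.
# "replicapool" and "rook-ceph-block" are the names used in the Rook
# sample manifests; adjust them to your own resources.
kubectl -n rook-ceph delete cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
```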

A namespace cannot be removed until all of its resources are removed. Therefore, to eliminate it, we execute the commands in the following order:

Code Block
languagebash
titleDelete CephCluster, Operator and related Resources
collapsetrue
$ kubectl -n rook-ceph delete cephcluster rook-ceph
# Verify that the cluster CRD has been deleted (kubectl -n rook-ceph get cephcluster), before continuing.
# Remember that the path of the following files is "rook/deploy/examples".
$ kubectl delete -f operator.yaml
$ kubectl delete -f common.yaml
$ kubectl delete -f crds.yaml

At this point connect to each machine and delete /var/lib/rook, or the path specified by the dataDirHostPath. If the cleanup instructions are not executed in the order above, or you otherwise have difficulty cleaning up the cluster, here are a few things to try.
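The per-host cleanup can be sketched as a loop over the storage nodes. The host names (`node1 node2 node3`) and passwordless SSH access are assumptions; substitute your own hosts, and change `DATA_DIR` if you modified dataDirHostPath.

```shell
# Remove the cached Ceph state from every storage host.
# node1..node3 are placeholder host names; DATA_DIR must match
# the dataDirHostPath configured in cluster.yaml.
DATA_DIR=/var/lib/rook
for node in node1 node2 node3; do
  ssh "$node" "sudo rm -rf ${DATA_DIR}"
done
```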