...

Disable SELinux
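On CentOS this is typically done in two steps, a sketch assuming the standard /etc/selinux/config layout:

```shell
# Switch to permissive mode for the running system (no reboot needed)
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Verify the current runtime mode
getenforce
```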

Install ceph:


For C7:


Code Block
languagebash
rpm -Uvh https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install yum-plugin-priorities

For C8:


Code Block
languagebash
rpm -Uvh https://download.ceph.com/rpm-nautilus/el8/noarch/ceph-release-1-1.el8.noarch.rpm


Then:

Code Block
languagebash
yum clean all
yum update
yum install ceph

...


Move the host into the hosts_all/CephProd hostgroup (hosts_all/CephProd-C8 for CentOS8).

Run puppet once:

Code Block
languagebash
puppet agent -t

...

Add (via puppet) the new OSDs to the ceph.conf file:

Code Block
languagebash
[osd.50]
host = ceph-osd-06 # manual deployments only
public addr = 192.168.61.235
cluster addr = 192.168.222.235
osd memory target = 3221225472

[osd.51]
host = ceph-osd-06 # manual deployments only
public addr = 192.168.61.235
cluster addr = 192.168.222.235
osd memory target = 3221225472
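The osd memory target value used here is simply 3 GiB expressed in bytes, which can be checked with shell arithmetic:

```shell
# 3 GiB in bytes = 3 * 1024^3, matching the osd memory target setting
echo $((3 * 1024 * 1024 * 1024))   # prints 3221225472
```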


Run puppet once to have the file updated on the new OSD node

Code Block
languagebash
puppet agent -t

Disable data movements:


Code Block
languagebash
[root@c-osd-1 /]# ceph osd set norebalance
norebalance is set
[root@c-osd-1 /]# ceph osd set nobackfill
nobackfill is set
[root@c-osd-1 /]# ceph osd set noout
noout is set

...



Create a first OSD:

Code Block
languagebash
ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50


The above command could trigger some data movement.
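To inspect what ceph-volume just created, its lvm list subcommand can be queried for the new logical volume (a quick check, not part of the original procedure):

```shell
# Show the LVM metadata ceph-volume recorded for the new OSD
ceph-volume lvm list ceph-block-50/block-50
```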

...

Then move this host (ceph-osd-06 in our example) in the relevant rack:

Code Block
languagebash
ceph osd crush move ceph-osd-06 rack=Rack12-PianoAlto


Re-verify with ceph osd df and ceph osd tree.

Verify that the OSD is using the right VGs:


Code Block
languagebash
[root@ceph-osd-06 ~]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-50
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-50/block": {
        "osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7",
        "size": 10000827154432,
        "btime": "2019-02-19 11:55:47.553215",
        "description": "main",
        "bluefs": "1",
        "ceph_fsid": "8162f291-00b6-4b40-a8b4-1981a8c09b64",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "AQCu4Gtc+jKSJhAAKzaAAYuTKWZs9rjJlBXWww==",
        "ready": "ready",
        "whoami": "50"
    },
    "/var/lib/ceph/osd/ceph-50/block.db": {
        "osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7",
        "size": 95563022336,
        "btime": "2019-02-19 11:55:47.573213",
        "description": "bluefs db"
    }
}
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block
lrwxrwxrwx 1 ceph ceph 27 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block -> /dev/ceph-block-50/block-50
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block.db
lrwxrwxrwx 1 ceph ceph 24 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block.db -> /dev/ceph-db-50-54/db-50


Create the other OSDs (also use --osd-id if needed, e.g. when migrating OSDs from filestore to bluestore):



Code Block
languagebash
ceph-volume lvm create --bluestore --data ceph-block-51/block-51 --block.db ceph-db-50-54/db-51
ceph-volume lvm create --bluestore --data ceph-block-52/block-52 --block.db ceph-db-50-54/db-52
ceph-volume lvm create --bluestore --data ceph-block-53/block-53 --block.db ceph-db-50-54/db-53
ceph-volume lvm create --bluestore --data ceph-block-54/block-54 --block.db ceph-db-50-54/db-54
ceph-volume lvm create --bluestore --data ceph-block-55/block-55 --block.db ceph-db-55-59/db-55
ceph-volume lvm create --bluestore --data ceph-block-56/block-56 --block.db ceph-db-55-59/db-56
ceph-volume lvm create --bluestore --data ceph-block-57/block-57 --block.db ceph-db-55-59/db-57
ceph-volume lvm create --bluestore --data ceph-block-58/block-58 --block.db ceph-db-55-59/db-58
ceph-volume lvm create --bluestore --data ceph-block-59/block-59 --block.db ceph-db-55-59/db-59




Reboot the new OSD node:

Code Block
languagebash
shutdown -r now


Verify that the new OSDs are up.
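For example (OSD IDs 50-59 as in this procedure):

```shell
# The new OSDs should appear with status "up" under the host, in the right rack
ceph osd tree
# The totals should include the new OSDs, e.g. "N osds: N up, N in"
ceph osd stat
```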

...

Verify that all buckets are using straw2:

Code Block
languagebash
ceph osd getcrushmap -o crush.map; crushtool -d crush.map | grep straw; rm -f crush.map


If not (i.e. if some are still using straw), run the following command:

Code Block
languagebash
ceph osd crush set-all-straw-buckets-to-straw2 

Warning: this could trigger a data rebalance.

Enable and start puppet:

Code Block
languagebash
systemctl start puppet
systemctl enable puppet


Then, after a few minutes, check that "ceph status" doesn't report PGs in peering.
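A simple way to wait for peering to complete is to poll the status output (a sketch; the 10-second interval is arbitrary):

```shell
# Poll until "peering" no longer appears in the cluster status
while ceph status | grep -q peering; do
    sleep 10
done
echo "no PGs peering"
```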

Then:

Code Block
languagebash
[root@c-osd-1 /]# ceph osd unset nobackfill
nobackfill is unset
[root@c-osd-1 /]# ceph osd unset norebalance
norebalance is unset
[root@c-osd-1 /]# ceph osd unset noout
noout is unset


This should trigger a data movement.
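The rebalance can then be followed until the cluster returns to HEALTH_OK, e.g.:

```shell
# Follow recovery/backfill events as they happen
ceph -w
# ...or periodically re-check the summary instead
watch -n 10 ceph status
```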