...
Disable SELinux
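A minimal sketch of how this is typically done on a CentOS node (the `setenforce` call affects the running system, the `sed` edit makes the change persistent after the next reboot):

```
# Switch SELinux to permissive mode for the running system
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```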
Install ceph:
For C7:
```
rpm -Uvh https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install yum-plugin-priorities
```
For C8:
```
rpm -Uvh https://download.ceph.com/rpm-nautilus/el8/noarch/ceph-release-1-1.el8.noarch.rpm
```
Then:
```
yum clean all
yum update
yum install ceph
```
...
Move the host into the hosts_all/CephProd hostgroup (hosts_all/CephProd-C8 for CentOS 8)
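If the Foreman hammer CLI is available, the move can also be scripted; this is only a sketch, the host name below is a placeholder and the exact hostgroup option name is an assumption:

```
# Assumption: hammer is configured with valid credentials; the FQDN is illustrative
hammer host update --name ceph-osd-06.example.com --hostgroup-title "hosts_all/CephProd"
```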
Run puppet once:
```
puppet agent -t
```
...
```
[root@ceph-osd-06 ~]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-50
infering bluefs devices from bluestore path
{
"/var/lib/ceph/osd/ceph-50/block": {
"osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7",
"size": 10000827154432,
"btime": "2019-02-19 11:55:47.553215",
"description": "main",
"bluefs": "1",
"ceph_fsid": "8162f291-00b6-4b40-a8b4-1981a8c09b64",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "AQCu4Gtc+jKSJhAAKzaAAYuTKWZs9rjJlBXWww==",
"ready": "ready",
"whoami": "50"
},
"/var/lib/ceph/osd/ceph-50/block.db": {
"osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7",
"size": 95563022336,
"btime": "2019-02-19 11:55:47.573213",
"description": "bluefs db"
}
}
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block
lrwxrwxrwx 1 ceph ceph 27 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block -> /dev/ceph-block-50/block-50
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block.db
lrwxrwxrwx 1 ceph ceph 24 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block.db -> /dev/ceph-db-50-54/db-50
[root@ceph-osd-06 ~]#
```
Create the other OSDs (also use --osd-id if needed, e.g. when migrating OSDs from filestore to bluestore):
```
ceph-volume lvm create --bluestore --data ceph-block-51/block-51 --block.db ceph-db-50-54/db-51
ceph-volume lvm create --bluestore --data ceph-block-52/block-52 --block.db ceph-db-50-54/db-52
ceph-volume lvm create --bluestore --data ceph-block-53/block-53 --block.db ceph-db-50-54/db-53
ceph-volume lvm create --bluestore --data ceph-block-54/block-54 --block.db ceph-db-50-54/db-54
ceph-volume lvm create --bluestore --data ceph-block-55/block-55 --block.db ceph-db-55-59/db-55
ceph-volume lvm create --bluestore --data ceph-block-56/block-56 --block.db ceph-db-55-59/db-56
ceph-volume lvm create --bluestore --data ceph-block-57/block-57 --block.db ceph-db-55-59/db-57
ceph-volume lvm create --bluestore --data ceph-block-58/block-58 --block.db ceph-db-55-59/db-58
ceph-volume lvm create --bluestore --data ceph-block-59/block-59 --block.db ceph-db-55-59/db-59
```
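When an existing OSD id has to be reused (e.g. when migrating that OSD from filestore to bluestore), the id can be passed explicitly with --osd-id; the id and volumes below are illustrative:

```
# Recreate the OSD keeping its previous id (51 is just an example)
ceph-volume lvm create --bluestore --data ceph-block-51/block-51 --block.db ceph-db-50-54/db-51 --osd-id 51
```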
Reboot the new OSD node:
```
shutdown -r now
```
Verify that the new OSDs are up.
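For example (a quick check, not spelled out in the original procedure):

```
# The new OSDs should be listed as "up" and "in" under the new host
ceph osd tree
ceph -s
```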
...
Verify that all buckets are using straw2:
```
ceph osd getcrushmap -o crush.map; crushtool -d crush.map | grep straw; rm -f crush.map
```
If not (i.e. if some are using straw), run the following command:
```
ceph osd crush set-all-straw-buckets-to-straw2
```
Warning: this could trigger a data rebalance.
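A possible way to confirm the conversion and to spot an ongoing rebalance, reusing the same crush map check as above:

```
# Count the buckets still using straw (0 means all buckets are now straw2)
ceph osd getcrushmap -o crush.map; crushtool -d crush.map | grep -cw 'alg straw'; rm -f crush.map
# Look for misplaced/recovering objects while the rebalance runs
ceph -s
```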
Enable and start puppet:
```
systemctl start puppet
systemctl enable puppet
```
Then, after a few minutes, check that "ceph status" doesn't report PGs in peering.
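For example (a convenience check, not part of the original text):

```
# No "peering" PGs should be reported in the cluster status
ceph -s | grep -i peering
ceph pg stat
```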
Then:
```
[root@c-osd-1 /]# ceph osd unset nobackfill
nobackfill is unset
[root@c-osd-1 /]# ceph osd unset norebalance
norebalance is unset
[root@c-osd-1 /]# ceph osd unset noout
noout is unset
[root@c-osd-1 /]#
```
This should trigger data movement.
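Progress can be followed, for instance, with the usual status commands (a suggestion, not part of the original procedure):

```
# Follow recovery/backfill progress until the cluster returns to HEALTH_OK
ceph -s
ceph -w
```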