...
Disable SELinux
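For example (a minimal sketch: the first command disables enforcement at runtime, the second makes the change persistent across reboots):
| Code Block |
|---|
|
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config |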
Install ceph:
For C7:
| Code Block |
|---|
|
rpm -Uvh https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install yum-plugin-priorities
|
For C8:
| Code Block |
|---|
|
rpm -Uvh https://download.ceph.com/rpm-nautilus/el8/noarch/ceph-release-1-1.el8.noarch.rpm |
Then:
| Code Block |
|---|
|
yum clean all
yum update
yum install ceph |
...
Move the host into the hosts_all/CephProd hostgroup (hosts_all/CephProd-C8 for CentOS 8).
Run puppet once:
| Code Block |
|---|
|
puppet agent -t
|
...
| Code Block |
|---|
|
[root@c-osd-5 /]# gdisk /dev/vdb
GPT fdisk (gdisk) version 0.8.10
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): x
Expert command (? for help): ?
a set attributes
c change partition GUID
d display the sector alignment value
e relocate backup data structures to the end of the disk
g change disk GUID
h recompute CHS values in protective/hybrid MBR
i show detailed information on a partition
l set the sector alignment value
m return to main menu
n create a new protective MBR
o print protective MBR data
p print the partition table
q quit without saving changes
r recovery and transformation options (experts only)
s resize partition table
t transpose two partition table entries
u replicate partition table on new device
v verify disk
w write table to disk and exit
z zap (destroy) GPT data structures and exit
? print this menu
Expert command (? for help): z
About to wipe out GPT on /dev/vdb. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N):
Your option? (Y/N): Y |
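The same wipe can also be done non-interactively, for example with sgdisk (assuming /dev/vdb is the disk to clean):
| Code Block |
|---|
|
sgdisk --zap-all /dev/vdb |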
...
osd memory target = 3221225472 # 3 GiB
If it does not exist yet, create the file:
| Code Block |
|---|
|
-rw------- 1 ceph ceph 71 Apr 28 12:18 /var/lib/ceph/bootstrap-osd/ceph.keyring |
...
| Code Block |
|---|
|
# cat /var/lib/ceph/bootstrap-osd/ceph.keyring |
...
...
key = AQA+Y6hYQTvEHRAAr4Q/mwHCByv/kokqnu6nCA== |
It must match what appears for the 'client.bootstrap-osd' entry in the 'ceph auth export' output. You can copy the file from another OSD node.
Add (via puppet) the new OSDs to the ceph.conf file
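For example, to compare the two keys (both are standard Ceph commands; the path is the one used above):
| Code Block |
|---|
|
grep key /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth export client.bootstrap-osd 2>/dev/null | grep key |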
...
...
...
host = ceph-osd-06 #manual deployments only. |
...
public addr = 192.168.61.235 |
...
cluster addr = 192.168.222.235 |
...
osd memory target = 3221225472 |
...
...
host = ceph-osd-06 #manual deployments only. |
...
public addr = 192.168.61.235 |
...
cluster addr = 192.168.222.235 |
...
osd memory target = 3221225472 |
...
...
Run puppet once to have the file updated on the new OSD node:
| Code Block |
|---|
|
puppet agent -t
|
Disable data movement:
| Code Block |
|---|
|
[root@c-osd-1 /]# ceph osd set norebalance |
...
...
[root@c-osd-1 /]# ceph osd set nobackfill |
...
...
[root@c-osd-1 /]# ceph osd set noout
noout is set |
...
Create a first OSD:
| Code Block |
|---|
|
ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50 |
The above command could trigger some data movement.
...
Then move this host (ceph-osd-06 in our example) into the relevant rack:
| Code Block |
|---|
|
ceph osd crush move ceph-osd-06 rack=Rack12-PianoAlto |
Re-verify with ceph osd df and ceph osd tree.
Verify that the OSD is using the right VGs:
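Both views can also be combined in a single command, for example:
| Code Block |
|---|
|
ceph osd df tree |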
| Code Block |
|---|
|
[root@ceph-osd-06 ~]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-50 |
...
infering bluefs devices from bluestore path |
...
...
"/var/lib/ceph/osd/ceph-50/block": { |
...
"osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7", |
...
...
"btime": "2019-02-19 11:55:47.553215", |
...
...
...
"ceph_fsid": "8162f291-00b6-4b40-a8b4-1981a8c09b64", |
...
...
"magic": "ceph osd volume v026", |
...
...
"osd_key": "AQCu4Gtc+jKSJhAAKzaAAYuTKWZs9rjJlBXWww==", |
...
...
...
...
"/var/lib/ceph/osd/ceph-50/block.db": { |
...
"osd_uuid": "dc72b996-d035-4dcd-ba42-1a6433eb78f7", |
...
...
"btime": "2019-02-19 11:55:47.573213", |
...
"description": "bluefs db" |
...
...
...
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block |
...
lrwxrwxrwx 1 ceph ceph 27 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block -> /dev/ceph-block-50/block-50 |
...
[root@ceph-osd-06 ~]# ls -l /var/lib/ceph/osd/ceph-50/block.db |
...
lrwxrwxrwx 1 ceph ceph 24 Feb 19 12:23 /var/lib/ceph/osd/ceph-50/block.db -> /dev/ceph-db-50-54/db-50 |
...
Create the other OSDs (also use --osd-id if needed, e.g. when migrating OSDs from filestore to bluestore):
| Code Block |
|---|
|
ceph-volume lvm create --bluestore --data ceph-block-51/block-51 --block.db ceph-db-50-54/db-51 |
...
ceph-volume lvm create --bluestore --data ceph-block-52/block-52 --block.db ceph-db-50-54/db-52 |
...
ceph-volume lvm create --bluestore --data ceph-block-53/block-53 --block.db ceph-db-50-54/db-53 |
...
ceph-volume lvm create --bluestore --data ceph-block-54/block-54 --block.db ceph-db-50-54/db-54 |
...
ceph-volume lvm create --bluestore --data ceph-block-55/block-55 --block.db ceph-db-55-59/db-55 |
...
ceph-volume lvm create --bluestore --data ceph-block-56/block-56 --block.db ceph-db-55-59/db-56 |
...
ceph-volume lvm create --bluestore --data ceph-block-57/block-57 --block.db ceph-db-55-59/db-57 |
...
ceph-volume lvm create --bluestore --data ceph-block-58/block-58 --block.db ceph-db-55-59/db-58 |
...
ceph-volume lvm create --bluestore --data ceph-block-59/block-59 --block.db ceph-db-55-59/db-59 |
Reboot the new OSD node:
| Code Block |
|---|
|
shutdown -r now |
Verify that the new OSDs are up.
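For example:
| Code Block |
|---|
|
ceph osd stat
ceph osd tree |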
...
Verify that all buckets are using straw2:
| Code Block |
|---|
|
ceph osd getcrushmap -o crush.map; crushtool -d crush.map | grep straw; rm -f crush.map |
If not (i.e. if some are using straw), run the following command:
| Code Block |
|---|
|
ceph osd crush set-all-straw-buckets-to-straw2
|
Warning: this could trigger a data rebalance.
Enable and start puppet:
| Code Block |
|---|
|
systemctl enable puppet
systemctl start puppet |
...
Then, after a few minutes, check that "ceph status" doesn't report PGs in peering.
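For example (no output from the grep means that no PGs are peering):
| Code Block |
|---|
|
ceph status | grep -i peering
ceph pg stat |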
Then:
| Code Block |
|---|
|
[root@c-osd-1 /]# ceph osd unset nobackfill |
...
...
[root@c-osd-1 /]# ceph osd unset norebalance |
...
...
[root@c-osd-1 /]# ceph osd unset noout |
...
...
This should trigger a data movement.
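The resulting rebalance can be monitored with, for example:
| Code Block |
|---|
|
ceph -s
ceph -w |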