...
| Code Block |
|---|
|
[root@c-osd-5 /]# vgcreate ceph-block-12 /dev/vdb
Device /dev/vdb excluded by a filter. |
This is because the disk has a GPT. Let's delete it with gdisk:
| Code Block |
|---|
|
[root@c-osd-5 /]# gdisk /dev/vdb
GPT fdisk (gdisk) version 0.8.10
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): x
Expert command (? for help): ?
a set attributes
c change partition GUID
d display the sector alignment value
e relocate backup data structures to the end of the disk
g change disk GUID
h recompute CHS values in protective/hybrid MBR
i show detailed information on a partition
l set the sector alignment value
m return to main menu
n create a new protective MBR
o print protective MBR data
p print the partition table
q quit without saving changes
r recovery and transformation options (experts only)
s resize partition table
t transpose two partition table entries
u replicate partition table on new device
v verify disk
w write table to disk and exit
z zap (destroy) GPT data structures and exit
? print this menu
Expert command (? for help): z
About to wipe out GPT on /dev/vdb. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N):
Your option? (Y/N): Y |
or:
| Code Block |
|---|
|
sdb: device is partitioned
Command requires all devices to be found. |
This is because the disk has a GPT. Let's delete it non-interactively with sgdisk:
| Code Block |
|---|
|
sgdisk --zap-all /dev/sdb |
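If several disks on the node need their partition tables wiped, the same sgdisk call can be scripted. A minimal dry-run sketch (the device names here are placeholders, not from the original runbook; the loop only prints the commands until the echo is removed):

```shell
# Dry run: print the sgdisk command for each candidate disk.
# /dev/vdb and /dev/vdc are placeholder device names; adjust for your node.
for dev in /dev/vdb /dev/vdc; do
  echo sgdisk --zap-all "$dev"   # drop "echo" to actually wipe the disk
done
```

Double-check the printed device list before removing the echo: --zap-all destroys the partition table irrecoverably.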
If it doesn't exist yet, create the file:
...
| Code Block |
|---|
|
ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50 --osd-id 50 |
The above command could trigger some data movement in the cluster.
...
| Code Block |
|---|
|
ceph-volume lvm create --bluestore --data ceph-block-51/block-51 --block.db ceph-db-50-54/db-51 --osd-id 51
ceph-volume lvm create --bluestore --data ceph-block-52/block-52 --block.db ceph-db-50-54/db-52 --osd-id 52
ceph-volume lvm create --bluestore --data ceph-block-53/block-53 --block.db ceph-db-50-54/db-53 --osd-id 53
ceph-volume lvm create --bluestore --data ceph-block-54/block-54 --block.db ceph-db-50-54/db-54 --osd-id 54
ceph-volume lvm create --bluestore --data ceph-block-55/block-55 --block.db ceph-db-55-59/db-55 --osd-id 55
ceph-volume lvm create --bluestore --data ceph-block-56/block-56 --block.db ceph-db-55-59/db-56 --osd-id 56
ceph-volume lvm create --bluestore --data ceph-block-57/block-57 --block.db ceph-db-55-59/db-57 --osd-id 57
ceph-volume lvm create --bluestore --data ceph-block-58/block-58 --block.db ceph-db-55-59/db-58 --osd-id 58
ceph-volume lvm create --bluestore --data ceph-block-59/block-59 --block.db ceph-db-55-59/db-59 --osd-id 59 |
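The ten commands above follow a regular pattern (block-NN lives in VG ceph-block-NN, and each group of five OSDs shares one db volume group), so they can also be generated with a loop. A dry-run sketch, assuming the same VG/LV naming as above:

```shell
# Print the ceph-volume command for each OSD id (dry run: echo only).
for id in $(seq 50 59); do
  # Each group of five OSDs shares one db volume group.
  if [ "$id" -le 54 ]; then dbvg=ceph-db-50-54; else dbvg=ceph-db-55-59; fi
  echo ceph-volume lvm create --bluestore \
       --data "ceph-block-${id}/block-${id}" \
       --block.db "${dbvg}/db-${id}" \
       --osd-id "${id}"   # drop "echo" to actually create the OSD
done
```

Review the generated lines before removing the echo and running them for real.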
Reboot the new OSD node:
...