I'm playing with Ceph in a Vagrant environment, trying to create a minimal cluster. I have two nodes, 'master' and 'slave': master acts as admin, monitor, and manager; slave is for the OSD.
I'm following the official ceph-deploy guides and am facing a problem with OSD creation. On the slave node I created a 10 GB loop device and mounted it at /media/vdevice, then on the master node I tried to create the OSD:
ceph-deploy osd create slave1:loop0
It fails with:
...
[slave1][WARNIN] File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 956, in verify_not_in_use
[slave1][WARNIN] raise Error('Device is mounted', dev)
[slave1][WARNIN] ceph_disk.main.Error: Error: Device is mounted: /dev/loop0
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
With loop0 unmounted, it fails with:
[slave1][WARNIN] ceph_disk.main.Error: Error: /dev/loop0 device size (0M) is not big enough for data
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
Which makes sense, as no actual storage is bound to the device. So how can I prepare the storage for an OSD?
Ceph requires a block device for each OSD. To turn a disk image file into a loopback block device you can use the losetup utility:
sudo losetup /dev/loop0 /your/10GB/file.img
This command attaches the disk image file to the /dev/loop0 device node, creating a loopback block device that can be used with Ceph.
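You can check the mapping before handing the device to ceph-deploy: losetup -a lists the attached loop devices with their backing files, and lsblk reports the device size.
sudo losetup -a
lsblk /dev/loop0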
If you need to detach the image file from the device node, you can execute:
sudo losetup -d /dev/loop0
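Keep in mind that loop device mappings do not survive a reboot, so you have to re-attach the image at boot time. A minimal sketch, for example from /etc/rc.local (assuming the image path used below):
/sbin/losetup /dev/loop0 /cephfs/vdisk.img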
Note that by default Ceph reserves 100 MB per device, so you have to make sure that your image file is larger than that. You can create a suitable image file with:
dd if=/dev/zero of=/cephfs/vdisk.img bs=1M count=10240
(Using bs=1M with count=10240 instead of a single bs=10G block keeps dd from trying to allocate a 10 GB buffer in memory, which would likely fail inside a small Vagrant VM.)
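Putting it all together, a minimal sequence would look like this (the slave1 hostname and paths are taken from the examples above; leave the loop device unmounted so ceph-disk can use it):
# on slave1: create the backing file and attach it, but do not mount it
dd if=/dev/zero of=/cephfs/vdisk.img bs=1M count=10240
sudo losetup /dev/loop0 /cephfs/vdisk.img
# on the admin node:
ceph-deploy osd create slave1:loop0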