amazon-ec2, ceph

Ceph EC2 install failed to create osd


I'm trying to install Ceph on two EC2 instances, following this guide, but I can't create the OSDs. My cluster has only two servers, and it fails to create a partition when running this command:

ceph-deploy osd create host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1

[WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -K -f -- /dev/xvdf1
[WARNIN] can't get size of data subvolume
[WARNIN] Usage: mkfs.xfs
[WARNIN] /* blocksize */        [-b log=n|size=num]
[WARNIN] /* metadata */     [-m crc=0|1,finobt=0|1,uuid=xxx]
[WARNIN] /* data subvol */  [-d agcount=n,agsize=n,file,name=xxx,size=num,
[WARNIN]                (sunit=value,swidth=value|su=num,sw=num|noalign),
[WARNIN]                sectlog=n|sectsize=num
[WARNIN] /* force overwrite */  [-f]
[WARNIN] /* inode size */   [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
[WARNIN]                projid32bit=0|1]
[WARNIN] /* no discard */   [-K]
[WARNIN] /* log subvol */   [-l agnum=n,internal,size=num,logdev=xxx,version=n
[WARNIN]                sunit=value|su=num,sectlog=n|sectsize=num,
[WARNIN]                lazy-count=0|1]
[WARNIN] /* label */        [-L label (maximum 12 characters)]
[WARNIN] /* naming */       [-n log=n|size=num,version=2|ci,ftype=0|1]
[WARNIN] /* no-op info only */  [-N]
[WARNIN] /* prototype file */   [-p fname]
[WARNIN] /* quiet */        [-q]
[WARNIN] /* realtime subvol */  [-r extsize=num,size=num,rtdev=xxx]
[WARNIN] /* sectorsize */   [-s log=n|size=num]
[WARNIN] /* version */      [-V]
[WARNIN]            devicename
[WARNIN] <devicename> is required unless -d name=xxx is given.
[WARNIN] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
[WARNIN]       xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
[WARNIN] <value> is xxx (512 byte blocks).
[WARNIN] '/sbin/mkfs -t xfs -K -f -- /dev/xvdf1' failed with status code 1
[ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdf /dev/xvdf1
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

The same error happens on both disks I'm trying to create OSDs on. This is the ceph.conf file I'm using:

fsid = b3901613-0b17-47d2-baaa-26859c457737
mon_initial_members = host1,host2
mon_host = host1,host2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd mkfs options xfs = -K
public network = ip.ip.ip.0/24, ip.ip.ip.0/24
cluster network = ip.ip.0.0/24
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256
osd crush chooseleaf type = 3

Does anybody know how to solve this problem?


Solution

  • >>ceph-deploy osd create host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1

    You need to use the data partition device name and the journal partition device name, so it would be like:

    ceph-deploy osd create host:/dev/xvdb1:/dev/xvdb2 host:/dev/xvdf1:/dev/xvdf2

    Also, as you are creating these partitions manually, you need to change the ownership of the devices to ceph:ceph for ceph-deploy to work. For example:

    chown ceph:ceph /dev/xvdb*
    chown ceph:ceph /dev/xvdf*
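
    For reference, a minimal sketch of creating the two partitions on each disk; the device name /dev/xvdb and the 90%/10% split are just examples, adjust them to your disk layout:

    parted -s /dev/xvdb mklabel gpt
    parted -s /dev/xvdb mkpart data 1MiB 90%       # becomes /dev/xvdb1 (OSD data)
    parted -s /dev/xvdb mkpart journal 90% 100%    # becomes /dev/xvdb2 (journal)
    partprobe /dev/xvdb                            # re-read the partition table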

    NOTE: If you don't specify the journal partition (i.e. /dev/xvdb2 or /dev/xvdf2), ceph-deploy will use a file instead of a disk partition to store the journal.
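
    For example, running ceph-deploy osd create host:/dev/xvdb1 with no journal argument keeps the journal as a file on the OSD's data partition. Once osd create succeeds on both hosts, something like the following (run from a node that has the admin keyring) should show both OSDs up and in:

    ceph osd tree
    ceph -s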

    -- Deepak