I added an extra drive for Ceph, but after zapping the disk the OSD creation failed because the device was still held by device-mapper. After a reboot the OSD was created properly, but when I run ceph osd tree
I get:
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 4.53099 root default
-2 3.62700     host mymachine2
 0 0.90399         osd.0        up     1.00000  1.00000
 3 2.72299         osd.3        up     1.00000  1.00000
-3 0.90399     host mymachine4
 1 0.90399         osd.1        up     1.00000  1.00000
 2       0         osd.2        down   0        1.00000
I've read the docs but didn't find a way to remove that "rogue" osd.2.
ceph health
isn't displaying any warnings or errors for now. Any suggestions?
Try this:
ceph osd crush reweight osd.2 0.0
Then wait for the cluster to finish rebalancing, and run:
ceph osd out 2
service ceph stop osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2
Does this resolve the problem?
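The steps above can be collected into a small wrapper script. This is just a sketch, not an official tool: the run helper, the DRY_RUN flag, and the OSD_ID variable are all my own additions. With DRY_RUN=1 (the default here) it only prints each command so you can review the sequence before running it for real against the cluster.

```shell
#!/bin/sh
# Hypothetical wrapper around the OSD removal steps from the answer above.
# OSD_ID selects the OSD to remove (here the stray osd.2).
OSD_ID=${OSD_ID:-2}
DRY_RUN=${DRY_RUN:-1}

# Print the command instead of executing it when DRY_RUN=1.
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Drain data off the OSD first; wait for rebalancing to finish
# (watch "ceph -s") before continuing with the next steps.
run ceph osd crush reweight "osd.$OSD_ID" 0.0

# Mark the OSD out, stop its daemon, then remove it from the CRUSH map,
# the auth database, and finally the OSD map.
run ceph osd out "$OSD_ID"
run service ceph stop "osd.$OSD_ID"
run ceph osd crush remove "osd.$OSD_ID"
run ceph auth del "osd.$OSD_ID"
run ceph osd rm "$OSD_ID"
```

Once the dry-run output looks right, rerun it with DRY_RUN=0 to actually execute the commands; osd.2 should then disappear from ceph osd tree.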