Ceph version: 0.94.1
ceph -s
    cluster 30266c5f-5e10-4027-936c-e4409667b409
     health HEALTH_WARN
            65 pgs stale
            22 pgs stuck inactive
            65 pgs stuck stale
            22 pgs stuck unclean
     monmap e7: 7 mons at {kvm1=10.136.8.129:6789/0,kvm2=10.136.8.130:6789/0,kvm3=10.136.8.131:6789/0,kvm4=10.136.8.132:6789/0,kvm5=10.136.8.133:6789/0,kvm6=10.136.8.134:6789/0,kvm7=10.136.8.135:6789/0}
            election epoch 122, quorum 0,1,2,3,4,5,6 kvm1,kvm2,kvm3,kvm4,kvm5,kvm6,kvm7
     osdmap e368: 14 osds: 14 up, 14 in
      pgmap v1072573: 1128 pgs, 8 pools, 186 GB data, 51533 objects
            630 GB used, 7330 GB / 8319 GB avail
                1041 active+clean
                  65 stale+active+clean
                  22 creating
  client io 361 kB/s rd, 528 kB/s wr, 48 op/s
ceph osd stat
osdmap e368: 14 osds: 14 up, 14 in
As you can see, I have issues with stale/inactive/unclean PGs. I tried to run:
ceph pg 0.21 query
This hangs (0.21 is one of the stale PGs). strace shows:
[pid 4850] futex(0x7f8cd8003984, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f8cd8003980, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1} <unfinished ...>
[pid 4855] <... sendmsg resumed> ) = 9
[pid 4850] <... futex resumed> ) = 1
[pid 4855] futex(0x7f8cd8026cd4, FUTEX_WAIT_PRIVATE, 19, NULL <unfinished ...>
[pid 4841] <... futex resumed> ) = 0
[pid 4850] futex(0x7f8cd801e2ac, FUTEX_WAIT_PRIVATE, 11, NULL <unfinished ...>
[pid 4841] futex(0x7f8cd8003900, FUTEX_WAKE_PRIVATE, 1) = 0
[pid 4841] futex(0x7f8cd8003984, FUTEX_WAIT_PRIVATE, 39, NULL <unfinished ...>
[pid 4833] <... select resumed> ) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 4000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 8000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 16000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 32000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
[pid 4833] select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
It never comes back with any information; other PGs do return the proper JSON data. I tried restarting osd0, but I'm not seeing any errors.
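In case it is useful: mon-side commands like the ones below should still return output even while the per-PG query hangs, since they only talk to the monitors (0.21 is just the stale PG from above):
ceph pg dump_stuck stale
ceph pg dump_stuck inactive
ceph pg map 0.21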
Does anybody have any ideas?
I found the issue! It was with pools that had no OSDs left after the OSDs were moved away via CRUSH rules. I am not exactly sure why the PGs were created and the rules simply allowed the OSDs to be moved away, but that is not material.
After I deleted all the empty pools, everything is fine now.
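If you want to see how a pool can end up like this, a rough way to check (on this Hammer release the pool setting is called crush_ruleset; later releases renamed it) is to look at which CRUSH rule the pool uses and whether that rule can still pick any OSDs from the tree:
ceph osd pool get <pool name> crush_ruleset
ceph osd crush rule dump
ceph osd tree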
For those who want a procedure for finding this out:
First:
ceph health detail
To find which PGs had the issue, then:
ceph pg ls-by-pool <pool name>
To match the PGs with their pools. Afterwards, delete the empty pool (see the check after the command) with:
ceph osd pool delete <pool name> <pool name> --yes-i-really-really-mean-it
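Before running the delete, it is worth double-checking that the pool really is empty; rados ls should print nothing for an empty pool, and ceph df lists per-pool object counts:
rados -p <pool name> ls
ceph df
Afterwards, re-run ceph health detail or ceph -s to confirm the stale and creating PGs are gone.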