I set up a test cluster following the documentation. I created the cluster with the command `ceph-deploy new node1`. After that, a Ceph configuration file appeared in the current directory, containing information about the monitor on the node with hostname node1. Then I added two OSDs to the cluster.

So now I have a cluster with 1 monitor and 2 OSDs, and the `ceph status` command reports `HEALTH_OK`.
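For reference, the file that `ceph-deploy new` drops into the working directory typically looks something like this (the fsid and IP address here are placeholders, not values from the cluster in question):

```ini
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeacfa0bfe
mon_initial_members = node1
mon_host = 192.168.0.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```

Note that only the initial monitor appears; nothing about OSDs is written here.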
Following the same documentation, I moved on to the section "Expanding your cluster" and added two new monitors with the commands `ceph-deploy mon add node2` and `ceph-deploy mon add node3`. Now I have a cluster with three monitors in the quorum and status `HEALTH_OK`, but there is one small discrepancy: `ceph.conf` is still the same. It contains the old information about only one monitor. Why didn't the `ceph-deploy mon add {node-name}` command update the configuration file? And the main question: why does `ceph status` display correct information about the new cluster state with 3 monitors, while `ceph.conf` doesn't contain this information? Where is the real configuration, and why does `ceph-deploy` know it when I don't?
It even survives a reboot. All Ceph daemons start, read the outdated `ceph.conf` (I checked this with `strace`) and, ignoring it, work fine with the new configuration.
And one last question: why didn't the `ceph-deploy osd activate {ceph-node}:/path/to/directory` command update the configuration file either? After all, why do we need a `ceph.conf` file at all if we now have such a smart `ceph-deploy`?
You have multiple questions here.
1) `ceph.conf` does not need to be identical on every node for the cluster to run. OSD nodes only need the OSD-related settings they care about, and monitor nodes only need the monitor-related settings (unless you run everything on the same node, which is not recommended anyway). So it is fine if node1's `ceph.conf` mentions only mon.node1, node2's mentions only mon.node2, and node3's mentions only mon.node3.
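That said, if you want `ceph.conf` to reflect the expanded cluster anyway, you can edit the monitor lines yourself; a sketch, with placeholder IPs:

```ini
[global]
mon_initial_members = node1, node2, node3
mon_host = 192.168.0.1, 192.168.0.2, 192.168.0.3
```

You can then distribute the edited file to the nodes with `ceph-deploy --overwrite-conf config push node1 node2 node3`. This is purely for the convenience of clients and admins reading the file; the running monitors do not need it.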
2) When a monitor is created and then added, the monitor map (monmap) is updated, so each monitor already knows which other monitors are required to form a quorum. Monitors do not rely on `ceph.conf` for quorum information; the file is only consulted for runtime configuration options.
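You can inspect this "real configuration" directly by asking the cluster for its monmap and quorum status (these commands query the live monitors, so they work regardless of what your local `ceph.conf` says about monitor membership):

```shell
# Dump the current monitor map the cluster itself maintains;
# it lists all three monitors even though ceph.conf names only one.
ceph mon dump

# Show which monitors are currently in quorum.
ceph quorum_status --format json-pretty
```

This is why `ceph status` is correct: it reports what the monitors know, not what your local file says.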
3) `ceph-deploy` is just a Python script that prepares and runs Ceph commands for you. If you look at the details, it uses `ceph-disk` under the hood (e.g. its `zap`, `prepare`, and `activate` subcommands). Once an OSD has been prepared and activated, its partition is formatted and tagged as a Ceph partition, so udev knows where to mount it; systemd's `ceph-osd.service` then activates the `ceph-osd` daemon at boot. That's why no OSD information is needed in `ceph.conf` at all.
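A sketch of how those pieces fit together at boot, assuming a `ceph-disk`-era install (the unit name, OSD id, and rules-file path are typical but may vary by version and distribution):

```shell
# The OSD partition carries a Ceph-specific GPT type GUID; a udev rule
# shipped by the ceph package matches it and triggers activation:
cat /lib/udev/rules.d/95-ceph-osd.rules

# Each activated OSD then runs as an instance of a templated systemd unit,
# here OSD id 0:
systemctl status ceph-osd@0
```

So the OSD's identity and mount point travel with the disk itself, not with `ceph.conf`.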