Tags: ceph, object-storage

Ceph Pool Size max capacity


The maximum available space is set automatically, without any configuration on my side, when I create a pool in Ceph. How is it determined? And is there a way to increase the pool size if needed, without adding additional hardware?


Solution

  • The available space is determined by the number of disks, how much data each OSD already stores, the device class and the number of replicas. You can also set pool quotas, but I don't think that's what you're aiming at. The OSD with the least free space essentially determines the prediction: if your fullest (or smallest) OSD has 1 TB of free space left and your replica count (pool size) is 3, then all pools within that device class (e.g. hdd) will be limited to roughly:

    number of OSDs * free space / replica count
    

    That value can change, of course, for example if the PGs get rebalanced more evenly or if you change the replication size (or use different EC profiles). But since you wouldn't change the replica size in a production cluster, and hopefully planned your architecture carefully, the only real option to increase available space is indeed to add more disks. That's how Ceph is designed: scaling out works quite well and will even increase performance.
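The estimate above can be sketched as a few lines of Python. This is a simplified model of the prediction, not Ceph's actual MAX AVAIL code (which also accounts for the full ratio and PG distribution); the OSD free-space numbers are made up for illustration and would come from `ceph osd df` in a real cluster.

```python
def estimated_max_avail(osd_free_bytes, replica_count):
    """Rough usable capacity of a replicated pool within one device class.

    The fullest (least-free) OSD is the bottleneck: the prediction
    assumes every OSD has only that much room left, then divides the
    resulting raw capacity by the replica count.
    """
    limiting_free = min(osd_free_bytes)            # fullest/smallest OSD
    raw_capacity = limiting_free * len(osd_free_bytes)
    return raw_capacity // replica_count           # replicas consume raw space

# Hypothetical example: 6 OSDs, the fullest has 1 TB free, replica count 3
TB = 10**12
free = [1 * TB, 2 * TB, 2 * TB, 3 * TB, 2 * TB, 2 * TB]
print(estimated_max_avail(free, 3) / TB, "TB")  # → 2.0
```

Note how adding more free space to the already-emptier OSDs changes nothing: only raising the minimum (by adding disks or rebalancing data off the fullest OSD) moves the estimate.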