According to the official man page for coredump.conf, the disk space that core dumps may use per node in Kubernetes can be changed by setting MaxUse to X%; by default it is 10% of the size of the respective file system.
https://man7.org/linux/man-pages/man5/coredump.conf.5.html
Here is what I have tried:
1st solution: modified /etc/systemd/coredump.conf by uncommenting MaxUse (removing the leading #) and changing it to 20%:
cat /etc/systemd/coredump.conf
MaxUse=20%
After that, I ran "sudo systemctl daemon-reload" so the change would take effect, but it didn't work.
2nd solution: added a drop-in file /etc/systemd/coredump.conf.d/custom.conf to override /etc/systemd/coredump.conf, as described here: https://wiki.archlinux.org/index.php/Core_dump
[Coredump]
MaxUse=20%
After that, I again ran "sudo systemctl daemon-reload", but it didn't work either. I generated multiple core dumps to check whether either solution was working, and neither was.
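One way to confirm that a drop-in like this is actually being picked up is to print the merged configuration. Note that systemd-analyze cat-config only exists in newer systemd releases (roughly v240+); the cat fallback below is just for older systems and the final echo for machines without the files:

```shell
# Show the effective coredump configuration: the main file plus any
# drop-ins under /etc/systemd/coredump.conf.d/.
systemd-analyze cat-config systemd/coredump.conf 2>/dev/null \
  || cat /etc/systemd/coredump.conf /etc/systemd/coredump.conf.d/*.conf 2>/dev/null \
  || echo "no coredump configuration found"
```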
I dumped core from my test application, and just to get large cores I used the command below:
$ sudo dd if=/dev/zero of=abc.xz bs=1024 count=10240000
While testing this, my core dump directory /var/lib/systemd/coredump/ held over 150G of cores.
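To keep an eye on that directory while testing, a quick size check helps (the path is the systemd-coredump default; the fallback message is only for machines where the directory doesn't exist):

```shell
# Report total disk usage of stored core dumps.
du -sh /var/lib/systemd/coredump/ 2>/dev/null \
  || echo "coredump directory not found"
```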
It looks like the docs are misleading when they say MaxUse defaults to 10%. The systemd code appears to parse the value as a size in bytes (with K, M, G, etc. suffixes), so you can try something like:
[Coredump]
MaxUse=20G
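To illustrate what that suffix form means, here is a small sketch of parsing a size value such as 20G. This is illustrative only, not systemd's actual parser; it assumes binary (1024-based) suffixes, which is how systemd interprets K/M/G for size settings:

```python
# Illustrative sketch only -- not systemd's code. Assumes binary
# (1024-based) suffixes K, M, G, T for size values like MaxUse=20G.
def parse_size(value: str) -> int:
    suffixes = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}
    value = value.strip()
    if value and value[-1].upper() in suffixes:
        # Suffixed value: multiply the numeric part by the suffix factor.
        return int(float(value[:-1]) * suffixes[value[-1].upper()])
    # Plain number: interpreted as bytes.
    return int(value)

print(parse_size("20G"))  # 21474836480
```

So MaxUse=20G caps the coredump directory at 20 * 1024^3 bytes, whereas a percentage like 20% would be relative to the file system size.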