I have an EC2 instance that keeps running out of memory, which in turn kills my mongod process.
Doing df -h gives
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           396M   41M  355M  11% /run
/dev/xvda1      7.8G  7.4G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           396M     0  396M   0% /run/user/1000
So my /dev/xvda1 is full. I go to my '/' (root) directory and run
ls -l --block-size=M
total 1M
drwxr-xr-x 2 root root 1M Nov 22 04:49 bin
drwxr-xr-x 3 root root 1M Dec 21 13:14 boot
drwxrw-rwx 3 root root 1M Oct 18 21:01 data
drwxr-xr-x 16 root root 1M Oct 18 20:30 dev
drwxr-xr-x 91 root root 1M Dec 14 01:29 etc
drwxr-xr-x 3 root root 1M Oct 18 20:30 home
lrwxrwxrwx 1 root root 1M Dec 21 13:14 initrd.img -> boot/initrd.img-4.4.0-57-generic
lrwxrwxrwx 1 root root 1M Dec 6 05:03 initrd.img.old -> boot/initrd.img-4.4.0-53-generic
drwxr-xr-x 21 root root 1M Sep 7 19:24 lib
drwxr-xr-x 2 root root 1M Sep 7 19:22 lib64
drwx------ 2 root root 1M Sep 7 19:26 lost+found
drwxr-xr-x 2 root root 1M Sep 7 19:22 media
drwxr-xr-x 2 root root 1M Sep 7 19:22 mnt
drwxr-xr-x 2 root root 1M Sep 7 19:22 opt
dr-xr-xr-x 139 root root 0M Oct 18 20:29 proc
drwx------ 4 root root 1M Oct 18 21:00 root
drwxr-xr-x 23 root root 1M Dec 25 13:55 run
drwxr-xr-x 2 root root 1M Oct 19 06:11 sbin
drwxr-xr-x 2 root root 1M Sep 1 17:37 snap
drwxr-xr-x 2 root root 1M Sep 7 19:22 srv
dr-xr-xr-x 13 root root 0M Dec 25 13:59 sys
drwxrwxrwt 11 root root 1M Dec 25 14:17 tmp
drwxr-xr-x 10 root root 1M Sep 7 19:22 usr
drwxr-xr-x 14 root root 1M Oct 18 20:52 var
lrwxrwxrwx 1 root root 1M Dec 21 13:14 vmlinuz -> boot/vmlinuz-4.4.0-57-generic
lrwxrwxrwx 1 root root 1M Dec 6 05:03 vmlinuz.old -> boot/vmlinuz-4.4.0-53-generic
If I add up all the file sizes, it doesn't come to 7.4 GB. Then what is using the space, and how do I fix this so that the disk doesn't fill up and kill my mongod process again?
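Update: as the answer below explains, ls -l on a directory reports the size of the directory entry itself, not the total size of everything inside it, which is why every entry above shows 1M. To measure what a directory's contents actually occupy, use du instead; the paths here are just examples:

du -sh /var /home /tmp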
Some of the answers said to restart the system. After restarting, this is the output:
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           396M  5.6M  390M   2% /run
/dev/xvda1      7.8G  5.3G  2.2G  72% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           396M     0  396M   0% /run/user/1000
I am still using 5.3G of space. What was filling the other 2 GB? How do I drill down into the issue?
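One common cause of space reappearing after a reboot (an assumption on my part, since I did not check before rebooting) is a process that still holds deleted files open: the blocks are not released until the file is actually closed, which a reboot forces. You can list such files and see which process holds them:

# show open files whose on-disk link count is below 1, i.e. deleted but still open
sudo lsof +L1

If a service such as mongod turns out to be the one holding them, restarting just that service frees the space without a full reboot.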
Okay, now I have figured it out. First run df -h. It tells you how much space each filesystem has used; we are only concerned with /dev/xvda1. Then go to the root directory ( / ) and run du -h -d 1, which means disk usage, reported one directory level deep, in human-readable format. This tells you how much space each directory is using.
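Sorting the output makes the culprits easy to spot, and -x keeps du on this one filesystem so it doesn't wander into /proc or other mounts:

sudo du -xh -d 1 / | sort -h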
I then go to whichever directory is taking up space and remove what I can. In my case it was the logs, so I wrote scripts to automatically gzip them, secure-copy them to my local machine, and delete them from the instance. That solved the problem.
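For reference, here is a minimal sketch of such a cleanup script. The log directory, remote destination, and one-day age cutoff are placeholders for my setup; adjust them for yours:

#!/bin/bash
# archive-logs.sh -- gzip old logs, copy them off the instance, delete them locally.
# LOG_DIR and REMOTE are placeholders; set them for your own machine.
LOG_DIR=/var/log/mongodb
REMOTE=me@my-local-machine:~/ec2-log-backups/

# compress logs older than one day (already-gzipped files don't match the -name filter)
find "$LOG_DIR" -name '*.log' -mtime +1 -exec gzip {} \;

# copy each archive to the local machine; delete it only if the copy succeeded
for f in "$LOG_DIR"/*.gz; do
    [ -e "$f" ] || continue          # handle the no-matches case
    scp "$f" "$REMOTE" && rm "$f"
done

One caveat: if mongod still has its current log file open, rotate it first (e.g. with logrotate, or MongoDB's db.adminCommand({ logRotate: 1 })) so you never compress a file that is still being written to.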