I'm using server side rendering with Angular universal, and PM2 as the process manager, in a Digital Ocean droplet of 8 GB Memory / 80 GB Disk / Ubuntu 16.04.3 x64 / 4 vCPUs.
I use a 6 GB swap file, and "free -m" reports the following:

              total   used   free   shared  buff/cache  available
Mem:           7983   1356   5290       16        1335       6278
Swap:          6143     88   6055
The RAM usage looks fine. PM2 runs 4 processes in cluster mode.
Every 6-8 hours, when memory usage reaches ~88% in my Digital Ocean panel, the CPU spikes, the web application stops responding correctly, and PM2 has to restart the processes. I'm not sure how long the web application stays unresponsive.
Here is an image of what happens:
Performance is fine when working normally:
I think I'm missing some sort of configuration, since this always happens at the same intervals.
EDIT 1: So far I have fixed some incompatibilities in my code (the app was working, but sometimes failed because of them) and added a 1 GB memory limit in PM2. I'm not sure this is the right approach since I'm fairly new to process management, but the CPU levels are fine now. Any comment is appreciated. Here is a picture of the current behaviour; every time one of the four processes reaches 1 GB, it's restarted:
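For reference, the setup described above (4 cluster-mode workers plus a 1 GB restart threshold) can be expressed in a PM2 ecosystem file. This is a sketch; the app name and script path are placeholders for whatever your actual entry point is:

```javascript
// ecosystem.config.js — sketch of a PM2 config matching the setup described.
// 'angular-universal' and 'dist/server.js' are placeholder values.
module.exports = {
  apps: [{
    name: 'angular-universal',
    script: 'dist/server.js',
    instances: 4,               // one worker per vCPU on this droplet
    exec_mode: 'cluster',
    max_memory_restart: '1G'    // restart a worker once it exceeds 1 GB
  }]
};
```

Started with `pm2 start ecosystem.config.js`, this gives the same behaviour shown in the picture: each worker is recycled individually when it hits the limit, so the other three keep serving traffic.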
EDIT 2: I'm adding 3 more images: 2 showing top processes from Digital Ocean, and one showing the Keymetrics status:
EDIT 3: I found some memory leaks in my Angular app (I forgot to unsubscribe from a couple of subscriptions) and the system behaviour improved, but the memory line is still climbing. I'll keep investigating memory leaks in Angular and see whether I've made other mistakes:
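To illustrate why a forgotten unsubscribe leaks memory: a long-lived source holds a reference to every listener until `unsubscribe()` is called, so any component state captured by the callback can never be garbage-collected. The sketch below is framework-free; `Emitter`, `LeakyComponent`, and `TidyComponent` are hypothetical stand-ins for an RxJS observable and two Angular components, not real Angular APIs:

```typescript
// Minimal stand-in for an RxJS-style subscription: the emitter keeps a
// reference to every listener until unsubscribe() is called.
type Listener<T> = (value: T) => void;

class Emitter<T> {
  private listeners = new Set<Listener<T>>();
  subscribe(fn: Listener<T>): { unsubscribe: () => void } {
    this.listeners.add(fn);
    return { unsubscribe: () => this.listeners.delete(fn) };
  }
  get listenerCount(): number { return this.listeners.size; }
}

// Subscribes but never stores the subscription — nothing can ever clean it up,
// so the emitter keeps this component (and everything it references) alive.
class LeakyComponent {
  constructor(source: Emitter<number>) {
    source.subscribe(v => this.onValue(v));
  }
  onValue(_v: number) {}
}

// Stores the subscription and tears it down, analogous to unsubscribing
// in Angular's ngOnDestroy.
class TidyComponent {
  private sub: { unsubscribe: () => void };
  constructor(source: Emitter<number>) {
    this.sub = source.subscribe(v => this.onValue(v));
  }
  onDestroy() { this.sub.unsubscribe(); }
  onValue(_v: number) {}
}

const source = new Emitter<number>();
new LeakyComponent(source);
const tidy = new TidyComponent(source);
console.log(source.listenerCount); // 2 — both components registered
tidy.onDestroy();
console.log(source.listenerCount); // 1 — the leaky component's listener is still held
```

On a server-rendered app this matters more than in the browser: the process lives for days, so every leaked listener accumulates instead of being wiped by a page reload.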
It looks like your Angular Universal app is leaking memory: it should not grow over time as you observe, but stay mostly flat.
You can try to track down the memory leak (it looks like you already found one issue and have a suspicion about what else it could be).
Another thing you can try is restarting your app periodically. See here for an example of how to restart your PM2 process every couple of hours to reset memory and prevent the OOM situation you've been running into.
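A periodic restart as a stopgap can be scheduled without PM2-specific features by using a plain system cron entry; the app name below is a placeholder, and the PM2 binary path may differ on your droplet:

```shell
# Add a crontab entry that restarts the PM2 app every 6 hours.
# 'angular-universal' is a placeholder app name; adjust the pm2 path as needed.
(crontab -l 2>/dev/null; echo '0 */6 * * * /usr/bin/env pm2 restart angular-universal') | crontab -
```

This is a workaround, not a fix: it caps how far the leak can grow, but finding the remaining leak is still the real solution.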