When I run my app locally with PM2 v3.5.0, everything works fine. However, when I deploy it to the Google App Engine flexible environment on GCP, PM2 keeps restarting the app.
Here is my PM2 config file:
{
  "apps": [{
    "name": "prod_client",
    "script": "./bin/www",
    "exec_mode": "cluster_mode",
    "instances": 1,
    "watch": false,
    "env": {
      "NODE_ENV": "production"
    }
  }, {
    "name": "prod_api",
    "script": "./src/server/apiServer.js",
    "exec_mode": "cluster_mode",
    "instances": 1,
    "watch": false,
    "env": {
      "NODE_ENV": "production"
    }
  }]
}
Interestingly, I do not get any useful logs. Note that everything works fine on my local machine; PM2 doesn't complain there.
We had this same issue. It was caused by PM2 trying to write files to storage, which Google App Engine (GAE) generally does not support (more on this below). Also, we never got it fully working because, sadly, there is a problem with the pidusage memory check on GAE that has not yet been resolved [1].
So, to address the issue initially, we used the configuration to redirect the log and pidfile paths to /dev/stdout and /dev/null respectively. This got PM2 running, but it still was not working quite right; it was struggling to read the pidfile, for example.
However, GAE does allow writing to /tmp (we were using Standard, but I imagine Flex has similar support) [2]. So, we removed the pidfile configuration and instead changed the start script to set PM2_HOME=/tmp/.pm2. This got us as close to working as we could, given the pidusage issue mentioned earlier.
PM2_HOME=/tmp/.pm2 pm2 start ecosystem.config.js --env production --no-daemon --mini-list
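GAE's Node.js runtimes launch the deployed app via npm start, so a sketch of where that command might live, assuming a package.json start script (names here are illustrative):

```json
{
  "scripts": {
    "start": "PM2_HOME=/tmp/.pm2 pm2 start ecosystem.config.js --env production --no-daemon --mini-list"
  }
}
```

Note that --no-daemon keeps PM2 in the foreground, which is what a container environment like GAE expects from the start command.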
The ecosystem.config.js configuration was something like:
module.exports = {
  apps: [
    {
      name: "service",
      script: "main.js",
      kill_timeout: 15000,
      max_memory_restart: "400M",
      exec_mode: "cluster",
      instances: 1,
      out_file: "/dev/stdout",
      error_file: "/dev/stderr",
      env: {
        NODE_ENV: "development",
        BLAH: "1",
      },
      env_production: {
        NODE_ENV: "production",
        BLAH: "0",
      },
    },
  ],
};