
HTTP POST request between Docker containers on EC2 giving Internal Server Error


So I have 3 containers on the same EC2 instance: Node.js, DB, and Flask. The Node.js container connects fine with the DB container and all routes are working. There is one route from Node.js to Flask that is giving an "internal server error". I am using the "host" networkMode in my task definition, so the containers can reach each other via 'localhost:xxxx'.

The POST request sends an image file and gets back JSON as the response.

The same route works fine and gives the right result when I test from my local machine, i.e., the Node container running locally routing to the Flask container on AWS EC2.

I don't understand why this fails on the EC2 side but works fine from local. All the inbound rules are correct. Node is on port 3000 and Flask is on port 5000.
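With host networkMode the containers share the instance's network stack, so one quick way to separate a networking problem from an application error is to hit the Flask port directly from inside the Node container. A minimal sketch, assuming Flask listens on 5000 as described (the /predict path is a guess based on the code below); any HTTP status, even 404 or 405, proves the connection works, while only a network error means Flask is unreachable:

    // Quick reachability check from inside the Node container.
    // Assumes node-fetch; newer Node versions have a global fetch instead.
    const fetch = require('node-fetch');

    fetch('http://localhost:5000/predict')
      .then((res) => console.log('Flask reachable, status:', res.status))
      .catch((err) => console.error('Cannot reach Flask on localhost:5000:', err.message));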

The code in Node that connects to the Flask API:

// Assuming node-fetch and form-data, which this fetch/FormData pattern typically uses
const fs = require('fs');
const path = require('path');
const { unlink } = require('fs/promises');
const FormData = require('form-data');
const fetch = require('node-fetch');

const form = new FormData();
form.append('image', fs.createReadStream(path.join(path.resolve(''), '/assets/images', image.filename)));

/** Redirect to the Flask API with the image file and delete the local copy */
const urlProxy = new Proxy(redirectMapping, validator); // maps route names to Flask URLs
const response = await fetch(urlProxy.predict, { method: 'POST', body: form });
const flaskData = await response.json();
await unlink(path.join(path.resolve(''), '/assets/images', image.filename));
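One debugging note on this snippet: `response.json()` will also throw if Flask returns its HTML 500 page, masking the real failure. A small variation that checks `response.ok` first and logs the raw body makes the Flask-side error visible (this reuses the same hypothetical `urlProxy` and `form` from above):

    // Sketch: surface the actual Flask error instead of a generic failure.
    const response = await fetch(urlProxy.predict, { method: 'POST', body: form });
    if (!response.ok) {
      // Flask's error page is HTML, so read it as plain text before giving up.
      const body = await response.text();
      throw new Error(`Flask returned ${response.status}: ${body}`);
    }
    const flaskData = await response.json();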

Here are my container definitions in task-definition.yaml:

"containerDefinitions": [
        {
            "name": "KlemFlask",
            "image": "klemrepo:latest",
            "cpu": 0,
            "memoryReservation": 128,
            "portMappings": [
                {
                    "containerPort": 5000,
                    "hostPort": 5000,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "mountPoints": [],
            "volumesFrom": []
         
            }
        },
        {
            "name": "klemweb",
            "image": "klemtech/node-prod",
            "cpu": 0,
            "memoryReservation": 50,
            "portMappings": [
                {
                    "containerPort": 3000,
                    "hostPort": 3000,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "environmentFiles": [
                {
                    "value": "klempgdata/node.env",
                    "type": "s3"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "dependsOn": [
                {
                    "containerName": "klemdb",
                    "condition": "START"
                }
            ]
        },
        {
            "name": "klemdb",
            "image": "postgres:14-alpine",
            "cpu": 0,
            "memoryReservation": 50,
            "portMappings": [
                {
                    "containerPort": 5432,
                    "hostPort": 5432,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "environmentFiles": [
                {
                    "value": "klempgdata/database.env",
                    "type": "s3"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "pgdata",
                    "containerPath": "/var/lib/postgresql/data"
                }
            ],
            "volumesFrom": []
        }
    ]

Solution

  • Thanks, everyone. The real issue was the permissions on the working directory in the Node container; changing the permissions solved the problem.
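For anyone hitting the same thing: if the Node process cannot read or write its assets/images directory, `fs.createReadStream` and `unlink` fail before the request ever succeeds. A minimal sketch to make such permission problems fail loudly at startup (the directory path matches the one used in the question; everything else is illustrative):

    // Sketch: verify the Node process can read and write the upload
    // directory at startup, so permission problems surface immediately.
    const path = require('path');
    const { access } = require('fs/promises');
    const { constants } = require('fs');

    const uploadDir = path.join(path.resolve(''), '/assets/images');

    access(uploadDir, constants.R_OK | constants.W_OK)
      .then(() => console.log(`OK: ${uploadDir} is readable and writable`))
      .catch((err) => {
        console.error(`Permission problem on ${uploadDir}:`, err.message);
        process.exit(1);
      });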