Tags: amazon-web-services, docker, socket.io, sails.js, amazon-alb

400 Bad Request on a socket connection hosted behind an Amazon Application Load Balancer


Background

I am setting up the Kong admin UI to connect to the Kong API Gateway.

I am using the Dockerfile provided by the Kong admin project.

Problem

The Docker container works fine on my local machine and the UI loads as expected.

[Screenshot: the admin UI loaded successfully on the local machine]

However, when I try to access the same container hosted on Amazon ECS, it does not work; it just keeps showing the loader.

[Screenshot: the hosted UI stuck on the loading spinner]

Infrastructure

The Docker container is hosted behind an Amazon Application Load Balancer and listens on port 80. Traffic on port 80 is then forwarded to port 1337 inside the container.

The load balancer URL: http://staging.host.internal
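
For reference, a minimal sketch of the relevant port mapping in the ECS task definition (assuming classic bridge networking; the values mirror the setup described above, and everything else in the task definition is omitted):

    "portMappings": [
        {
            "containerPort": 1337,
            "hostPort": 80,
            "protocol": "tcp"
        }
    ]

The ALB target group points at the host port, and ECS forwards that traffic to port 1337 inside the container.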

Error

Request

General:

Request URL: http://staging.host.internal/socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=MTvxlu9&sid=lH69C1E52B3aGVIwAANl
Request Method: GET
Status Code: 400 Bad Request
Remote Address: xx.xx.xx.xx:xx
Referrer Policy: no-referrer-when-downgrade

Response headers:

Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json
Date: Tue, 04 Dec 2018 15:56:05 GMT
Transfer-Encoding: chunked

Request headers:

Accept: */*
Accept-Encoding: gzip, deflate
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
Connection: keep-alive
Cookie: io=lH69C1E52B3aGVIwAANl

Response

{"code":1,"message":"Session ID unknown"}

I am getting the error below in the browser console:

WebSocket connection to 'ws://staging.host.internal/socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=j-RcLmqGi5bZoQ4YAAPF' failed: WebSocket is closed before the connection is established.

With DEBUG=socket.io.* enabled, the server logs show the following:

Tue, 04 Dec 2018 15:07:26 GMT socket.io-parser encoding packet
{
    "type": 0,
    "nsp": "/"
}
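
The failing exchange can also be reproduced with plain curl, outside the browser (a sketch; <sid-from-step-1> has to be copied out of the first response body):

    # 1. Open an Engine.IO polling session; the JSON in the response body contains a "sid"
    curl -i 'http://staging.host.internal/socket.io/?EIO=3&transport=polling'

    # 2. Replay that sid; this intermittently answers
    #    400 {"code":1,"message":"Session ID unknown"} - the same error the browser gets
    curl -i 'http://staging.host.internal/socket.io/?EIO=3&transport=polling&sid=<sid-from-step-1>'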

Can anyone please point me in the right direction for debugging this? I do not have a starting point.


Solution

  • I had to enable sticky sessions on the ALB, since I had multiple Docker containers hosted behind the load balancer.

    The issue was that, without sticky sessions, the request that logged me in (and issued the WebSocket session ID) was handled by one container, while subsequent requests were load-balanced to a different container that did not recognise that session ID. Because the servers were load balanced, I was getting alternating success and failure responses.

    https://www.looklinux.com/enable-sticky-session-application-load-balancer-aws/
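
    For reference, stickiness can also be enabled from the AWS CLI instead of the console (a sketch; the target group ARN is a placeholder, and the one-day cookie duration is just the value I picked):

        # Enable duration-based stickiness (the AWSALB cookie) on the ALB target group
        aws elbv2 modify-target-group-attributes \
            --target-group-arn <target-group-arn> \
            --attributes Key=stickiness.enabled,Value=true \
                         Key=stickiness.type,Value=lb_cookie \
                         Key=stickiness.lb_cookie.duration_seconds,Value=86400

    An alternative often suggested for this class of problem is pinning the client to the websocket transport (for sails.io.js, io.sails.transports = ['websocket']) so that no long-polling requests need to stick to one backend, though I have not verified that with this setup.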