Tags: node.js, mongodb, ssl, nginx, http2

net::ERR_CONNECTION_CLOSED on remote server when there are more than 7 sub-documents in mongo document


I am developing a MEAN project with Angular 4.1.0.

On my localhost, everything works without errors. When I deploy to the server, however, retrieving a user with 8 or more question-answer pairs causes a net::ERR_CONNECTION_CLOSED error on the XHR request that Angular's Http service fires.

The DigitalOcean droplet I am hosting on sits behind an nginx reverse proxy and uses a Let's Encrypt SSL certificate. I have tried:

  • Restarting the server, the nginx service, the Node.js app, etc.
  • Increasing client_max_body_size to 20M in the nginx config file
  • Increasing large_client_header_buffers' size to 128k in the nginx config file

Other important facts:

  • The GET request to qapairs?jwt=ey.. never reaches the Node.js app
  • There is no mention of the request in /var/log/nginx/error.log
  • The failing requests shown in the /var/log/nginx/access.log are as follows:

    89.15.159.19 - - [08/May/2017:14:25:53 +0000] "-" 400 0 "-" "-"
    89.15.159.19 - - [08/May/2017:14:25:53 +0000] "-" 400 0 "-" "-"
    

Please point me toward possible causes.


The Chrome dev tools network tab screenshots

  1. After logging in to an account where there are only 7 question-answer pairs

  2. After going to mlab.com, manually adding another question-answer pair to the same account, and then refreshing the page (notice the number of questions is now 8)

  3. After logging in and out of the same account (notice the xhr request to qapairs?jwt=ey... returned a failed status)

/etc/nginx/sites-enabled/default

# HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

# etc

# HTTPS — proxy all requests to the Node app
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name subdomain.example.com;

    # Use the Let's Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    # Increase allowed URL length     
    large_client_header_buffers 4 128k;

    # Increase max body size
    client_max_body_size 20M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://subdomain.example.com:3001/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}

qa-pairs.service.ts

The error is being caught here in the getQAPairs function. The callback in the catch operator receives a ProgressEvent object with a type property of error and an eventPhase of 2.

import { Injectable, EventEmitter } from '@angular/core'
import { Http } from '@angular/http'
import { Observable } from 'rxjs/Observable'
import 'rxjs/add/operator/map'
import 'rxjs/add/operator/catch'
import 'rxjs/add/observable/throw'

@Injectable()
export class QaPairsService {
  /* etc */

  getQAPairs () {
    const jwt = localStorage.getItem('jwt') ? `?jwt=${localStorage.getItem('jwt')}` : ''

    return this.http.get(this.qapairsUrl + jwt)
      .map(response => {
        this.qapairs = response.json().map((qapair: IQAPair) => new QAPair(qapair))
        this.qapairsChanged.emit(this.qapairs)
        return this.qapairs
      })
      .catch(
        (error: any) => {
          // A network-level failure (e.g. net::ERR_CONNECTION_CLOSED) delivers a
          // ProgressEvent here, which has no json() method, so only parse real
          // HTTP error responses
          error = typeof error.json === 'function' ? error.json() : error
          this.errorsService.handleError(error)
          return Observable.throw(error)
        }
      )
  }

  /* etc */
}

Solution

    /etc/nginx/sites-enabled/default

    # other code here
    
    server {
    
       # other code here
    
       # Increase http2 max sizes
       http2_max_field_size 64k;
       http2_max_header_size 64k;
    
    }

    The reason I found this so hard to debug was that there was

    no mention of the request in /var/log/nginx/error.log

    and I didn't realize that nginx can be made more verbose in its logging (duh).

    So after changing /etc/nginx/sites-enabled/default to include

    server { 
        error_log /var/log/nginx/error.log info;
    }
    

    I saw

    2017/05/08 16:17:04 [info] 3037#3037: *9 client exceeded http2_max_field_size limit while processing HTTP/2 connection, client: 89.15.159.19, server: 0.0.0.0:443
    

    which was the error message I needed.
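
    For anyone else hitting this, a quick client-side check is to log how long the request URL actually gets before it is sent. The helper below is only a sketch and not part of the original service: logRequestPathSize is a hypothetical name, and it assumes the JWT stored in localStorage grows with the number of question-answer pairs, which is what the 7-vs-8 behaviour above suggests.

    // Hypothetical debugging helper: measures how large the qapairs request
    // URL becomes once the JWT is appended as a query string. nginx's
    // http2_max_field_size defaulted to 4k, so a :path pseudo-header longer
    // than that is rejected by nginx before the request ever reaches Node.
    export function logRequestPathSize (qapairsUrl: string): number {
      const token = localStorage.getItem('jwt') || ''
      const path = `${qapairsUrl}?jwt=${token}`
      console.log(`qapairs request path is ${path.length} bytes`)
      return path.length
    }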