I want to enable relatively long URLs to work on my site.
In Python, it works pretty well:
import requests

base_url = 'https://myurl.com'
client = requests.Session()
gs = ['FAM20558-i1-1.1']
for i in [100, 1000, 1100]:
    r = client.get(url=f'{base_url}/api/validate-genomes', params={'genomes[]': gs * i})
    print(i, r.text)
Output:
100 {"success": true}
1000 {"success": true}
1100 <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.1</center>
</body>
</html>
So it works fine up to i=1000, which is all I need.
For i=300, the URL is 9071 characters long (9120 bytes according to sys.getsizeof). It looks like this: https://myurl.com/api/validate-genomes/?genomes%5B%5D=FAM20558-i1-1.1&genomes%5B%5D=FAM20558-i1-1.1&...
But when I try to curl that URL or paste it into the browser, it does not work, and AJAX requests of this length fail as well. Why is that, and how can I fix it? (Requests with i=100 always work.)
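The URL length above can be checked without actually hitting the server, by preparing the request locally. A minimal sketch with requests, using the same (hypothetical) base URL as the example:

```python
import requests

# Build the same GET request as above, but only prepare it locally
# instead of sending it, so the final encoded URL can be inspected.
req = requests.Request(
    'GET',
    'https://myurl.com/api/validate-genomes',
    params={'genomes[]': ['FAM20558-i1-1.1'] * 300},
)
url = req.prepare().url
print(len(url))  # roughly 9000 characters for 300 repeated parameters
```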
curl output (curl --http2 -v $URL):
> Host: myurl.com
> user-agent: curl/7.71.1
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.3 (IN), TLS alert, close notify (256):
* Empty reply from server
* Closing connection 0
* TLSv1.3 (OUT), TLS alert, close notify (256):
curl: (52) Empty reply from server
In nginx access.log, I see:
<MY IP> - - [04/Feb/2021:11:06:58 +0100] "-" 000 0 "-" "-" "-"
No change in nginx error.log.
The relevant nginx config (not sure it matters):
upstream django {
    server unix:///path/to/socket.sock;
}
server {
    listen 443 ssl http2 default_server;
    client_max_body_size 10M;
    uwsgi_buffer_size 128k;
    uwsgi_buffers 12 128k;
    uwsgi_busy_buffers_size 256k;
    client_header_buffer_size 5120k;
    large_client_header_buffers 16 5120k;
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
}
EDIT: I understand that in this case, POST requests would make more sense. But I want long URLs elsewhere, and this endpoint is just a convenient way to demonstrate the problem.
When I specified --http1.1 in the curl request, it worked! So the problem was with HTTP/2. I found the solution here: https://phabricator.wikimedia.org/T209590
I had to increase http2_max_field_size and http2_max_header_size in my nginx config.
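For reference, a sketch of the two directives added inside the server block. The 16k/32k values here are illustrative, not the exact ones from my config; they just need to comfortably exceed the ~9 kB URL (note that these directives belong to ngx_http_v2_module and were made obsolete in later nginx versions, where large_client_header_buffers applies to HTTP/2 as well; this server runs nginx 1.14.1):

```nginx
server {
    listen 443 ssl http2 default_server;
    # ... existing config as above ...

    # Maximum size of a single HPACK-decompressed header field
    # (name or value). The default of 4k is too small for a ~9 kB
    # request line, so HTTP/2 requests were rejected.
    http2_max_field_size 16k;

    # Maximum total size of all request headers after decompression.
    http2_max_header_size 32k;
}
```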