I am trying to quickly set up a database for some friends to connect to (security, efficiency, and durability are not huge concerns), but I cannot determine what is causing my connection attempts to time out. Pretty much this unanswered question.
PostgreSQL and PGAdmin are created via docker-compose on (let's say) 192.168.1.100.
Everything starts fine. I confirmed that listen_addresses = '*' in the pg conf. The firewall allows 5432 and 5050 (pgadmin) from my local network, which is where my nginx server will pick it up:
5050 ALLOW 192.168.1.0/24 # pgadmin
5432 ALLOW 192.168.1.0/24 # postgres
The nginx server proxies a subdomain to the original server's IP and port, like so:
server {
    listen 80;
    server_name pg.mydomain.net;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name pg.mydomain.net;
    proxy_read_timeout 600s;

    location / {
        proxy_pass http://192.168.1.100:5432;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    ssl_certificate /etc/letsencrypt/live/mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem;
}
(this block is boilerplate I use for all my quick projects; it works 90% of the time)
Then on Cloudflare I add a CNAME entry for pg.mydomain.net (as well as one for pgadmin.mydomain.net, which works flawlessly).
But the connection string postgresql://myuser:mypw@pg.mydomain.net:5433/mydb does not work the way it does when I connect to the local IP address directly. I'm thinking the problem lies with nginx. I'm hoping for a solution that lets my users construct a similarly simple connection string in a Jupyter Notebook.
As was hinted at in the comments but not stated explicitly: traffic to a Postgres server is not HTTP/HTTPS, so it is probably not able to go through nginx as configured above.
The reason I (and anyone else) would want it to pass through a reverse proxy is to limit the part of my network exposed to the internet to a single server. That had worked for all kinds of web apps served from docker containers (including PGAdmin), but the assumption does not hold for traffic headed to a database.
I say probably not able to go through nginx because this appears to be exactly what the nginx stream module is for. But since that would have required recompiling and reinstalling nginx just to test a stream{} block, I did not invest the effort.
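For reference, a minimal sketch of what that might look like (I have not tested this; it assumes nginx was built with the stream module or has it loaded as a dynamic module, and the stream{} block sits at the top level of nginx.conf, alongside http{}, not inside it):

stream {
    server {
        listen 5432;                     # accept raw TCP on the proxy host
        proxy_pass 192.168.1.100:5432;   # forward the stream straight to Postgres
    }
}

Clients would then point at the proxy host on 5432, and nginx would pass the TCP stream through without trying to speak HTTP.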
What ultimately worked: my users can connect via any database tool using hostname pg.mydomain.net, port 5432, and their username + password.
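From a Jupyter Notebook that comes down to the same simple connection string I was hoping for. A minimal sketch, assuming SQLAlchemy with the psycopg2 driver is installed (credentials and database name are placeholders):

from sqlalchemy import create_engine, text

# placeholder credentials; substitute the real user, password, and database
engine = create_engine("postgresql+psycopg2://myuser:mypw@pg.mydomain.net:5432/mydb")

with engine.connect() as conn:
    # simple round-trip query to confirm the connection works
    print(conn.execute(text("SELECT version()")).scalar())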