So my Docker setup is the following: I have an Nginx container that accepts HTTP requests, and another container (my custom image) with php-fpm and my application code. The application code is not on the host, only in the web container.
I want to configure Nginx as a proxy, to get requests and route them to php-fpm.
My Nginx configuration is the following (I've removed the parts that are not relevant here):
upstream phpserver {
    server web:9000;
}

server {
    listen 443 ssl http2;
    server_name app;
    root /app/web;

    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;

    location ~ ^/index\.php(/|$) {
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_read_timeout 160;
        internal;
        http2_push_preload on;
    }
}
And my Docker Compose configuration (again, I've removed some unimportant parts):
nginx:
  ports:
    - 443:443/tcp
    - 80:80/tcp
  image: nginx
  links:
    - web:web
web:
  image: custom_image
  container_name: web
With this configuration I get the following Nginx error: "open() "/app/web" failed (2: No such file or directory)", because Nginx does not have access to that folder (that folder only exists in the web container, where php-fpm runs).
Is there a way I can configure Nginx to route the HTTP requests, even if it does not have access to the application code?
I understand that one way to fix this issue is to mount the application code into the Nginx container as well, but I would like to avoid that if possible: in Swarm mode it wouldn't work if the two containers don't end up on the same host.
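For completeness, here is roughly what that mounting approach would look like in the compose file (just a sketch: volumes_from assumes my custom image declares /app/web as a VOLUME, and it's exactly the kind of option that isn't supported in Swarm mode):

nginx:
  image: nginx
  links:
    - web:web
  volumes_from:
    - web   # mount the web container's volumes (including /app/web) into nginx
web:
  image: custom_image
  container_name: web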
I managed to solve the issue, so I'm posting my own solution below for people with a similar problem.
The solution was to use the 'alias' directive instead of the 'root' directive in the Nginx configuration (again, I've removed the parts that are not relevant here):
upstream phpserver {
    server web:9000;
}

server {
    listen 443 ssl http2;
    server_name app;

    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;

    location ~ ^/index\.php(/|$) {
        # alias (instead of root) sets $document_root to the path
        # as it exists inside the web container
        alias /app/web;
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        internal;
        http2_push_preload on;
    }
}
Now the request is properly routed to the phpserver upstream on port 9000 and handled there by php-fpm. php-fpm knows which script to execute from the SCRIPT_FILENAME parameter, whose $document_root part is set by the 'alias' directive.
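To illustrate how I understand the variables resolve with the config above, for a sample request:

# Request: GET /index.php/foo/bar
#   the location block matches; alias sets $document_root = /app/web
#   fastcgi_split_path_info gives $fastcgi_script_name = /index.php
#                              and $fastcgi_path_info  = /foo/bar
#   so SCRIPT_FILENAME sent to php-fpm = /app/web/index.php
# That path doesn't exist in the Nginx container, but Nginx never opens it;
# php-fpm resolves it inside the web container, where it does exist.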
The remaining problem was how to serve static files. One option was to serve them via php-fpm as well, but from what I've read that's not recommended because of the extra overhead. So my solution was to share all the static files with the Nginx container, so that Nginx has access to them and can serve them directly (a rough compose sketch follows the snippet below). If somebody has a better solution for serving static files in this scenario, please let me know.
# Cache control for static files
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    #access_log on;
    #log_not_found off;
    expires 360d;
}
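For reference, this is roughly how I share the static files with the Nginx container in the compose file (a sketch: ./static is just an assumed host path for the assets, and in Swarm mode the files would have to exist on every node, or be baked into a custom Nginx image with a COPY instead):

nginx:
  ports:
    - 443:443/tcp
    - 80:80/tcp
  image: nginx
  links:
    - web:web
  volumes:
    - ./static:/app/web/static:ro   # assumed location of the static assets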