Tags: php, nginx, load-balancing, fpm

Nginx load balance with proxied php-fpm - try_files


I'm trying to set up multiple php-fpm servers to handle traffic spikes.

Right now I have one machine running Nginx + PHP 7.3 FPM + Redis (6 vCPU, 16 GB RAM) and a second, separate machine running only php-fpm 7.3 with the same extensions.

Everything works, but I need a plan for traffic spikes, and I don't know how to attach this new, isolated machine to work alongside the main server without causing trouble.

I've researched this a lot but haven't found anything specific.

The most valuable links I found are:

https://serverfault.com/questions/744124/file-issue-with-nginx-php-fpm-on-separate-servers

nginx - php-fpm cluster

https://blog.digitalocean.com/horizontally-scaling-php-applications/

Nginx to serve php files from a different server

I've read several docs about it, but the main doubt remains:

Can I simply remove the try_files line from all nginx location blocks and set cgi.fix_pathinfo=0 in php.ini, so I don't have to keep the files on every server?

Or, for security, is it better to mount an NFS filesystem so every .php file exists on all servers, including the dedicated php-fpm servers?

Some people say "create an NFS share and mount it on all proxied php-fpm servers, or use rsync to sync files between servers", and others say "remove try_files and it will work", but I also found an article that says "remove try_files and cross your fingers not to be hacked". :O

What is the better/correct/most secure way to do this? Can removing try_files still get us hacked nowadays?

If I can simply remove try_files, will different locations running different software still work? Say I have WordPress in the root folder and a XenForo install in the /forum/ folder; their try_files directives differ from each other.
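For reference, the NFS option mentioned above can be sketched roughly as follows. These are hypothetical paths and IPs (192.168.1.10 for the origin/nginx server, 192.168.1.20 for the php-fpm machine); a read-only export is usually enough if code deploys only happen on the origin server:

```
# /etc/exports on the origin server (run `exportfs -ra` after editing):
/var/www 192.168.1.20(ro,sync,no_subtree_check)

# /etc/fstab entry on the php-fpm machine, mounting the same path
# so script paths sent by nginx resolve identically on both hosts:
192.168.1.10:/var/www /var/www nfs ro,hard 0 0
```

Mounting the export at the same path on every machine matters because nginx passes the script's absolute path in SCRIPT_FILENAME, and php-fpm must find the file at that exact path.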

The upstream block before server {}:

        upstream backend {
            server unix:/var/run/php/php7.3-fpm.sock weight=100 max_fails=5 fail_timeout=5;
            server unix:/var/run/php/php7.3-fpm-2.sock weight=100 max_fails=5 fail_timeout=5;
            #I want to add 192.168.x.x:9000 here to balance with this origin server
        }
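A remote FPM backend can sit alongside the Unix sockets in the same upstream block; nginx will balance FastCGI requests across all of them. A sketch, with 192.168.1.20 as a placeholder for the scaling machine's internal IP:

```
        upstream backend {
            server unix:/var/run/php/php7.3-fpm.sock weight=100 max_fails=5 fail_timeout=5;
            server unix:/var/run/php/php7.3-fpm-2.sock weight=100 max_fails=5 fail_timeout=5;
            # Remote php-fpm pool on the scaling machine (placeholder IP):
            server 192.168.1.20:9000 weight=100 max_fails=5 fail_timeout=5;
        }
```

With max_fails and fail_timeout set, nginx will temporarily stop routing to the remote backend if it becomes unreachable, so the origin's local sockets keep serving traffic.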

An example of the server blocks:

        location / {
                try_files $uri $uri/ /index.php;
        }

        #AMP
        location /amp/ {
                try_files $uri $uri/ /amp/index.php;
        }

        #forum
        location /forum/ {
                try_files $uri $uri/ /forum/index.php?$uri&$args;
                index index.php index.html;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            #fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
            fastcgi_pass backend;
        }

I also bound php-fpm on the remote server to its internal IP (not 127.0.0.1) and set listen.allowed_clients in the FPM pool config (not php.ini; it's a per-pool directive) to accept the nginx proxy's IP.

I also ran nmap against php-fpm-server-IP:9000 from the origin server, and it reports the port as open.
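For clarity, those two settings live in the FPM pool configuration on the remote machine (e.g. /etc/php/7.3/fpm/pool.d/www.conf); the IPs below are placeholders:

```
; Pool config on the remote php-fpm machine (placeholder IPs):
; bind to the internal IP so nginx can reach it over the network
listen = 192.168.1.20:9000
; only accept FastCGI connections from the nginx origin server
listen.allowed_clients = 192.168.1.10
```

Since FastCGI traffic is unencrypted and unauthenticated beyond this allow-list, this should only ever run on a trusted private network.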

If you know how, or have a link showing how, please share. I have the machines on standby, just waiting to work together. Please help me achieve this goal.


Solution

  • As I had no answers to help, I ended up exporting an NFS share from the origin server to the PHP server just to have the files there, thus keeping try_files. I didn't want to risk removing try_files without knowing the exact security consequences.

    So for now the correct answer for me was to set up NFS and pass the nginx requests to the PHP-FPM scaling machine. Everything went well, without major problems. The only work was changing the listen IP in the FPM pool config, allowing the new internal IP on the NFS server, and adding it to the nginx upstream pool. And of course moving PHP sessions from files to Redis, so logged-in users aren't logged out when requests shift between the origin and scaling servers.
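The session change mentioned above can be sketched as a php.ini (or conf.d drop-in) fragment applied on every PHP machine, assuming the phpredis extension is installed; the Redis host/port are placeholders matching the Redis instance on the origin server:

```
; Store PHP sessions in Redis instead of local files so all
; php-fpm backends share the same session state (placeholder host):
session.save_handler = redis
session.save_path = "tcp://192.168.1.10:6379"
```

With file-based sessions, each backend would only see sessions it created itself; a shared Redis store is what keeps users logged in regardless of which backend handles a given request.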