Tags: php, laravel, docker, production

Laravel login not working on production with white page showing request data


I suddenly have this weird issue on the live version of my project/website. I didn't work on it for a few weeks, and when I started again yesterday I suddenly had this error. It's really weird and I have no clue what's causing it, but I'm just going to try to explain it.

When I try to log in to the website with the correct credentials, I get a white page that shows what looks like the raw request data: the _token, email and password in plain text. This page is visible for a second or so before it redirects and brings me back to the login page, without logging me in.

[screenshot of the white page]

This is the basic problem. But here are some things that make it confusing and weird.

  • On my own machine it works just fine. I'm using Docker, so my dev and production setups are 99% the same. The only differences are the ports of the containers.
  • It was working a few weeks ago, and now, without me changing a thing, this happens.
  • And maybe the weirdest thing: when I bring the containers down and then back up (docker-compose -f docker-compose.prod.yml up -d --build), it works. I can log in and everything. But after some (unknown) amount of time it stops working and I get this again.

There are no errors or any other feedback. It just doesn't work and shows that white page for a split second. The request does seem to be validated, though: when I enter wrong credentials the same thing happens, but I get the validation message in the form, as expected.

I have absolutely no idea where this might even remotely come from. So if anyone has any clue for a direction, just ask me for the info and I’ll provide it.

I’m on Laravel 8.


UPDATE / FIX ==============

Okay, now that it hasn't failed in a week, I'm fairly confident in saying that I fixed it. Here's what I did to fix it (mostly based on this answer):

  • Found out what container/image had the issue. Turned out to be php-fpm.
  • In the Dockerfile for this container I added the commands from this answer.
  • Added the local IP (127.0.0.1) to the port bindings for all containers that should not be publicly available.
  • Rebooted my server. Before the reboot the fixes seemed to work, but the issue just took longer to return; since the reboot it hasn't failed so far.

I think those were all the changes I did.


Solution

  • Debug.

    I had the same issue for almost three days. It turned out to be a cryptocurrency miner script that later installs a piece of malware called 'kinsing'.

    For anyone using Laravel for an API, the 'symptom' for me was that responses to all my POST requests came back with a Content-Type header of text/html instead of application/json. This, like the login, would work okay for some time. Also, all responses returned a 200 success status code irrespective of what I had in my codebase.
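
    If you are debugging an API, a quick way to spot that symptom is to inspect the response headers of any POST endpoint. A minimal sketch; the URL and payload below are placeholders, not from this project:

      # Hypothetical endpoint; substitute one of your own API routes.
      curl -si -X POST https://your-app.example.com/api/login \
        -H 'Accept: application/json' \
        --data 'email=test@example.com&password=secret' | grep -i '^content-type'
      # Expected from a healthy Laravel API: Content-Type: application/json
      # Symptom described above:             Content-Type: text/html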

    To confirm that this is indeed the same issue for you:

    • Run: $ top  # to check the CPU processes

    You'll notice a suspicious process named kdevtmpfsi or Kinsing using obnoxiously high amounts of CPU. Mine was at 400%.

    You may have to check for a while, as it sometimes disappears. If you find the culprit, great: now you know the problem and might want to see the issue thread.
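
    Because the process sometimes disappears, a one-shot check can be easier to catch it with. A small sketch (run it on the host or inside the suspect container); the bracketed first letters just stop grep from matching its own command line:

      # Look for the miner processes by name; repeat a few times if nothing shows up.
      ps aux | grep -Ei '[k]devtmpfsi|[k]insing'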

    Summary of what worked for me

    As suggested in the issue thread above, these are the steps I took.

    1. Found all instances of the associated files.

      • find / -iname kdevtmpfsi
      • find / -iname kinsing
    2. Created and ran a script to remove and replace all instances of the files, make them non-executable, and allow only root and one other user to edit them. (You can also run the commands directly in your terminal, from within the 'infected' container.) On my server the files were in /var/tmp/kinsing, /tmp/kinsing and /tmp/kdevtmpfsi.

       #!/bin/bash
       # Delete the malware binaries and recreate them as harmless placeholder files
       rm -rf /var/tmp/kinsing /tmp/kinsing /tmp/kdevtmpfsi
       touch /tmp/kdevtmpfsi /var/tmp/kinsing /tmp/kinsing
       echo "everything is good here" > /tmp/kdevtmpfsi
       echo "everything is good here" > /var/tmp/kinsing
       echo "everything is good here" > /tmp/kinsing
       # Restrict /var/tmp to root and reset /tmp to its standard sticky-bit permissions
       chmod go-rwx /var/tmp
       chmod 1777 /tmp
       # Allow cron only for root and one other trusted user
       touch /etc/cron.allow
       echo "root" > /etc/cron.allow
       echo "{other user}" >> /etc/cron.allow
       # Verify the placeholder files are in place
       cat /tmp/kdevtmpfsi && cat /var/tmp/kinsing && cat /tmp/kinsing
      

    Remember to make the script executable (chmod +x) if you go the script route.
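
    If the infected files live inside a container rather than on the host, you can run the same cleanup from the host. A sketch, assuming the script is saved as cleanup.sh and the PHP-FPM container is named app (both names are placeholders):

       # Copy the cleanup script into the suspect container and execute it there
       docker cp cleanup.sh app:/tmp/cleanup.sh
       docker exec app chmod +x /tmp/cleanup.sh
       docker exec app /tmp/cleanup.sh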

    3. Updated all my Docker images (deleted the old ones, then pulled fresh ones). I want to assume you already know how to do this, but there's a rough sketch below.
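
      A rough sketch of that step, assuming the compose file from the question (docker-compose.prod.yml):

        # Stop the stack, remove old (possibly compromised) images, rebuild from fresh base images
        docker-compose -f docker-compose.prod.yml down
        docker image prune -a          # removes every image not used by a running container
        docker-compose -f docker-compose.prod.yml build --pull --no-cache
        docker-compose -f docker-compose.prod.yml up -d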

    4. Closed all my ports to the public and only exposed the web server ports, to prevent future attacks. The ports section of your Laravel app service and db service would then look something like this:

      laravel app service

      ports:
         - "127.0.0.1:8000:8000"
         - "127.0.0.1:9000:9000"
      

      db service

      ports:
        - "127.0.0.1:3306:3306"
      

      This binds the ports so they only accept connections from the host, since 127.0.0.1 is only reachable from the machine/server itself. Port mappings without the localhost binding publish the ports on every interface of your machine/server, which is less than desirable if you have a public IP address or your machine has an IP on a large network, and is likely how the attackers accessed your server. Find more on port binding here.
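
      To verify the bindings after restarting the stack, the published ports should show 127.0.0.1 rather than 0.0.0.0 in front of them. A sketch (container names and ports will differ on your setup):

        # Show the published ports of each running container
        docker ps --format 'table {{.Names}}\t{{.Ports}}'
        # Or check what is actually listening on the host
        ss -tlnp | grep -E ':8000|:9000|:3306'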

    Of course, this depends on the services you want publicly available. The point is not to give the public what they don't need.