Tags: php, nginx, memory, wkhtmltopdf

nginx server unresponsive while running memory-intensive tasks (wkhtmltopdf, Imagick)


I am running a DigitalOcean VM that serves my production site.

My whole site hangs while creating a large 100 MB / 100-page PDF, or while compiling several pictures into one large image.

The site runs on PHP 7.0.1 and nginx 1.6.0 on Ubuntu 14.04.

When a user runs a memory-intensive task, the whole site hangs until the task has completed.

This happens in a script that creates and compiles a PDF with Imagick and wkhtmltopdf.

It takes about 2 minutes to run, and no other users can access the site while this task is running.

VM Specs:

Ubuntu 14.04 x64 512 MB Memory / 20 GB Disk / AMS2

VM monitoring peaks while running wkhtmltopdf/Imagick: CPU 50%, memory 99%.

Seems like it could be a memory issue.

How do I prevent one request from making the whole site unresponsive?

Also, can it really be true that a small server can't handle one user generating a large PDF?

And shouldn't a memory/CPU-intensive task still leave room for other users' requests, or do I need to enable parallelisation manually somehow? Maybe 512 MB isn't enough?

Thanks in advance!


Solution

  • A base DigitalOcean 512 MB droplet has a single virtual CPU, so it can only do one thing at a time. If it is spending all of its effort on a single memory-intensive task, everything else that normally happens in the background of the operating system queues up, causing further slowdowns. A job that takes only a matter of seconds on your local multi-core laptop can therefore take a great deal longer.

    Further, if you are running this task inside a PHP web request, that adds another layer of effort and memory that must be accounted for. This helps explain the slowdown, especially if the server has to swap some memory out to disk to get enough working space.
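    As an illustration (the pool name, paths, and values below are assumptions based on a stock Ubuntu PHP-FPM install, not the asker's actual config), capping each request's memory and keeping more than one FPM worker at least means one long render cannot claim all of RAM or block every other request:

    ```ini
    ; /etc/php/7.0/fpm/pool.d/www.conf -- illustrative values, not a tuned config
    [www]
    ; keep two workers so one long PDF render does not
    ; occupy the only process that can answer requests
    pm = static
    pm.max_children = 2

    ; /etc/php/7.0/fpm/php.ini
    ; cap a single request's memory so one render cannot
    ; push the whole 512 MB droplet into swap
    memory_limit = 128M
    ```

    On 512 MB this is a trade-off: two workers plus the render binary may still not fit, which is another argument for moving the heavy work out of the request entirely.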

    If this is a common action, moving the site to a more powerful machine with more memory and more (virtual) CPUs will help by allowing other things to run at the same time. Longer term, moving the process to another server behind some sort of queue, so that it happens somewhere else where it does not affect the main webserver, would also be very useful, and is a typical next step.
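    One minimal way to sketch that queue, assuming the web request only drops a job file into a spool directory and returns immediately (the spool path, job-file format, and `process_jobs` name are all illustrative, not from the original setup):

    ```shell
    #!/bin/sh
    # process_jobs: drain a spool directory of PDF jobs once.
    # Each *.job file holds one line: "<input-url> <output-pdf-path>".
    # SPOOL can be overridden in the environment; the default is illustrative.
    process_jobs() {
      for job in "${SPOOL:-/var/spool/pdf-jobs}"/*.job; do
        [ -e "$job" ] || continue            # spool is empty
        read -r url out < "$job"
        # nice lowers the render's CPU priority so nginx/php-fpm stay responsive
        nice -n 19 wkhtmltopdf "$url" "$out"
        rm -f "$job"
      done
    }
    ```

    A cron entry that calls `process_jobs` every minute then does the rendering in the background, at low priority, while the web request that queued the job has long since returned.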