
Request Taking too long in Django


I have been trying to build a mathematical model of chemotherapy drug dose scheduling as part of my thesis. First I wrote the logic in MATLAB, then implemented it in Python, and then wrapped it in Django. While the app runs fine on localhost, it fails when I host it, mainly because of the time the whole calculation takes. On localhost it takes around 40 seconds to 1 minute, but I do get the result. When I host it on Railway, it gives up after the first iteration. The model is based on fuzzy logic. Also, I didn't follow software development principles when creating this, so the code is a mess.

I do not know what I should include for reference, so I am adding the GitHub link: FES Tool

The error I get from Railway is "Application failed to respond" (Railway response).

Here are the server logs.

Not Found: /favicon.ico
[2023-08-23 08:12:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:264)
capi_return is NULL
Call-back cb_f_in_lsoda__user__routines failed.
[2023-08-23 08:12:18 +0000] [264] [INFO] Worker exiting (pid: 264)
[2023-08-23 08:12:18 +0000] [1] [ERROR] Worker (pid:264) exited with code 1
[2023-08-23 08:12:18 +0000] [1] [ERROR] Worker (pid:264) exited with code 1.
[2023-08-23 08:12:18 +0000] [327] [INFO] Booting worker with pid: 327

Any suggestion is appreciated.

I tried to host a Django project that takes a lot of time to calculate in the backend, and I failed.


Solution

  • The WWW model of GET request ... processing ... response to browser is intended for use by large numbers of clients at once, and it is not well suited to requests that require large amounts of resources to generate the response. Samdace's answer tells you how to increase the timeout, if you know that (say) a minute will always suffice where 30s won't, and provided you know that the server will never be asked to process more than a small number of such requests at once. Too many at once will essentially kill the entire server (that is a denial-of-service attack waiting to happen, especially if the server is accessible to all, such as a price-comparison site).
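    For the quick-fix route, gunicorn's worker timeout (30 seconds by default, which matches the WORKER TIMEOUT in your logs) can be raised in the start command. A minimal sketch, assuming Railway reads the start command from a Procfile and assuming the project's WSGI module is called fes.wsgi and 120 seconds is enough:

```shell
# Procfile sketch: raise gunicorn's worker timeout so a long calculation
# is not killed after the default 30 seconds. Module name and timeout
# value are assumptions; adjust to your project.
web: gunicorn fes.wsgi --timeout 120 --workers 2
```

    This only buys time; it does not address the concurrency problem described above.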

    But suppose each request is going to take tens of minutes? The user isn't going to be happy waiting for an eventual response, and the response will be lost forever if he gives up and navigates away. So you need to store his request to do something, and let him return later to see whether his processing has finished yet, and collect his results if it has.

    You can write a command-line command that accesses the Django database, in the form of a custom management command. Then you need something, for example a periodic cron job invoking a second management command (or the same one with a --poll argument), which checks whether any requests are outstanding and works through them one at a time (or a suitably small number at a time) by firing up processes that execute the management command, taking a request from one Model instance and returning a result in the form of another Model instance. Python's subprocess.run may be useful here.
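    The poller side of that loop can be sketched with subprocess.run. The management-command name process_request and the helper below are assumptions; to keep the sketch self-contained and runnable, the real manage.py invocation is stubbed out with a trivial subprocess (shown in a comment where the real call would go):

```python
# Sketch of the cron-driven poller. In a real deployment each iteration
# would run the hypothetical management command:
#   ["python", "manage.py", "process_request", str(req_id)]
# Here that call is stubbed with `sys.executable -c` so the sketch runs
# anywhere.
import subprocess
import sys


def work_through(outstanding_ids, limit=3):
    """Process up to `limit` outstanding requests, one subprocess each."""
    exit_codes = {}
    for req_id in outstanding_ids[:limit]:
        completed = subprocess.run(
            [sys.executable, "-c", f"print('processing request {req_id}')"],
            capture_output=True,
            text=True,
        )
        # Record the exit code so failed requests can be marked as such.
        exit_codes[req_id] = completed.returncode
    return exit_codes
```

    Running each request in its own process also means a crash in the numerical code (like the lsoda callback failure in your logs) kills only that request, not the web worker.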

    The user comes back later and accesses his results. Don't forget to keep a ForeignKey to User, so Django can tell his results apart from everybody else's.
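    The bookkeeping can live in two small models. This is a declarative sketch only; every model and field name here is an assumption:

```python
# Sketch of the request/result bookkeeping models (all names are
# assumptions). settings.AUTH_USER_MODEL is Django's recommended way to
# reference the User model.
from django.conf import settings
from django.db import models


class CalculationRequest(models.Model):
    # Ties each request to its owner, so users only ever see their own.
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    parameters = models.JSONField()
    submitted_at = models.DateTimeField(auto_now_add=True)
    processed = models.BooleanField(default=False)


class CalculationResult(models.Model):
    request = models.OneToOneField(CalculationRequest, on_delete=models.CASCADE)
    succeeded = models.BooleanField()
    output = models.JSONField(null=True, blank=True)
    error_message = models.TextField(blank=True)
```

    The poller picks up rows with processed=False, and the CRUD views simply filter both tables by request.user.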

    It's a fair bit of work, but mostly very straightforward CRUD views. The hard stuff is what goes on inside the command which executes the request, which you've mostly already coded.

    Your processing command should have a catch-all exception handler, so that any sort of failure gets recorded as a (failed) result and the request always gets marked as processed; otherwise you can end up in a loop, reprocessing the same failure forever.

    try:
        do_the_hard_work()
    except Exception as exc:  # or a wide range of more specific exceptions
        mark_this_request_failed(exc)
    finally:
        make_results_available_for_user()  # success or failure