So I have set up a web application that is powered by Django. Previously I used a MySQL database as the backend and everything worked smoothly, but then I tried to switch to PostgreSQL.
When I now try to start the server in the shell, it gets stuck at the "Performing system checks..." step. From there it takes a very long time (circa 5-6 minutes), with my computer using a lot of CPU, until the server finally starts without issues.
Finally, if I run "python -v manage.py check", I can see that the process gets stuck at one point for several minutes:
import 'django.db.models.sql.compiler' # <_frozen_importlib_external.SourceFileLoader object at 0x7ff640701890>
However, I work with PyCharm, and when I open the Python console there, I am able to import the libraries without any issues.
I created a minimal example to track down the issue. It can be reproduced by simply importing the file anywhere (e.g. in the project's urls.py file) and then trying to start the server.
ExampleFile.py:
from apolloWebApp.models import DiseaseGeneAssociations
# Load database entries with distinct IDs
def create_network_elements(score_limit):
    distinct_diseases = DiseaseGeneAssociations.objects.values('diseaseid').distinct()  # query for distinct diseases
    distinct_diseases_list = list(distinct_diseases)  # [{'diseaseid': 'C0002395'}, ... ]
    # Iterate over entries and filter database entries
    for disease in distinct_diseases_list:
        associated_DisGeneAss_objs = DiseaseGeneAssociations.objects.filter(score__gte=score_limit,
                                                                            diseaseid=disease["diseaseid"])
        # Transform the result into a list -> commenting this out fixes the problem
        associated_DisGeneAss_objs = list(associated_DisGeneAss_objs)
    # Dummy return to make sure everything is executed
    return associated_DisGeneAss_objs

test = create_network_elements(0.5)
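For reference, the hang can be triggered with a single import, e.g. at the top of urls.py (assuming ExampleFile.py lives in the apolloWebApp package; adjust the path to your layout):

# urls.py -- importing the module is enough, because the
# "test = create_network_elements(0.5)" call at the bottom of
# ExampleFile.py runs at import time
from apolloWebApp import ExampleFile  # noqa: F401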
So far I have figured out that the SQL queries are apparently already executed during the system checks, which is what causes the problem. If I comment out the cast of the filter result, list(associated_DisGeneAss_objs), the check finishes after about 2 seconds.
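This matches Django QuerySets being lazy: filter() only builds the query, and the SQL is sent to the database once the QuerySet is evaluated, e.g. by list(). A minimal sketch of the distinction:

from apolloWebApp.models import DiseaseGeneAssociations

qs = DiseaseGeneAssociations.objects.filter(score__gte=0.5)  # lazy: no SQL is sent yet
rows = list(qs)  # evaluation: this is where the SQL query actually runs

(Iterating over a QuerySet, calling len() on it, or slicing it with a step evaluates it in the same way.)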
So the questions are: why are these queries executed during the system checks at all, and how can I stop the server start-up from hanging on them?
Ok, I found the problem:
The problem was created by Dash (Plotly); in my case I used Dash Cytoscape. The reason is that "All of the callbacks in a Dash app are executed with the initial value of their inputs when the app is first loaded." I think just importing the app into another Python file is enough to trigger that execution. And yes, it really was my database that was too big, which made the queries triggered by the app take so long.
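As an aside, Dash also supports prevent_initial_call=True on a callback to suppress that first automatic execution. A minimal sketch with hypothetical component IDs, not my actual app:

import dash
from dash import dcc, html
from dash.dependencies import Input, Output

from apolloWebApp.ExampleFile import create_network_elements  # hypothetical import path

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Slider(id='score-slider', min=0, max=1, step=0.1, value=0.5),
    html.Div(id='network-summary'),
])

# prevent_initial_call=True keeps Dash from firing this callback with the
# initial slider value at page load, so the heavy database query no longer
# runs as a side effect of simply starting the app.
@app.callback(Output('network-summary', 'children'),
              Input('score-slider', 'value'),
              prevent_initial_call=True)
def update_network(score_limit):
    return str(create_network_elements(score_limit))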
Thus my solution was to break the queries down into more efficient requests.
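For example, the per-disease loop from the question can be collapsed into a single query, with the grouping done in Python (a sketch based only on the fields shown above; add whatever other fields your model has):

from collections import defaultdict

from apolloWebApp.models import DiseaseGeneAssociations

def create_network_elements(score_limit):
    # One query for all associations above the threshold, instead of a
    # separate filter() query per distinct disease id
    rows = (DiseaseGeneAssociations.objects
            .filter(score__gte=score_limit)
            .values('diseaseid', 'score'))
    grouped = defaultdict(list)
    for row in rows:
        grouped[row['diseaseid']].append(row)  # group rows by disease id
    return grouped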
P.S.: I also created a Google Groups entry, which you can find here.