I have been using stomp.py and stompest for years to communicate with ActiveMQ to great effect, but mostly from standalone Python daemons.
I would now like to use these two libraries from the web server to communicate with the backend, but I am having trouble figuring out how to do this without creating a new connection on every request.
Is there a standard approach to safely handling TCP connections in the webserver? In other languages, some sort of global object at that level would be used for connection pooling.
HTTP is a synchronous request/response protocol: each waiting client consumes server resources (CPU, memory, file descriptors) until it receives a response. This means the web server has to respond quickly, and it should not block on long-running external processes while handling a request.
The solution is to process requests asynchronously. There are two major options:
Use polling.

`POST` pushes a new task to a message queue:

```
POST /api/generate_report
{
    "report_id": 1337
}
```

`GET` checks the MQ (or a database) for a result:

```
GET /api/report?id=1337
{
    "ready": false
}

GET /api/report?id=1337
{
    "ready": true,
    "report": "Lorem ipsum..."
}
```
Asynchronous tasks in the Django ecosystem are usually implemented with Celery, but you can use any MQ directly.
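The polling flow above can be sketched end to end. This is a minimal stand-in, not a real deployment: the MQ and the database are replaced by an in-memory dict plus a background thread, and the function names (`post_generate_report`, `get_report`, `generate_report`) are illustrative, not from any real API:

```python
import threading
import time

# Stand-in for the MQ/result store; in production this would be
# ActiveMQ plus a database, not a plain dict.
results = {}
results_lock = threading.Lock()

def generate_report(report_id):
    # Hypothetical long-running task executed by a background worker.
    time.sleep(0.1)  # simulate slow work
    with results_lock:
        results[report_id] = "Lorem ipsum..."

def post_generate_report(report_id):
    # POST /api/generate_report: enqueue the task and return immediately.
    threading.Thread(target=generate_report, args=(report_id,)).start()
    return {"report_id": report_id}

def get_report(report_id):
    # GET /api/report?id=...: check whether the result is ready yet.
    with results_lock:
        if report_id in results:
            return {"ready": True, "report": results[report_id]}
    return {"ready": False}

post_generate_report(1337)
print(get_report(1337))   # most likely {"ready": False} right away
time.sleep(0.3)
print(get_report(1337))   # {"ready": True, "report": "Lorem ipsum..."}
```

The key point is that the `POST` handler returns immediately; the client then polls `GET` until `ready` flips to true.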
Use WebSockets.
Helpful links:
Edit:
Here is a pseudocode example of how you can reuse a connection to an MQ:

`projectName/appName/services.py`:

```python
import stomp

def create_connection():
    # Port 9998 is just an example; use your broker's STOMP port.
    conn = stomp.Connection([('localhost', 9998)])
    # Older stomp.py versions also required conn.start() before
    # connecting; it was removed in recent releases.
    conn.connect(wait=True)
    return conn

print('This code will be executed only once per worker process')
activemq = create_connection()
```
`projectName/appName/views.py`:

```python
from django.http import HttpResponse

from .services import activemq

def index(request):
    # Reuses the module-level connection instead of opening one per request.
    activemq.send(destination='bar', body='foo')
    return HttpResponse('Success!')
```
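Module-level code runs once per worker process, at import time. If you would rather open the broker connection lazily on first use, and guarantee it is opened at most once even when several request threads race to it, you can wrap the factory in a lock-guarded holder. A sketch of that pattern, with the real `stomp` connection replaced by a stub factory so the idea stands on its own (`LazyConnection` and `fake_factory` are illustrative names, not part of any library):

```python
import threading

class LazyConnection:
    """Create the underlying connection on first use, at most once,
    even when multiple threads race to fetch it."""

    def __init__(self, factory):
        self._factory = factory   # callable that opens the real connection
        self._conn = None
        self._lock = threading.Lock()

    def get(self):
        if self._conn is None:            # fast path once connected
            with self._lock:
                if self._conn is None:    # re-check under the lock
                    self._conn = self._factory()
        return self._conn

# Illustrative stand-in for create_connection() from services.py.
calls = []
def fake_factory():
    calls.append(1)               # record how many times we "connect"
    return object()

activemq = LazyConnection(fake_factory)
```

With this in place, a view would call `activemq.get().send(...)`; the connection is opened on the first request that needs it rather than at import time, and concurrent first requests still produce a single connection.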