I am trying to build a GUI for some existing Python code using Electron. The data flow is actually straightforward: the user interacts with the Electron app, which sends a request to the Python API; the API processes the request and sends back a reply.
So far, so good. I have read various threads and blog posts:
But in all three solutions, I get stuck at the same point: I have to make the requests/replies asynchronous, because processing a request can take some time, and further requests may arrive in the meantime. This looks like a very common pattern to me, but I found nothing on SO; maybe I just don't know what exactly I am looking for.
Frontend                               Backend
     |                                    |
REQ1 |----------------------------------->| Process REQ1 ---.
     |                                    |                 |
REQ2 |----------------------------------->| Process REQ2 ---|---.
     |                                    |                 |   |
REP1 |<-----------------------------------| REPLY1 <--------'   |
     |                                    |                     |
REP2 |<-----------------------------------| REPLY2 <------------'
     |                                    |
The most flexible solution seems to me to be option 3, ZeroMQ, but on the website and in the Python docs I found only minimal working examples, in which both send and receive are blocking.
Could anybody give me a hint?
If you're thinking of using ZeroMQ, you are entering into the world of Actor model programming. In actor model programming, sending a message happens independently of receiving that message (the two activities are asynchronous).
What ZeroMQ means by Blocking
When ZeroMQ talks about a send "blocking", what that means is that the internal buffer ZeroMQ uses to queue up messages prior to transmission is full, so it blocks the sending application until there is space available in this queue. The thing that empties the queue is the successful transfer of earlier messages to the receiver, which has a receive buffer that has to be emptied by the receiving application. The thing that actually transfers the messages is the management thread(s) belonging to the ZeroMQ context.
This management thread is the crucial part; it runs independently of your own application threads, and that is what makes the communications between sender and receiver asynchronous.
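To make that buffering behaviour concrete, here is a small sketch in Python using pyzmq (my assumption, since the question's backend is Python; the socket type, port, and high-water-mark value are all illustrative). It shrinks the send buffer via ZMQ_SNDHWM and connects a PUSH socket to an endpoint nobody is listening on, so sends queue up until the buffer fills; a non-blocking send then raises zmq.Again, which is the non-blocking counterpart of a send that would block:

```python
import zmq

ctx = zmq.Context.instance()

# PUSH socket with a tiny send buffer (high-water mark).
push = ctx.socket(zmq.PUSH)
push.set(zmq.SNDHWM, 1)
# Nobody is listening on this port; ZeroMQ still creates the outgoing
# queue and keeps retrying the connection in the background.
push.connect("tcp://127.0.0.1:9999")

sent = 0
try:
    for _ in range(1000):
        # NOBLOCK turns "this send would block" into a zmq.Again exception.
        push.send(b"message", flags=zmq.NOBLOCK)
        sent += 1
except zmq.Again:
    pass

print(f"queued {sent} message(s), then the send buffer was full")
push.close(linger=0)
```

With a blocking send instead of NOBLOCK, the application would simply stall at that point until the management thread delivered some of the queued messages.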
What you likely want is to use ZeroMQ's reactor, zmq_poll(). Typically in actor model programming you have a loop, and at the top of it is a call to the reactor (zmq_poll() in this case). zmq_poll() tells you when something has happened; here you'd primarily be interested in it telling you that a message has arrived. Typically you'd then read that message, process it (which may involve sending out other ZeroMQ messages), and loop back to zmq_poll().
Backend
So your backend would be something like:
while (forever)
{
    zmq_poll(list of input sockets)       // allows serving more than one socket
    zmq_recv(socket that has a message ready to read)  // will always succeed immediately, because zmq_poll() told us a message was waiting
    decode req message
    generate reply message
    zmq_send(reply to original requester) // socket should be in blocking mode to ensure that messages don't get lost if something is unexpectedly running slowly
}
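In Python that loop could be sketched with pyzmq's zmq.Poller, the wrapper around zmq_poll(). The inproc endpoints, the fixed iteration count, and the front-end thread exercising the loop are my assumptions, added only so the sketch is self-contained and runs to completion; a real backend would loop forever and bind to whatever transport the front end uses:

```python
import threading
import zmq

ctx = zmq.Context.instance()

def backend(total_requests):
    # Two REP sockets served by one loop via zmq.Poller (zmq_poll()).
    sock_a = ctx.socket(zmq.REP)
    sock_a.bind("inproc://frontend-a")
    sock_b = ctx.socket(zmq.REP)
    sock_b.bind("inproc://frontend-b")

    poller = zmq.Poller()
    poller.register(sock_a, zmq.POLLIN)
    poller.register(sock_b, zmq.POLLIN)

    handled = 0
    while handled < total_requests:         # "while (forever)" in production
        for sock, _event in poller.poll():  # blocks until a message arrives
            request = sock.recv_string()    # succeeds immediately: poll() said so
            sock.send_string("reply to " + request)
            handled += 1
    sock_a.close()
    sock_b.close()

server = threading.Thread(target=backend, args=(2,))
server.start()

# Exercise the loop: one request per socket, standing in for two front ends.
replies = []
for endpoint, msg in [("inproc://frontend-a", "REQ1"),
                      ("inproc://frontend-b", "REQ2")]:
    req = ctx.socket(zmq.REQ)
    req.connect(endpoint)
    req.send_string(msg)
    replies.append(req.recv_string())
    req.close()

server.join()
print(replies)   # ['reply to REQ1', 'reply to REQ2']
```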
If you don't need to serve more than one front end, it's simpler:
while (forever)
{
    zmq_recv(req)    // socket should be in blocking mode
    decode req message
    generate reply message
    zmq_send(reply)  // socket should also be in blocking mode to ensure that messages don't get lost if something is unexpectedly running slowly
}
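A Python sketch of that simpler loop, again assuming pyzmq. The "processing" here is just string concatenation, and the fixed iteration count plus the inproc front-end thread exist only so the example terminates:

```python
import threading
import zmq

ctx = zmq.Context.instance()

def backend(n_requests):
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://single-frontend")
    for _ in range(n_requests):        # "while (forever)" in production
        request = rep.recv_string()    # blocking receive
        reply = "reply to " + request  # decode req + generate reply
        rep.send_string(reply)         # blocking send
    rep.close()

server = threading.Thread(target=backend, args=(1,))
server.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://single-frontend")
req.send_string("REQ1")
answer = req.recv_string()
req.close()
server.join()
print(answer)   # reply to REQ1
```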
Frontend
Your front end will be different. Basically, you'll need the Electron event loop handler to take over the role of zmq_poll(); a build of ZeroMQ for use within Electron will have taken care of that. It will come down to GUI event callbacks sending ZeroMQ messages. You will also have to write a callback for Electron to run when a message arrives on the socket from the backend. There will be no blocking in the front end between sending and receiving a message.
Timing
This means that the timing diagram you've drawn is wrong. The front end can send out as many requests as it wants, but there's no timing alignment between those requests departing and arriving in the backend (though assuming everything is running smoothly, the first one will arrive pretty much straight away). Having sent a request or requests, the front end simply returns to doing whatever it wants (which, for a User Interface, is often nothing but the event loop manager waiting for an event).
The backend will be in a loop of read/process/reply, read/process/reply, handling the requests one at a time. Again, there is no timing alignment between those replies departing and subsequently arriving in the front end. When a reply does arrive back in the front end, it wakes up and deals with it.
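To see that lack of lockstep in code: a strict REQ socket would actually forbid sending REQ2 before REP1 has arrived, so a front end that fires off requests freely would typically use a DEALER socket talking to a ROUTER on the backend. The following pyzmq sketch of that pattern is my assumption, not something fixed by the discussion above; the inproc endpoint and thread are there only to make it self-contained:

```python
import threading
import zmq

ctx = zmq.Context.instance()

def backend(n_requests):
    # ROUTER prefixes each incoming message with the sender's identity,
    # so replies can be routed back to the right peer.
    router = ctx.socket(zmq.ROUTER)
    router.bind("inproc://api")
    for _ in range(n_requests):
        identity, payload = router.recv_multipart()
        router.send_multipart([identity, b"reply to " + payload])
    router.close()

server = threading.Thread(target=backend, args=(2,))
server.start()

dealer = ctx.socket(zmq.DEALER)
dealer.connect("inproc://api")
# Both requests go out before any reply is read -- no lockstep.
dealer.send(b"REQ1")
dealer.send(b"REQ2")
replies = [dealer.recv() for _ in range(2)]
dealer.close()
server.join()
print(replies)   # [b'reply to REQ1', b'reply to REQ2']
```

In the real application the two dealer.recv() calls would not be called back-to-back; the Electron message-arrival callback would play that role.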