I am developing an application written in Python 3, composed of a Python library/package (which contains the core functionality) and a Python application which provides a CLI shell and handles user commands.
In addition, the functionality contained within the Python package must be exposed to existing GUI applications written in C# (using the Microsoft .NET framework).
I've done a fair bit of research into how this might be done and have come up with a few potential solutions.
Note that I am not well versed in C# and will be doing the majority of the development on Linux.
I'm really looking for feedback on which option will provide the best balance between a clean interface to my library and low latency/good performance (emphasis on the latter).
~ 10 [ms] for SuT-stability? Thanks for the details added about a rather wide range of latency ceilings, ~ 10 .. 100 [ms], plus:
…this is actually replacing something that was previously implemented in C. The idea is that if the interface layer of the library and the CLI are implemented in Python, it will be easier for users to build off the core functionality for their use case. Some of the more demanding control loops may have to be implemented as a static C or Rust library which we would call into from Python. In any case, the top layer is still implemented in Python, which will have to interface with C#.
( = the most important takeaway from here: the need to understand both the Costs of the wished-to-have ease of user-extensions & of refactoring the architecture, and Who pays these Costs )
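For the quoted plan of keeping the demanding control loops in a compiled C or Rust core called from Python, a minimal ctypes sketch may show the shape of that boundary - the library name liblooplib.so and the exported function step_loop() are purely hypothetical placeholders, not anything from your codebase:

    # Minimal sketch: Python top layer calling a compiled C/Rust core via ctypes.
    # Assumes a hypothetical shared library exporting:
    #     double step_loop( double setpoint );   /* one control-loop iteration */
    import ctypes

    lib = ctypes.CDLL( "./liblooplib.so" )       # a Rust cdylib loads the same way

    lib.step_loop.argtypes = [ctypes.c_double]
    lib.step_loop.restype  =  ctypes.c_double

    def step_loop( setpoint: float ) -> float:
        """A thin Python wrapper users can build on without touching the core."""
        return lib.step_loop( setpoint )

    if __name__ == "__main__":
        print( step_loop( 1.0 ) )

Note that each such crossing of the Python/C boundary has a measurable overhead in the time-domain - which is exactly what the remarks below are about.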
For this to be done safely & professionally, you will most probably want to avoid repeating the common errors of uninformed decisions. The general remarks below come from heaps of first-hand experience with crafting a system with a control loop under ~ 80 [us].
Map your control system's landscape - both its internal eco-system ( resources ) and its exo-system ( interactions with the outer world ). Without a due understanding of one's tools, no one can decide about The Right-enough Architecture.
Understanding the landscape of devices in a latency-motivated design requires us to first know each of them - to read + test + benchmark it, including its jitter/wander envelope(s) under the (over)loaded conditions of the System-under-Test ( SuT ). Not knowing this leaves but a blind, facts-unsupported belief that our SuT will never ever headbang into the wall of reality - a belief which will prove itself wrong, typically at the least pleasant moment.
By then it is irreversibly wrong, and bad practice, as all the costs accrued so far have already been burnt...
Knowing & testing is a core step before sketching the architecture - and details matter. ( Ref.: how much does one lose in h2d/d2h latencies [us]? Why are these principal costs so weakly reported? Does that mean those costs do not exist? No. They do exist, and your control loops will pay them each and every time... so better to know about all such hidden costs, paid in the time-domain, well beforehand - before the Architecture gets designed and drafted. )
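As a concrete starting point for the read + test + benchmark step, here is a minimal sketch of such a latency/jitter probe - step() is a hypothetical placeholder for whatever cost you need to measure ( one control-loop iteration, an IPC hop, an h2d/d2h copy ), and the envelope only becomes meaningful once you re-run it under (over)loaded conditions:

    # Minimal sketch: per-call latency probe reporting the distribution tails,
    # which is where a control loop actually gets hurt, not at the mean.
    import time

    def step():
        """A hypothetical placeholder for the operation under test."""
        pass

    N = 100_000
    samples_us = []
    for _ in range( N ):
        t0 = time.perf_counter_ns()
        step()
        samples_us.append( ( time.perf_counter_ns() - t0 ) / 1_000 )

    samples_us.sort()
    print( f"median {samples_us[N // 2]:9.3f} [us]" )
    print( f"p99    {samples_us[int( N * 0.99 )]:9.3f} [us]" )
    print( f"max    {samples_us[-1]:9.3f} [us]  <- the jitter envelope that matters" )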
Learn from the NASA Apollo mission design:
- it was deeply distributed, and
- proper engineering helped to reach the Moon,
- it saved both the national pride and the lives of those first, and so far the only, extraterrestrial travellers
( credits to Ms. Margaret HAMILTON's wisdom in defining her design rules and in changing minds about the proper engineering of the many control-loop systems' coordination strategies )
Either ZeroMQ ( zmq, being a mature, composable, well-scaling architecture for principally distributed many-to-many behaviours, built atop a set of a few trivial Scalable Formal Communication Pattern archetypes ) or its younger and lighter-weight sister nanomsg, co-fathered by Martin SUSTRIK, may help a lot to compose a smart macro-system, in which individual components' strengths ( or monopolies, having no substitute ) get interconnected into a stable, priority-aware, still-within-latency-thresholds macro-system - for the cases where one cannot in principle ( or does not want to, for other reasons: economy of costs, time-to-market and legal constraints being the first ones at hand ) design a monolithic all-in-one system.
While at first look this may sound like complicating the problem, one may soon realise that it serves the very opposite - not to mention the added benefits of such tools' independent evolution and further extensions.
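To make this concrete for your Python-to-C# boundary: a minimal sketch, assuming a pyzmq REP socket that exposes one hypothetical function compute() from your library ( the function name and the tcp://*:5555 endpoint are illustrative only ); the C# GUI can connect a matching RequestSocket from NetMQ, a native C# implementation of the ZeroMQ protocol, to the same endpoint:

    # Minimal sketch: expose the Python library over a ZeroMQ REP socket.
    import json
    import zmq

    def compute( x ):
        """A hypothetical stand-in for a call into your core library."""
        return x * 2

    ctx  = zmq.Context()
    sock = ctx.socket( zmq.REP )
    sock.bind( "tcp://*:5555" )        # a C# NetMQ RequestSocket connects here

    while True:
        request = json.loads( sock.recv() )          # e.g. { "arg": 21 }
        reply   = { "result": compute( request["arg"] ) }
        sock.send_string( json.dumps( reply ) )

REQ/REP is just the simplest archetype to start from; PUB/SUB or PUSH/PULL fit streaming telemetry better, and the same sockets can later move between ipc:// and tcp:// transports by changing only the address string.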
My system was in a similar dilemma - C# not being a way for me, not even for a second ( a closed-source app dependency was too expensive, if not fatal, for our success ).
Is there any need to add more reasons for going rather distributed and independent of the original vendor lock-in?
Having chosen - with but sweat, tears and blood - to start with ZeroMQ in the days of its mature v2.x, I regret not a single hour of having done so, and cannot imagine having met all of the above without it.