I need to mutex several processes running Python on a Linux host. The processes are not spawned in a way I control (to be clear, they are my code), so I cannot use `multiprocessing.Lock`, at least as I understand it. The resource being synchronized is a series of reads/writes to two separate internal services, which are old, stateful, not designed for concurrent/transactional access, and out of scope to modify.
A couple of approaches I'm familiar with but have rejected so far:

- `shmget` / `pthread_mutex_lock` (e.g. create a pthread mutex under a well-known string name, in shared memory provided by the OS). I'm hoping not to have to add a `ctypes` wrapper for this (and ideally not have any low-level constructs visible at all in this high-level app).
- Lock-file libraries such as `fasteners` would work, but requiring any particular filesystem access is awkward (the library/approach could use it robustly under the hood, but ideally my client code would be abstracted from that).

Is there a preferred way to accomplish this in Python (under Linux; bonus points for cross-platform)?
Options for synchronizing non-child processes:
Use a remote manager. I'm not super familiar with `multiprocessing` managers, but the docs have at least a simple example.
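A minimal sketch of the manager approach: one process runs a `BaseManager` server holding a single `threading.Lock`, and unrelated processes connect by address and acquire it through a proxy. The names (`LockManager`, `get_remote_lock`), the port, and the authkey are my own choices for illustration, not anything mandated by the library.

```python
import threading
from multiprocessing.managers import BaseManager

# A single lock living in the server process; every client gets a proxy to it.
_lock = threading.Lock()

class LockManager(BaseManager):
    pass

LockManager.register("get_lock", callable=lambda: _lock)

def run_server(address=("127.0.0.1", 50000), authkey=b"change-me"):
    """Run in one long-lived process; blocks forever serving lock requests."""
    server = LockManager(address=address, authkey=authkey).get_server()
    server.serve_forever()

def get_remote_lock(address=("127.0.0.1", 50000), authkey=b"change-me"):
    """Call from any other process; returns a proxy whose acquire()/release()
    are executed in the server process, so they synchronize all clients."""
    mgr = LockManager(address=address, authkey=authkey)
    mgr.connect()
    return mgr.get_lock()
```

Because the lock lives in the server, this works for processes that share no ancestry at all, and it is cross-platform; the cost is that you must keep one server process running.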
Create a simple server with your own protocol (rather than a manager): something like a socket server on the loopback address, bouncing simple messages around.
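One way to keep such a protocol trivial is to make "holding a connection" mean "holding the lock": the server accepts one client at a time, sends a grant message, and moves on to the next waiter only when that client disconnects. This is a sketch under my own (hypothetical) protocol, not an established library; a nice side effect is that a crashed client releases the lock automatically.

```python
import socket
import threading

def run_lock_server(sock):
    """Grant the lock to one connection at a time: a client holds the lock
    while its connection is open and releases it by disconnecting."""
    while True:
        conn, _ = sock.accept()
        conn.sendall(b"granted")        # this client now holds the lock
        try:
            while conn.recv(1024):      # block until the client disconnects
                pass
        except OSError:
            pass                        # a dead client drops the lock too
        conn.close()                    # released; grant the next waiter

class SocketLock:
    """Client side: acquire by connecting and waiting for the grant."""
    def __init__(self, address):
        self.address = address
        self._sock = None

    def acquire(self):
        self._sock = socket.create_connection(self.address)
        buf = b""
        while buf != b"granted":        # queued behind earlier holders
            chunk = self._sock.recv(16)
            if not chunk:
                raise ConnectionError("lock server went away")
            buf += chunk
        return True

    def release(self):
        self._sock.close()
        self._sock = None
```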
Use the filesystem: https://pypi.org/project/filelock/
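For reference, this is roughly what such libraries do on Unix under the hood: an exclusive `flock()` on a well-known path. A minimal stdlib sketch (the class name and path are mine); the kernel drops the lock automatically if the holder dies, so there is no stale-lock cleanup.

```python
import fcntl
import os

class FileMutex:
    """Cross-process mutex via an exclusive flock() on a well-known path."""

    def __init__(self, path):
        self.path = path
        self._fd = None

    def __enter__(self):
        self._fd = os.open(self.path, os.O_CREAT | os.O_RDWR, 0o644)
        fcntl.flock(self._fd, fcntl.LOCK_EX)   # blocks until we own the lock
        return self

    def __exit__(self, *exc):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)
```

In practice you'd use `filelock` itself (it is also cross-platform), but the sketch shows why the filesystem dependency exists.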
On POSIX-compliant systems, there's a rather straightforward wrapper for IPC constructs: posix-ipc. I also found a wrapper for Windows semaphores, but it's not quite as simple (though also not difficult, per se). In both cases your program would use a well-known string name to access/create the mutex, and in both cases care/error checking is needed to handle creation of the mutex properly (see things like the `O_CREX` flag...)