I'm trying to find or implement a simple solution that can sequentially queue up Linux shell commands, so that they are executed one at a time. Here are the criteria:
- The queue must execute the commands one at a time, i.e. no two commands can run at the same time.
- I don't have the list of commands ahead of time; they will come in from web requests that my web server receives. That means the queue could be empty for a long time, or 10 requests could arrive at the same time.
- My web server can only do system calls to the shell, so this program/solution needs to be callable from the command line.
- I only have one machine, so it can't and doesn't need to farm out the work to multiple machines.
Originally I thought the at command could do what I want, but it doesn't execute the commands sequentially.
I'm thinking of implementing my own solution in Python with these parts (a rough sketch follows the list):
- Have a dedicated directory with a lock file
- Queued commands are stored as individual files with the filename containing an incrementing sequence ID or timestamp or something similar, which I'll call "command files"
- Write a Python script that uses the fcntl module to lock the lock file, ensuring only one instance of the script is running
- The script will watch the directory for any files and execute the shell commands in the files in the order of the filename
- When the directory has no more "command files", the script will unlock the lock file and exit
- When my web server wants to enqueue jobs, it will add a new "command file" and call my python script
- The Python script will check whether another instance of itself is running. If so, it exits, letting the other instance handle the newly queued "command file". If not, it locks the lock file and starts executing the "command files" in order
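To make this concrete, here is a rough sketch of the runner script I have in mind (the queue directory, lock file name, and so on are just placeholders):

import fcntl
import os
import subprocess
import sys

QUEUE_DIR = "/var/spool/cmdqueue"            # directory holding the "command files"
LOCK_FILE = os.path.join(QUEUE_DIR, ".lock")

lock = open(LOCK_FILE, "w")
try:
    # Non-blocking exclusive lock: raises if another instance already holds it.
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)  # another instance is draining the queue; let it handle the new file

while True:
    # Command files are named with an incrementing ID/timestamp,
    # so sorting the directory listing gives the execution order.
    pending = sorted(name for name in os.listdir(QUEUE_DIR) if name != ".lock")
    if not pending:
        break  # queue is empty; the lock is released when the process exits
    path = os.path.join(QUEUE_DIR, pending[0])
    with open(path) as f:
        command = f.read().strip()
    subprocess.call(command, shell=True)     # run the queued shell command
    os.remove(path)                          # delete the command file once it has run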
Does this sound like it'll work? The only race condition I don't know how to handle is this: the first instance of the script checks the directory and sees that it's empty, and before it unlocks the lock file, a new command is queued and a new instance of the script is invoked. That new instance exits when it sees the lock file is locked, and then the original instance unlocks the file and exits, leaving the new "command file" unprocessed.
Is there something out there that already does this, so I don't have to implement it myself?
Use a named pipe, aka FIFO:
mkfifo /tmp/shellpipe
Start a shell process whose input comes from the pipe:
/bin/sh < /tmp/shellpipe
When the web server wants to execute a command, it writes it to the pipe.
char cmdbuf[512];  /* must be large enough for the command plus the echo/redirect wrapper */
snprintf(cmdbuf, sizeof cmdbuf, "echo '%s' > /tmp/shellpipe", command);
system(cmdbuf);
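If your server-side code is Python rather than C, the same write can be done directly; here's a rough sketch with a hypothetical enqueue helper (assuming the pipe was created at /tmp/shellpipe as above):

def enqueue(command):
    # Opening the FIFO for writing blocks until the reading shell has opened it.
    with open("/tmp/shellpipe", "w") as pipe:
        pipe.write(command + "\n")   # the shell reads and runs one command per line

enqueue("echo hello >> /tmp/queue.log")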