
How can I use python to control multiple concurrent command line programs?


I'd like to use a python program to send different videos to different devices.

My plan is to use ffmpeg to control the video and the destination (I can do this for one destination using os.system), but I'm not sure how to run concurrent ffmpeg commands so that 6 videos are playing at the same time on different devices.

Initially I thought I could use tmux but I can't find a solution for how to control/access different tmux windows within my python program. Am I missing something obvious?


Solution

  • You can use the python subprocess module for that: https://docs.python.org/3/library/subprocess.html

    As a simple example, I will run the Linux utility 'sleep', which does nothing but wait for a given number of seconds. I will run it several times in parallel to show that subprocess really can do this. As a first setup, I do:

    import os
    import subprocess
    import time
    
    n_jobs = 5
    sleeping_time_in_sec = 10
    shell_command_with_arguments = ["sleep", f"{sleeping_time_in_sec}s"]
    

    I'm importing 'subprocess' to launch jobs in parallel, the 'time' module to measure the total running time, and the 'os' module to block the script until all jobs are done. The shell command would look like 'sleep 10s', but for subprocess you put the program and all its arguments in a list.
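    If you already have the command as one shell string, `shlex.split` from the standard library does this conversion for you. As an illustration (the ffmpeg arguments and destination URL here are just a hypothetical example, not a tested command line):

    ```python
    import shlex

    # A hypothetical ffmpeg invocation written as one shell string.
    command = "ffmpeg -re -i video1.mp4 -f mpegts udp://192.168.1.10:1234"

    # shlex.split turns it into the list form that subprocess expects.
    shell_command_with_arguments = shlex.split(command)
    print(shell_command_with_arguments)
    # ['ffmpeg', '-re', '-i', 'video1.mp4', '-f', 'mpegts', 'udp://192.168.1.10:1234']
    ```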

    Submit the jobs like this:

    time_start = time.time()
    jobs = list()
    for counter in range(n_jobs):
        process = subprocess.Popen(shell_command_with_arguments, shell=False)
        jobs.append(process)
    

    Now all 5 jobs, each taking 10 s, have been submitted. To make your script wait until the last process is done, you can add:

    print(f"Waiting for all {n_jobs} processes to finish...")
    for ip, process in enumerate(jobs):
        try:
            os.waitpid(process.pid, 0)
        except ChildProcessError:
            print(f"No more: {ip} : {process.pid}")
        else:
            print(f"DONE: {ip} : {process.pid}")
    
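    A note on portability: `os.waitpid` is POSIX-only. If the script also needs to run on Windows, `Popen.wait()` achieves the same blocking behaviour through the subprocess API itself. A minimal sketch, using short `sleep` jobs so it finishes quickly:

    ```python
    import subprocess

    # Launch a few short sleep jobs (stand-ins for any long-running command).
    jobs = [subprocess.Popen(["sleep", "1"]) for _ in range(3)]

    # Popen.wait() blocks until that process exits and returns its exit code;
    # unlike os.waitpid, it also works on Windows.
    for ip, process in enumerate(jobs):
        returncode = process.wait()
        print(f"DONE: {ip} : {process.pid} (exit code {returncode})")
    ```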

    Finally, I report the total running time:

    time_end = time.time()
    delta_time = time_end - time_start
    print(f"Run {n_jobs} jobs taking {sleeping_time_in_sec} s each in {delta_time} s")
    

    The output of the script looks like:

    Waiting for all 5 processes to finish...
    DONE: 0 : 88988
    DONE: 1 : 88989
    DONE: 2 : 88990
    DONE: 3 : 88991
    DONE: 4 : 88992
    Run 5 jobs taking 10 s each in 10.022365093231201 s
    

    As you can see, if you ran the sleep command 5 times in series, it would take 50 s. But since the script took only a little more than 10 s, you can see that the jobs were indeed running in parallel.
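    Applied to the original question, the same pattern could look like the sketch below. The video files and destination URLs are hypothetical placeholders, and I substitute `sleep` for the actual ffmpeg call so the sketch runs even without ffmpeg installed:

    ```python
    import subprocess
    import time

    # Hypothetical destinations; in the real script these would be the six
    # devices' stream URLs and the matching video files.
    videos_and_devices = [
        ("video1.mp4", "udp://192.168.1.10:1234"),
        ("video2.mp4", "udp://192.168.1.11:1234"),
    ]

    time_start = time.time()
    jobs = []
    for video, device in videos_and_devices:
        # The real command might be something like:
        #   ["ffmpeg", "-re", "-i", video, "-f", "mpegts", device]
        # Here 'sleep' stands in for ffmpeg so the sketch runs anywhere.
        command = ["sleep", "1"]
        jobs.append(subprocess.Popen(command))

    # Block until every stream has finished.
    for process in jobs:
        process.wait()

    delta_time = time.time() - time_start
    print(f"All {len(jobs)} streams finished in {delta_time:.2f} s")
    ```

    Since each Popen call returns immediately, all the streams start at (nearly) the same time, which is exactly the behaviour you were trying to get from tmux windows.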