
How can I start a process and put it in the background in Python?


I am currently writing my first Python program (in Python 2.6.6). The program facilitates starting and stopping different applications running on a server by providing the user with common commands (similar to starting and stopping system services on a Linux server).

I start an application's startup script with:

p = subprocess.Popen(startCommand, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate()
print(output)

The problem is that the startup script of one application stays in the foreground, so p.communicate() waits forever. I have already tried putting "nohup startCommand &" in front of the startCommand, but that did not work as expected.
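(For what it's worth, "nohup startCommand &" does not help here because communicate() waits for EOF on the pipes, and the backgrounded script inherits the pipes' write ends and keeps them open. A minimal Python-only sketch of the idea, using a stand-in command and a temporary log file as placeholders for the real script: redirect the child's output to a real file instead of subprocess.PIPE, then poll that file the way the bash workaround does.)

```python
import os
import subprocess
import tempfile
import time

# Stand-in for the real startup script (hypothetical command for illustration):
# it announces readiness and then keeps running in the background.
command = "echo 'Server started in RUNNING mode'; sleep 2"

logpath = os.path.join(tempfile.mkdtemp(), "start.log")
logfile = open(logpath, "wb")

# Output goes to a real file, not subprocess.PIPE, so nothing holds a pipe
# open and there is no need to call communicate() at all.
p = subprocess.Popen(command, shell=True,
                     stdout=logfile, stderr=subprocess.STDOUT)

# Poll the log file for the startup message, like the bash loop does.
started = False
for _ in range(100):
    with open(logpath, "rb") as f:
        if b"Server started in RUNNING mode" in f.read():
            started = True
            break
    time.sleep(0.1)

print("STARTUP OK" if started else "STARTUP FAILED")
```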

As a workaround I now use the following bash script to call the application's start script:

#!/bin/bash

LOGFILE="/opt/scripts/bin/logs/SomeServerApplicationStart.log"

nohup /opt/someDir/startSomeServerApplication.sh >${LOGFILE} 2>&1 &

STARTUPOK=0
COUNTER=0

while [ $STARTUPOK -ne 1 ] && [ $COUNTER -lt 100 ]; do
   STARTUPOK=$(tail -1 ${LOGFILE} | grep "Server started in RUNNING mode" | wc -l)
   if (( STARTUPOK )); then
      echo "STARTUP OK"
      exit 0
   fi
   sleep 1
   COUNTER=$(( COUNTER + 1 ))
done

echo "STARTUP FAILED"
exit 1

The bash script is called from my Python code. This workaround works perfectly, but I would prefer to do it all in Python...

Is subprocess.Popen the wrong approach? How could I accomplish my task in Python only?


Solution

  • First, it is easy to avoid blocking the Python script in communicate()... by not calling communicate()! Just read the command's output (or error output) until you find the expected message, then simply stop caring about the command.

    # to avoid waiting for an EOF on a pipe ...
    def getlines(fd):
        line = bytearray()
        while True:
            c = fd.read(1)
            if not c:          # read() returns an empty string at EOF, never None
                return
            line += c
            if c == '\n':
                yield str(line)
                del line[:]
    
    p = subprocess.Popen(startCommand, shell=True, stdout=subprocess.PIPE,
                   stderr=subprocess.STDOUT) # send stderr to stdout, same as 2>&1 for bash
    for line in getlines(p.stdout):
        if "Server started in RUNNING mode" in line:
            print("STARTUP OK")
            break
    else:    # end of input without getting startup message
        print("STARTUP FAILED")
        p.poll()    # get status from child to avoid a zombie
        # other error processing
    

    The problem with the above is that the server is still a child of the Python process and could receive unwanted signals such as SIGHUP. If you want to make it a daemon, you must first start an intermediate subprocess that in turn starts your server. That way, when the first child ends, it can be waited on by the caller, and the server gets a PPID of 1 (it is adopted by the init process). You can use the multiprocessing module to ease that part.

    Code could be like:

    import multiprocessing
    import subprocess
    
    # to avoid waiting for an EOF on a pipe ...
    def getlines(fd):
        line = bytearray()
        while True:
            c = fd.read(1)
            if not c:          # read() returns an empty string at EOF, never None
                return
            line += c
            if c == '\n':
                yield str(line)
                del line[:]
    
    def start_child(cmd):
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                             shell=True)
        for line in getlines(p.stdout):
            print(line)
            if "Server started in RUNNING mode" in line:
                print("STARTUP OK")
                break
        else:
            print("STARTUP FAILED")
    
    def main():
        # other stuff in program
        p = multiprocessing.Process(target = start_child, args = (server_program,))
        p.start()
        p.join()
        print("DONE")
        # other stuff in program
    
    # protect program startup for multiprocessing module
    if __name__ == '__main__':
        main()
    

    One could wonder why the getlines generator is needed when a file object is itself an iterator that returns one line at a time. The problem is that, when the file is not connected to a terminal, that iterator reads ahead into an internal buffer, so with a pipe you may not see any line until the buffer fills or the server ends... which is not what is expected.
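If you would rather not maintain a generator at all, a standard alternative (not from the answer above, just the built-in library) is the two-argument form of iter(): iter(fd.readline, b"") calls readline() until it returns an empty string, and readline() on a pipe delivers each line as soon as it is complete, without the read-ahead buffering of plain file iteration. A small sketch with a stand-in command:

```python
import subprocess

# Stand-in command emitting a couple of lines (hypothetical; substitute the
# real server start command).
p = subprocess.Popen("echo one; echo 'Server started in RUNNING mode'",
                     shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)

# readline() returns b'' only at EOF, so iter() stops exactly there.
seen = []
for line in iter(p.stdout.readline, b""):
    seen.append(line)
    if b"Server started in RUNNING mode" in line:
        break
p.wait()    # reap the child once the message has been seen
```

Note the b"" sentinel: in Python 3 a subprocess pipe yields bytes; in Python 2, where pipes yield str, a plain "" sentinel works the same way.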