Tags: python, shell, subprocess, openfoam

Python - using subprocess to call a shell script with nested commands


I'm using Python to run a top-level genetic algorithm that optimizes 3D aircraft wing geometry for aerodynamic performance using OpenFOAM. I'm running Ubuntu 16.04, Python 3.6, and OpenFOAM 5. I have written several shell scripts that reside in the OpenFOAM case directory (the directory containing the geometry STL files, fluid dynamic parameters, and mesh). These scripts execute repetitive file management commands and OpenFOAM parallel process decomposition programs, along with the commands that generate the 3D mesh and run the simulation. The Python script has to live in a different directory entirely, because hundreds (if not thousands) of these case folders will be generated as offspring of the genetic algorithm. The problem is that calling these shell scripts with subprocess seems to spawn and terminate a shell for each command in the script. For instance, if my script is (brief version):

#!/bin/bash
rm constant/polyMesh/*
rm constant/triSurface/*.eMesh
rm -rf 0.*
find -type d -name '*e-0*' -exec rm -r {} +
decomposePar   # OpenFOAM utility

and I call said script with:

subprocess.Popen(['./shellScript'], cwd=r'./pathToDirectory')

I end up getting the error:

./shellScript: line #: decomposePar no such file or directory

The only way I can see this happening is if the shell is attempting to execute decomposePar in a directory other than the case directory (where it should go off without a hitch). For the case management commands, only the first command seems to work. I've scoured Stack Overflow for a solution to this problem, but I'm not sure that I've found one. I've tried setting the working directory of the Python script using os.chdir('casePath'), setting cwd='casePath', and several other subprocess arguments that I don't quite understand!

I know that I could just transcribe each shell command into Python, but that would be extremely tedious given the number of shell commands and the potential for these scripts to change in the future. I'd like my method to be modular and robust enough to run just a couple of these scripts, which I can modify to my needs. Additionally, at the end of the day this program will be deployed on a supercomputer. Parallel processing will happen at the C++ level in OpenFOAM, not in Python, so that is not one of my concerns. Is there any way to do this, and am I missing something painfully obvious? Bear with me, as computer science is not my strength; I am an aerospace engineer. Thanks!
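For what it's worth, the modular setup described above can be sketched on the Python side as a small wrapper around subprocess.run. This is only an illustration, not the asker's actual code; the script name Allrun and the offspring/ directory layout are assumptions:

```python
import subprocess
from pathlib import Path

def run_case_script(case_dir, script="./Allrun"):
    """Run a case-local shell script with the case directory as the
    working directory; check=True raises if the script exits non-zero."""
    return subprocess.run([script], cwd=case_dir, check=True)

# Hypothetical layout: one case directory per GA offspring.
# for case in sorted(Path("offspring").iterdir()):
#     run_case_script(str(case))
```

Keeping the script path relative and passing the case directory via cwd means the same wrapper works for every generated offspring folder.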


Solution

  • TL;DR: source the bashrc from the OpenFOAM installation at the start of the shell script.

    After some digging with the OP, the issue turned out to be that the directory containing decomposePar was not in the PATH environment variable.

    Unless you have . in your PATH (which is very dangerous; do not do this), changing the current directory has no effect on how decomposePar is found. Adding the full path to decomposePar in the shell script allowed decomposePar to be run.

    Example:

    #!/bin/bash
    rm constant/polyMesh/*
    rm constant/triSurface/*.eMesh
    rm -rf 0.*
    find -type d -name '*e-0*' -exec rm -r {} +
    /full/path/to/decomposePar
    
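    As an alternative to hard-coding the full path inside the script, the Python caller can extend PATH for the child process via the env argument of Popen/run. A minimal sketch; the OpenFOAM bin directory shown in the comment is an assumption, so substitute the one from your installation:

```python
import os
import subprocess

def env_with_openfoam(bin_dir):
    """Return a copy of the current environment with bin_dir prepended
    to PATH, so the child shell can find utilities like decomposePar."""
    env = os.environ.copy()
    env["PATH"] = bin_dir + os.pathsep + env.get("PATH", "")
    return env

# Hypothetical install location; adjust to your system:
# subprocess.Popen(['./shellScript'], cwd='./pathToDirectory',
#                  env=env_with_openfoam('/opt/openfoam5/platforms/linux64GccDPInt32Opt/bin'))
```

    Note that this only fixes PATH; as described next, the shared libraries still need LD_LIBRARY_PATH to be set.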

    However, when run it was unable to load the shared libraries it needed, because the OpenFOAM shared library directory wasn't in the LD_LIBRARY_PATH environment variable. To fix this, the OpenFOAM bashrc file was sourced at the beginning of the script.

    Example:

    #!/bin/bash
    source /path/to/bashrc
    rm constant/polyMesh/*
    ...
    

    Sourcing this file set the PATH and LD_LIBRARY_PATH environment variables correctly.
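    The sourcing can also be driven from the Python side, which leaves the case scripts untouched: run bash with a command string that sources the bashrc and then executes the script, all in one shell so the exported variables carry over. A sketch using the same placeholder paths as the script example:

```python
import shlex
import subprocess

def run_with_bashrc(bashrc, script, case_dir):
    """Source the OpenFOAM bashrc, then run the case script in the same
    shell so PATH and LD_LIBRARY_PATH reach the OpenFOAM utilities."""
    cmd = "source {} && {}".format(shlex.quote(bashrc), shlex.quote(script))
    return subprocess.run(["bash", "-c", cmd], cwd=case_dir, check=True)

# e.g. run_with_bashrc('/path/to/bashrc', './shellScript', './pathToDirectory')
```

    shlex.quote keeps paths with spaces from being split by the shell; check=True surfaces a failed script as a Python exception.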