I want to submit a Python script with Slurm, going through bash via "sbatch myscript.sh". In myscript.sh, Python is called as "python running.py",
which in turn uses "check_call" from the subprocess module to call srun, which finally runs a massively parallelized piece of software.
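To make this concrete, here is a stripped-down sketch of the two files; the #SBATCH directives, task count, and executable name below are placeholders, not my real values:

```bash
#!/bin/bash
#SBATCH --ntasks=64          # placeholder task count
#SBATCH --time=01:00:00      # placeholder walltime

python running.py
```

```python
# running.py
from subprocess import check_call

# srun runs inside the allocation created by sbatch, so it inherits the
# job's task count; "my_parallel_app" stands in for the real software.
check_call(["srun", "my_parallel_app"])
```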
I would like to know whether this will hurt performance somehow. My worry is that the Python process will take up one of the job's processes, since it is the process that was submitted.
If a Slurm/HPC guru could answer my question I would be grateful!
If I understand the workflow correctly:
sbatch -> bash script -> Python script -> srun -> HPC workload
If that is the case, then without more information I would say you are introducing a very small (negligible) overhead when submitting the job. The Python process just sits on the first allocated node waiting for "check_call" to return, so it does not take a task slot away from the srun step. And since this chain only runs once at startup, the runtime performance of the workload itself won't be affected.