I am dealing with large sensor data structures stored in FITS files, which I open and read in using the astropy module.
For analysis I have written several scripts (e.g. analysis1.py) that deal with this data in different ways, each one creating an image file with several subplots.
Now I want to create an init.py script that initializes things like the filename and several analysis parameters, invoked like this:
init.py filename arg2 arg3 arg4
I then want to pass these on to all the analysis scripts, e.g. analysis1.py:
#analysis1.py
from init import filename, arg2, arg3, arg4
# import everything else necessary
# do things with the data
# output picture as png-file
using bash, but without explicitly passing them in as arguments again, like this:
#!/bin/bash
# initialization of parameters and data read-in
python init.py filename arg2 arg3 arg4
# analysis scripts running in parallel using the parameters of init.py
python analysis1.py &
python analysis3.py &
I do it this way because I usually work on multi-core CPU machines and want to use as many threads/cores as possible for efficiency.
Simply importing init of course doesn't work, because there argv doesn't contain the parameters from the earlier run:
import init
from sys import argv
print(argv)
# prints ['mypath/init.py']
How could I achieve this?
As Kendas says, you will have to persist the arguments on disk somewhere.
This will work:
init.py
import sys
import pickle

def get_parms():
    # Read the previously saved command-line arguments back from disk
    with open("init.dat", "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    # Save this run's command-line arguments (including the script path)
    myparms = sys.argv[:]
    with open("init.dat", "wb") as f:
        pickle.dump(myparms, f)
analysis1.py
import init

# Retrieve the arguments that init.py saved to disk
myparms = init.get_parms()
print(myparms)
If you run init.py like this:
python init.py filename arg1 arg2 arg3
then analysis1.py will produce this output:
['(pathname)/init.py', 'filename', 'arg1', 'arg2', 'arg3']
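For convenience you can unpack the saved list into named variables, much like the from init import ... you originally had in mind (a minimal sketch, assuming exactly four arguments were passed to init.py):
import init

# First element is the path to init.py; the rest are the saved arguments.
# This assumes exactly four arguments were given on the init.py command line.
script, filename, arg2, arg3, arg4 = init.get_parms()
print(filename, arg2, arg3, arg4)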
You will almost certainly want to allow analysis1.py to accept command-line parameters as overrides to the values from init.
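One way to do that (a sketch, not part of the original recipe; the option names are just examples) is to treat the pickled values as defaults and let argparse override them:
import argparse
import init

# Drop the script path; keep the four saved arguments.
# Assumes init.py was run with exactly four arguments.
saved = init.get_parms()[1:]

parser = argparse.ArgumentParser()
# Hypothetical option names; adjust to your real parameters
parser.add_argument("--filename", default=saved[0])
parser.add_argument("--arg2", default=saved[1])
parser.add_argument("--arg3", default=saved[2])
parser.add_argument("--arg4", default=saved[3])
args = parser.parse_args()

print(args.filename, args.arg2, args.arg3, args.arg4)
Run with no options, this reproduces the saved values; run with e.g. --arg2 newvalue, it overrides just that one.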