I'm currently reading a file and importing the data in it with the line:
# Read data from file.
data = np.loadtxt(join(mypath, 'file.data'), unpack=True)
where the variable mypath is known. The issue is that the file file.data will change over time, taking names like:
file_3453453.data
file_12324.data
file_987667.data
...
So I need a way to tell the code to open the file in that path whose name matches file*.data, assuming there will always be only one file with such a name in the path. Is there a way to do this in Python?
You can use the glob module. It allows pattern matching on filenames and does exactly what you're asking:
import glob
# The pattern must include the filename wildcard, not just the directory.
for fpath in glob.glob(join(mypath, 'file*.data')):
    print(fpath)
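Applied to the question's exact case, here is a minimal sketch (the helper name find_data_file is my own, not a library function) that also verifies exactly one matching file exists before you load it:

```python
import glob
import os.path

def find_data_file(mypath):
    """Return the single file in mypath matching file*.data.

    The question assumes exactly one such file exists, so any
    other count is treated as an error.
    """
    matches = glob.glob(os.path.join(mypath, 'file*.data'))
    if len(matches) != 1:
        raise FileNotFoundError(
            'expected exactly one file*.data in %s, found %d'
            % (mypath, len(matches)))
    return matches[0]
```

You would then load the data with data = np.loadtxt(find_data_file(mypath), unpack=True).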
For example, I have a directory with files named google.xml, google.json, and google.csv. I can use glob like this:
>>> import glob
>>> glob.glob('g*gle*')
['google.json', 'google.xml', 'google.csv']
Note that glob uses the fnmatch module under the hood, but it offers a simpler interface and matches paths rather than just filenames. You can search relative paths and don't need os.path.join. In the example above, if I change to the parent directory and match the file names again, glob returns the relative paths:
>>> import os
>>> import glob
>>> os.chdir('..')
>>> glob.glob('foo/google*')
['foo/google.json', 'foo/google.xml', 'foo/google.csv']
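To see the fnmatch relationship concretely, the same shell-style wildcard can be tested against individual names with fnmatch.fnmatch; this is the per-name check that glob applies to actual directory entries:

```python
import fnmatch

# fnmatch checks one name against a shell-style pattern;
# glob runs this same test over real directory listings.
print(fnmatch.fnmatch('file_12324.data', 'file*.data'))  # True
print(fnmatch.fnmatch('notes.txt', 'file*.data'))        # False
```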