I have an .odb file, plate2.odb, from which I want to extract the strain data. To do this I wrote the simple code below, which loops through the field output E (strain) for each element and saves the values to a list.
from odbAccess import openOdb
import pickle
# open the output database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)
# load the strain values into a list
E = []
for i in range(1000):
    E.append(odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data)
# save the data
with open("mises.pickle", "wb") as output_file:
    pickle.dump(E, output_file)
odb.close()
The issue is that the for loop loading the strain values into a list takes a long time (35 seconds for 1000 elements). At this rate (0.035 seconds per query), it would take about 2 hours to extract the data for my model with 200,000 elements. Why is this taking so long? How can I accelerate it?
A single strain query outside any Python loop takes about 0.04 seconds, so I know the Python loop itself is not the issue.
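For reference, this is roughly how the timings were taken (a minimal sketch; the timing calls and variable names here are illustrative and not part of the original script):

import time
from odbAccess import openOdb

odb = openOdb('./plate2.odb')

# one full lookup, outside any loop
t0 = time.time()
e0 = odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[0].data
print('single query: %.3f s' % (time.time() - t0))

# the same full lookup repeated inside a loop
t0 = time.time()
for i in range(100):
    ei = odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data
print('per query in loop: %.3f s' % ((time.time() - t0) / 100))

odb.close()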
It turned out that every pass through the loop was re-traversing the ODB hierarchy (odb.steps[...].frames[...].fieldOutputs[...]) just to read one strain value. To fix the problem, I stored a reference to the field output object outside the loop and indexed into that instead. The updated code, which runs in a fraction of a second, is below.
from odbAccess import openOdb
import pickle
# open the output database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)
# load the strain values into a list
E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data)
# save the data
with open("mises.pickle", "wb") as output_file:
    pickle.dump(E, output_file)
odb.close()
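If per-value access is still noticeable on a 200,000-element model, two further options are worth sketching. This is a sketch only: it assumes a reasonably recent Abaqus release, and bulkDataBlocks in particular may not exist in older versions. The first option iterates the cached values sequence directly instead of indexing it; the second pulls the raw arrays in bulk.

import numpy as np
from odbAccess import openOdb

odb = openOdb('./plate2.odb')
field = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# option 1: iterate the cached values sequence directly
E = [v.data for v in field.values]

# option 2 (assumes bulkDataBlocks is available in your Abaqus version):
# each block holds the raw strain components for one element type / part instance
E_bulk = np.concatenate([block.data for block in field.bulkDataBlocks])

odb.close()

Note that, as I understand it, bulkDataBlocks groups the data by element type and part instance, so the ordering of E_bulk may differ from the ordering you get by looping over values.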