I calculate the sum of the volumes at all integration points like this:
volume = f.steps['Step-1'].frames[-1].fieldOutputs['IVOL'].values
# CALCULATE TOTAL VOLUME AT INTEGRATION POINTS
V = 0
for i in range(len(volume)):
    V = V + volume[i].data
The length of volume in my problem is about 27,000,000, so this takes too long. I am trying to parallelize this process with the multiprocessing module in Python. As far as I know, the data has to be split into several parts for that. Could you give me some advice about splitting the data in odb files into several parts, or about parallelizing that code?
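A minimal sketch of that split-and-sum idea (the worker function and pool size are hypothetical; note that FieldValue objects from an odb cannot be pickled, so the values must first be copied into plain Python numbers, and that copy is itself the slow part):

import multiprocessing

def partial_sum(chunk):
    # Sum one chunk of plain floats in a worker process
    return sum(chunk)

if __name__ == '__main__':
    # Copy the odb values into plain numbers first; odb FieldValue
    # objects cannot cross process boundaries
    data = [v.data for v in volume]
    n_workers = 4
    size = len(data) // n_workers + 1
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    pool = multiprocessing.Pool(n_workers)
    try:
        V = sum(pool.map(partial_sum, chunks))
    finally:
        pool.close()
        pool.join()

Because the per-value extraction loop dominates the runtime, splitting alone does not help much; the bulkDataBlocks approach in the answer below avoids that loop entirely.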
You can make use of the bulkDataBlocks command available for field output objects. Using bulkDataBlocks, you can access all the field data as an array (an Abaqus array, actually, but NumPy arrays and Abaqus arrays are largely the same), and hence manipulation becomes very easy.
For example:
# Accessing the bulkDataBlocks object
ivolObj = f.steps['Step-1'].frames[-1].fieldOutputs['IVOL'].bulkDataBlocks
# Accessing the data of the first block as an array
# --> please note it is an Abaqus array type, not a numpy array
ivolData = ivolObj[0].data
# Now you can sum it up
ivolSum = sum(ivolData)
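Note that bulkDataBlocks returns a sequence of blocks, and there can be more than one (for example, one per element type or part instance), so a more robust sketch sums over all of them. This assumes numpy is importable in the Abaqus Python interpreter, which it normally is:

import numpy as np

# Sum the IVOL data over every bulk data block, in case the field
# is split across several element types or part instances
ivolBlocks = f.steps['Step-1'].frames[-1].fieldOutputs['IVOL'].bulkDataBlocks
totalVolume = 0.0
for block in ivolBlocks:
    # block.data behaves like a numpy array, so a vectorized sum is fast
    totalVolume += np.asarray(block.data).sum()
print(totalVolume)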