I have a bunch of CSV files where each column holds a value representing the angle of one of the Canadarm2's 7 segments. The largest file contains values for every second of a 5-hour mission and is 30 megs!
I've written the Python script below in C4D's Script Manager. It reads data from the CSV and creates keyframes every second, rotating the segments according to the angle data.
The script works well on small files up to about 1 meg, but on larger files I get the dreaded "Cinema 4D has stopped responding" alert, or it just sits there silently with apparently nothing happening.
Has anyone had any experience with something like this? Might more memory than my 12 gigs help? If I could process, say, 30 minutes of data at a time inside C4D, I'd have a winner.
memLog = ''
record = 0
frame = 0
path = '/Users/...'
filename = path + '30minutes_3meg.csv'
fileobj = open(filename, 'r')
rowcount = 0
for row in fileobj:
    rowcount = rowcount + 1
    if rowcount > 1:  # skip the header row
        ar = row.split(',')
        colcount = 0
        for angle in ar:
            if colcount == 0:
                log = angle
                if log == memLog:
                    record = 0  # same timestamp as last row: skip it
                else:
                    record = 1
                    print log
                    frame = frame + 1
                    memLog = log
            if colcount == 2:
                if record == 1:
                    rotate(frame, 'SR', 'r', angle)  # rotate() defined elsewhere
            elif colcount == 4:
                if record == 1:
                    rotate(frame, 'SY', 'r', angle)
            elif colcount == 6:
                if record == 1:
                    rotate(frame, 'SP', 'r', angle)
            elif colcount == 8:
                if record == 1:
                    rotate(frame, 'EP', 'h', angle)
            elif colcount == 10:
                if record == 1:
                    rotate(frame, 'WP', 'h', angle)
            elif colcount == 12:
                if record == 1:
                    rotate(frame, 'WY', 'h', angle)
            elif colcount == 14:
                if record == 1:
                    rotate(frame, 'WR', 'h', angle)
            colcount = colcount + 1
Python has a csv module which might make things easier.
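For example, a minimal sketch of reading with the csv module instead of splitting lines by hand. The column layout mirrors the script above (column 0 is the timestamp, the even columns hold angles); the in-memory sample data is made up and stands in for your real file:

    import csv
    import io

    # Made-up sample standing in for the real CSV; same column layout as
    # the script above: column 0 is the log/timestamp, 2, 4, ... are angles.
    sample = io.StringIO(
        "log,x,SR,x,SY,x,SP,x,EP,x,WP,x,WY,x,WR\n"
        "00:00:01,0,10.5,0,20.5,0,30.5,0,40.5,0,50.5,0,60.5,0,70.5\n"
    )
    reader = csv.reader(sample)
    next(reader)  # skip the header row -- no rowcount bookkeeping needed
    angles = []
    for row in reader:
        # csv handles the splitting; row is already a list of string fields
        angles.append((row[0], float(row[2]), float(row[4])))  # log, SR, SY

With the real file you'd pass an open file object to csv.reader() and call rotate() where the tuple is built.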
If you're running into memory problems, I'd suggest doing a forced garbage collection: import gc at the top, then call gc.collect() in the loop. It's probably not required on every iteration, and would even slow things down if done too frequently, so keep a counter variable (rowcount would do) and collect less often, say every 10,000 rows. Experiment. Consider adding a print statement indicating the collection happening and the row number; that way you can tell whether the script is still running or has locked up.
if rowcount % 10000 == 0:
    print 'collecting', rowcount
    gc.collect()
If you can install external Python modules, you could also try using Pandas to load the CSV file. It generally performs better on very large files, but you might have to specify a chunksize if you're still running out of memory.
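Chunked reading might look something like this (a sketch with made-up in-memory sample data; whether pandas can be installed for C4D's bundled Python is an assumption you'd need to verify):

    import io
    import pandas as pd  # external module; must be available to C4D's Python

    # Made-up sample standing in for the real 30 MB file. chunksize keeps
    # only that many rows in memory at a time instead of the whole file.
    sample = io.StringIO(
        "log,SR,SY\n"
        "00:00:01,10.5,20.5\n"
        "00:00:02,11.0,21.0\n"
        "00:00:03,11.5,21.5\n"
    )
    rows_seen = 0
    for chunk in pd.read_csv(sample, chunksize=2):  # DataFrame of <= 2 rows
        for row in chunk.itertuples(index=False):
            rows_seen += 1  # call rotate(...) per joint column here instead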