I'm experiencing some really strange behavior, and I'm hoping someone can explain to me what's going on. I've been over the official Python documentation and don't see anything that explains what I'm seeing - if I missed something, please point it out for me.
I have a (large) Python script that takes roughly 150 KB of text, housed in the "content" variable, and uploads it to a Perl script on a remote system for logging. It uses urllib to do this:
import urllib

thash = {'file_name': file_name, 'contents': content}
upload = urllib.urlencode(thash)
post = urllib.urlopen("http://path.to.perl.script/log_writer.pl", upload)
#post.read()
The problem is that the file "log_writer.pl" writes is truncated at a seemingly random point, which varies with the length of "content" - unless I call post.read() after the urlopen call in my Python script.
I'm new to Python and still very new to Perl, but my understanding is that it shouldn't work this way. Why would the remote Perl script write the whole file when I call post.read() locally in my Python script?
You need to do something with the response object in 'post' after making the request. If you never reference 'post' again (e.g. with post.read(), post.close(), etc.), the response object becomes eligible for garbage collection, and the underlying connection can be closed before the server has finished handling the POST. Calling post.read() blocks until the server sends its response, which guarantees the request completed on the remote end.
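Here's a minimal sketch of the fixed upload, assuming Python 2's urllib (which your urlopen call implies) and that file_name and content are defined as in your question:

import urllib

thash = {'file_name': file_name, 'contents': content}
upload = urllib.urlencode(thash)

post = urllib.urlopen("http://path.to.perl.script/log_writer.pl", upload)
try:
    response = post.read()  # block until the server has fully processed the POST
finally:
    post.close()            # release the socket explicitly rather than relying on GC

The try/finally ensures the connection is always closed, even if the read raises, so the completion of the request never depends on when the garbage collector happens to run.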
See "should I call close() after urllib.urlopen()?" for more information.