All, I've got an issue with Django signals.

I have a model. In an effort to speed up page-load responsiveness, I'm offloading some intensive processing that must be done, via a call to a second localhost webserver we're running; both use the same database. I'm seeing behavior where the calling process can retrieve the object, but the called process can't. Both port 80 and port [port] point to Django processes running off the same database.
In models.py
    class A(models.Model):
        # stuff...

    def trigger_on_post_save(sender, instance, created, raw, **kwargs):
        # This line works
        A.objects.get(pk=instance.pk)
        # Then we call this
        urlopen(r'http://127.0.0.1:[port]' +
                reverse(some_view_url, args=(instance.pk,))).read()

    post_save.connect(trigger_on_post_save, sender=A)
In views.py
    def some_view_function(request, a_pk):
        # This line raises A.DoesNotExist
        A.objects.get(pk=a_pk)
Furthermore, after the urlopen call raises an exception, the object does not exist in the database. It was my understanding that post_save was called after the object had been saved, and written to the database. Is this incorrect?
I believe post_save fires after the save occurs, but before the transaction is committed to the database. By default, Django only commits changes to the database after the request has been completed.
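You can see the same effect outside Django with plain sqlite3 (a minimal sketch, not your actual setup): a second connection, standing in for your second webserver, cannot see a row written inside another connection's still-open transaction.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE a (pk INTEGER PRIMARY KEY)")
writer.commit()

reader = sqlite3.connect(path)  # plays the role of the second webserver

# The INSERT opens an implicit transaction that has not committed yet.
writer.execute("INSERT INTO a (pk) VALUES (1)")

# The writer sees its own uncommitted row (like your signal handler)...
assert writer.execute("SELECT COUNT(*) FROM a").fetchone()[0] == 1
# ...but the other connection does not (like your second server).
assert reader.execute("SELECT COUNT(*) FROM a").fetchone()[0] == 0

writer.commit()
# Only after commit does the other connection see the row.
assert reader.execute("SELECT COUNT(*) FROM a").fetchone()[0] == 1
```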
Two possible solutions to your problem: commit the transaction manually before making the call (e.g. `transaction.commit()`), or defer the call until after the transaction has been committed (in newer Django versions, `transaction.on_commit()` does exactly this).
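The deferral pattern can be sketched in plain Python (a hypothetical `Transaction` class, not real Django code; `transaction.on_commit()` in Django 1.9+ provides the real hook): callbacks registered during a transaction run only once it commits, so by the time the second server is contacted, the row is visible.

```python
class Transaction:
    """Toy model of defer-until-commit, mimicking transaction.on_commit()."""

    def __init__(self):
        self._on_commit = []

    def on_commit(self, callback):
        # Queue the work instead of doing it immediately inside the
        # transaction (where other connections can't see our writes yet).
        self._on_commit.append(callback)

    def commit(self):
        # ... writes are flushed to the database here ...
        callbacks, self._on_commit = self._on_commit, []
        for cb in callbacks:
            cb()  # now another connection would see the committed row

calls = []
txn = Transaction()
txn.on_commit(lambda: calls.append("urlopen to second server"))
assert calls == []  # nothing has been called yet
txn.commit()
assert calls == ["urlopen to second server"]
```

In your case the queued callback would be the `urlopen(...)` currently made directly in the signal handler.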
To be honest, though, your whole setup seems a little bit nasty. You should probably look into Celery for asynchronous task queuing instead.