# models.py
from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
    text_blob = models.CharField(max_length=50000)
# tasks.py
import celery

@celery.task
def my_task(person):
    # example operation: does something to person;
    # needs only a few of the attributes of person,
    # not the entire bulky record
    person.first_name = person.first_name.title()
    person.last_name = person.last_name.title()
    person.save()
Somewhere in my application I have something like:
from models import Person
from tasks import my_task
import celery
g = celery.group([my_task.s(p) for p in Person.objects.all()])
g.apply_async()
How can I efficiently and evenly distribute the Person records to workers running on multiple machines?
Would the approach below be a better idea? Or would it overwhelm the DB if Person has a few million records?
# tasks.py
import celery
from models import Person

@celery.task
def my_task(person_pk):
    # example operation that does not need text_blob
    person = Person.objects.get(pk=person_pk)
    person.first_name = person.first_name.title()
    person.last_name = person.last_name.title()
    person.save()
# In my application somewhere
from models import Person
from tasks import my_task
import celery
g = celery.group([my_task.s(p.pk) for p in Person.objects.all()])
g.apply_async()
I believe it is better and safer to pass the PK rather than the whole model object. Since a PK is just a number, serialization is also much simpler. Most importantly, you can use a safer serializer (JSON/YAML instead of pickle) and have peace of mind that you won't run into problems serializing your model.
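As a minimal sketch of what that looks like (the module name, app name, broker URL and batch of settings below are assumptions on my part; your question uses the older @celery.task style, where the equivalent settings are the uppercase CELERY_* ones):

# celery_app.py -- a minimal sketch, not taken from your project
from celery import Celery

app = Celery('myproject', broker='redis://localhost:6379/0')

# JSON can serialize a plain integer PK but not a Django model instance,
# so passing PKs lets you stay on the non-pickle serializers.
app.conf.update(
    task_serializer='json',
    result_serializer='json',
    accept_content=['json'],
)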
As this article says:
Since Celery is a distributed system, you can't know in which process, or even on what machine, the task will run. So you shouldn't pass Django model objects as arguments to tasks; it's almost always better to re-fetch the object from the database instead, as there are possible race conditions involved.
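To address your worry about a few million records: you don't need to load the full objects just to enqueue them. Fetch only the primary keys and dispatch them in batches, something like the rough sketch below (the function name and the batch size of 1000 are illustrative assumptions, not part of your code):

# dispatch.py -- rough sketch of enqueueing by PK in batches
import celery
from models import Person
from tasks import my_task

def dispatch_all(batch_size=1000):
    # Pull only the primary keys, streaming them with iterator() instead of
    # materialising millions of full Person rows (including text_blob) in memory.
    pks = Person.objects.values_list('pk', flat=True).iterator()

    batch = []
    for pk in pks:
        batch.append(pk)
        if len(batch) >= batch_size:
            # Each group message carries only integers; workers re-fetch the row.
            celery.group(my_task.s(pk) for pk in batch).apply_async()
            batch = []
    if batch:
        celery.group(my_task.s(pk) for pk in batch).apply_async()

Inside the task you could also use Person.objects.only('first_name', 'last_name').get(pk=person_pk) so the worker never pulls text_blob from the database either.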