I'm using MongoDB with Mongoskin. In a collection I'm saving events.
Among other fields, these events have a start and an end, saved as Dates
in MongoDB.
events {
    start: "Date1",
    end: "Date2",
    ...
}
When inserting new documents into this collection I need a constraint that forbids inserting a document whose start-end dates overlap an event already created. In short, I don't want any two events to share the same time span.
Question: Is there a way to handle this constraint through MongoDB with some kind of unique index? I think not, but please correct me if I'm wrong!
If not:
Question: Do I have to check for possible overlaps through code before inserting new events? Do I need to set up some kind of write lock, so that another user can't squeeze in an event between the time I check for overlaps and the time I insert my own event? How is this done in MongoDB?
EDIT
This is the best way I have come up with so far, and it actually seems to work pretty well.
var input = getPostInput();
// Match any existing event whose span overlaps [input.start, input.end]:
// an overlap exists when a stored event starts before the new one ends
// and ends after the new one starts.
var query = {$and: [
    {start: {$lte: input.end}},
    {end: {$gte: input.start}}
]};
db.events.findAndModify(query, {}, {$setOnInsert: input}, {new: true, upsert: true}, callback);
It uses findAndModify
as a kind of "findOrCreate" operator. $setOnInsert
adds the POST input properties only if findAndModify
doesn't find a matching document, and upsert: true
says it should create a document if none is found. In combination, these two options act as a findOrCreate operator.
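The query above is the usual interval-intersection test. As a plain JavaScript predicate (the function name is mine, for illustration), it looks like:

```javascript
// Standard closed-interval intersection test: two spans overlap
// exactly when each one starts no later than the other ends.
// This mirrors the $lte/$gte pair in the MongoDB query.
function overlaps(aStart, aEnd, bStart, bEnd) {
  return aStart <= bEnd && aEnd >= bStart;
}
```

Because the comparisons are inclusive, two events that merely share an endpoint (one ends at the instant the other starts) also count as overlapping.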
EDIT
Problems arise when updating (PUT) an event. I can't reuse the code above because it relies on upsert
and $setOnInsert
.
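One possible workaround for the update path (a sketch of my own, not a drop-in solution): run the same overlap query but exclude the event being updated via $ne, and only apply the change when no conflict is found. Unlike the upsert trick, this check-then-update is not atomic, so it still needs some external coordination such as the lock discussed below.

```javascript
// Hypothetical PUT handler: reject the update if any OTHER event
// overlaps the new span. `events` is the collection handle and
// `input` carries _id, start and end. NOT atomic on its own --
// a concurrent write can land between the check and the update.
function updateEvent(events, input, callback) {
  var conflictQuery = {
    _id: {$ne: input._id},        // ignore the event being edited
    start: {$lte: input.end},     // same overlap test as the insert
    end: {$gte: input.start}
  };
  events.findOne(conflictQuery, function (err, conflict) {
    if (err) return callback(err);
    if (conflict) return callback(new Error('overlapping event'));
    events.update({_id: input._id},
                  {$set: {start: input.start, end: input.end}},
                  callback);
  });
}
```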
EDIT
@wdberkeley:
I'm still struggling with this main problem: ensuring uniqueness on a range. The more I think about it, the more it seems that "the array of time slices" might be the least problematic solution. For example, let's say that 5 minutes is chosen as the smallest time period, and the average booking is 45 minutes. This would require me to save 9 numbers (probably dates): timespan = [0,5,10,15,20,25,30,35,40]
, instead of two: start=0, end=45
.
This is more than four times as much saved data for the average booking.
I don't mean to be harsh, but don't you see this as a problem? Or does it only become a problem when the saved data is 10 times larger, or 100 times larger? I do realise that this is also relative to the total number of bookings actually made...
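To make the overhead concrete, here is a small sketch (function name is mine) that expands a start/end pair into the array of slice start times, treating the end as exclusive so a 0-45 booking at 5-minute resolution yields exactly the 9 entries above:

```javascript
// Expand [start, end) into slice start times at a fixed step.
// A 45-minute booking at 5-minute resolution produces 9 values,
// versus the 2 values (start, end) the range form needs.
function toSlices(start, end, step) {
  var slices = [];
  for (var t = start; t < end; t += step) slices.push(t);
  return slices;
}
```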
There isn't a dead simple way to do this in MongoDB. I cooked up one alternative option that could work for you. If your dates come in discrete steps, as in a booking application where users reserve objects by the day or the hour, then you can use a combination of unique indexes and multikey indexes. For example, suppose reservations are by day. John Q reserves October 11 to October 14, inclusive. That is something like the 281st to 284th days of the year - let's assume that's exactly which days it is. Save the reservation field as an array of the days that are reserved:
> db.reservations.insert({ "span" : [ 281, 282, 283, 284 ] })
Put a unique index on the span
field.
> db.reservations.ensureIndex({ "span" : 1}, { "unique" : 1 })
Now you can't insert a document that has any of those days in its span:
> db.reservations.insert({ "span" : [ 279, 280, 281, 282 ] })
// unique key error
This might work for you with some additional tweaking to take the year into account, or the span might be part of a compound unique index that makes timespans unique per room_id
for a hotel booking, for example.
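Building the span array from a pair of Dates can be sketched like this (helper names are mine; it assumes both dates fall in the same year, matching the caveat about year handling above):

```javascript
// Day-of-year number for a UTC date: Jan 1 -> 1, Jan 2 -> 2, ...
function dayOfYear(date) {
  var jan1 = Date.UTC(date.getUTCFullYear(), 0, 1);
  return Math.floor((date.getTime() - jan1) / 86400000) + 1;
}

// Inclusive array of reserved day numbers for the "span" field,
// ready for insertion into the unique multikey index.
function reservationSpan(start, end) {
  var span = [];
  for (var d = dayOfYear(start); d <= dayOfYear(end); d++) span.push(d);
  return span;
}
```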
Another way is just to coordinate the checks on the client side. If you have multiple clients that don't talk to each other at all, I guess the best way to do this would be to share a "lock" in the database: findAndModify
a document in a lock
collection to check and acquire a lock. Once a client has the lock by changing a field on that document, it can check for overlaps with a query, do the insert if all is well, and then release the lock by changing the flag on the lock document again.
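That lock pattern might be sketched like this (collection name, document _id, and field names are all my own invention). The key point is that findAndModify atomically flips the flag, so only one client can move it from false to true at a time:

```javascript
// Cooperative lock on a single document in a "locks" collection.
// acquireLock succeeds only if it atomically flips locked from
// false to true; the callback receives null when the lock is held
// by someone else. releaseLock flips it back.
function acquireLock(locks, callback) {
  locks.findAndModify(
    {_id: 'events-lock', locked: false},  // only match a free lock
    [],
    {$set: {locked: true}},
    {new: true},
    callback);
}

function releaseLock(locks, callback) {
  locks.findAndModify(
    {_id: 'events-lock', locked: true},
    [],
    {$set: {locked: false}},
    {new: true},
    callback);
}
```

Between a successful acquireLock and the matching releaseLock, a client can safely run the overlap query and the insert; a crashed client that never releases the lock would need a timeout or similar recovery, which this sketch does not handle.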