I've come across a problem with transactions and linked entities. I've got code like this (not real code, just to get the idea):
User *user = [User new];
user.username = @"test";
[user commit];
user = [[[User query]where:@"username = \"test\""]fetch][0];
Session *session = [Session new];
session.user = user;
[session commit];
session = [[[Session query]whereWithFormat:@"user = %@", user.Id]fetch][0];
Config *config = [Config new];
config.user = user;
[config commit];
config = [[[Config query]whereWithFormat:@"user = %@", user.Id]fetch][0];
Using this scheme without transactions, everything works fine. But as soon as I try to execute it inside a transaction, I end up with the user object not being linked in Session and Config (the query inside the transaction never finds the previously committed object). As I understand from the documentation, commit is not a transactional commit but simply either an insert or an update.
Is there a way to get operations like this (create and link multiple entities) to work inside a transaction using DBAccess?
Thanks,
mic
Sorry for the delay, I thought I had responded to this but clearly I got distracted.
I have replicated your issue exactly, and I would like to explain why you are seeing what you are.
If we take the example below, based on your feedback:
[DBTransaction transaction:^{
    Department *dept = [Department new];
    dept.name = @"test dept";
    [dept commit];
    dept = [[[Department query] where:@"name = 'test dept'"] fetch].firstObject;

    Person *person = [Person new];
    person.Name = @"test";
    person.department = dept;
    [person commit];
    person = [[[Person query] whereWithFormat:@"department = %@", dept.Id] fetch].firstObject;
} withRollback:^{
}];
The reason you see this behaviour is that DBAccess records all of the object changes and instructions generated by any calls to commit or remove.
These are wrapped up within a transaction: as the code in the block runs, we build up a list of the instructions in the order they are called, and once the block has finished we execute the transaction and check for errors.
This means that while you are within the block, the queries will not return results, because the records are not in the database yet.
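Conceptually, the mechanism behaves something like the following sketch (plain Objective-C, purely illustrative; these are not the actual DBAccess internals):

```objc
// Illustrative only: commits inside the block merely enqueue work.
NSMutableArray<dispatch_block_t> *recorded = [NSMutableArray new];

// Each call to commit/remove inside the block records an instruction:
[recorded addObject:^{
    // the actual INSERT/UPDATE/DELETE, executed later
}];

// Queries, by contrast, run immediately against the database, which
// has not yet been modified, so newly "committed" objects are absent.

// Once the block returns, the recorded instructions are executed in
// order as a single transaction, and errors are checked afterwards.
for (dispatch_block_t instruction in recorded) {
    instruction();
}
```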
So a DBAccess transaction is not conceptually similar to a SQL transaction. I would like to explain why this came about; it is largely down to the difference between dealing with database commands and dealing with ORM objects.
Our original implementation was as you would expect: the transaction simply started a new SQL transaction, executed the statements in order, and committed.
The problem arose because you could have transactions within transactions, or transactions within the entityWillDelete, Update or Insert methods, or even cancel events generated from the event model, none of which SQL knows anything about.
Locking the data writer object for a table while within a transaction would simply deadlock the thread. So we rewrote the transaction system to record a list of changes to be made and then commit them, inclusive of any child or related changes that may be created by triggers or events.
That is why we are where we are presently, which, I will be the first to admit, is not ideal.
Well, that depends on the reasoning behind using a transaction. If it was for performance, then you can use a DBContext instead, which works much like a normal SQL transaction, but with the downside that it does not track changes made to an object by any other code that may inadvertently be called from within the block.
Below is an example of a context working correctly with embedded objects, but your original problem with queries will still be a problem.
DBContext* c = [DBContext new];
Department *dept = [Department new];
dept.name = @"test dept";
[c addEntityToContext:dept];
Person *person = [Person new];
person.Name = @"test";
person.department = dept;
[c addEntityToContext:person];
[c commit];
But if your intention was to avoid orphaned records or data integrity issues, then the context is not ideal, because there is no rollback clause.
Originally DBAccess did not have any transaction support, for a very simple reason: with the exception of database corruption or running out of space, it is not possible to either accidentally or deliberately generate errors that would stop an object from being committed to the database. All assignments to objects are tested for persistence capability, and errors are raised on the delegate if a developer tries to store an object which is not supported by the ORM; but that would be a coding error, common across all versions of the application, that would be found in testing.
I appreciate that is a bit of a big claim, but it holds because there are no constraints, the storage system is type-flexible (it will store a string in a number column if it needs to), operations are thread safe, and WAL mode is used to overcome SQLite's legacy concurrency issues.
Objects can also be persisted in trees: if you have a Person class with a related object defined as a property, a commit on the person will also commit any child objects in a single operation, cascading down.
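For example, using the Person and Department classes from above, a single commit on the parent is enough:

```objc
Department *dept = [Department new];
dept.name = @"test dept";

Person *person = [Person new];
person.Name = @"test";
person.department = dept;

// Committing person cascades down: dept is persisted in the same
// operation, without a separate [dept commit].
[person commit];
```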
Not everyone was happy with that kind of implementation, so we implemented transactions as a method that people feel more comfortable with.
Well, it certainly seems there is a valid bug in your example, in that the cascading of object committal and assignment is broken: the object's Id is pulled out and added as a parameter in a SQL statement, but that object has not been assigned a primary-key value at that point. We will fix this by storing the actual DBObject itself, which, by the time that particular statement is executed, will have been assigned a primary-key value, solving the issue.
As a workaround that should work now: NSString*-based Id columns are assigned a GUID, which is available before persistence.
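So, assuming your model declares its Id property as NSString* (the exact declaration will depend on your schema), something like this should link correctly inside a transaction today:

```objc
[DBTransaction transaction:^{
    User *user = [User new];   // user.Id is already a GUID string here
    user.username = @"test";
    [user commit];

    Session *session = [Session new];
    session.user = user;       // the link carries a valid Id value
    [session commit];
} withRollback:^{
}];
```

Note that the in-block queries will still return nothing until the transaction completes; the GUID only solves the linkage problem.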
If you care to send us an email, we will keep you updated or if indeed you would like to provide us with feedback or implementation ideas we would be most grateful.
Thanks, Adrian