I'm interested in using db4o to store the training data for a learning algorithm. This will consist of (potentially) hundreds of millions of objects, each averaging about 2 KB in size based on my benchmarking.
The training algorithm needs to iterate over the entire set of objects repeatedly (perhaps 10 times). It doesn't care what order the objects are in.
My question is this: When I retrieve a very large set of objects from db4o, are they all loaded into memory, or are they pulled off disk as needed?
Clearly, pulling hundreds of millions of 2 KB objects into memory won't be practical on the type of servers I'm working with (they have about 19 GB of RAM).
Is db4o a wise choice here?
db4o's activation mechanism lets you control which objects are loaded into memory. For complex object graphs you should probably use transparent activation, where db4o loads an object into memory as soon as it is used. However, db4o doesn't explicitly remove objects from memory; it just keeps a weak reference to every loaded object. If an object is reachable, it stays in memory (just like any other object). Optionally, you can explicitly deactivate an object.
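For illustration, here is a minimal Java sketch of such a training pass, assuming a hypothetical TrainingExample class and database file name. query(Class) returns a lazy ObjectSet, so objects are read from disk and activated as the iterator reaches them, and each one is explicitly deactivated after processing so its data no longer pins memory:

    import com.db4o.Db4oEmbedded;
    import com.db4o.ObjectContainer;
    import com.db4o.ObjectSet;

    public class TrainingPass {
        public static void main(String[] args) {
            ObjectContainer db = Db4oEmbedded.openFile(
                    Db4oEmbedded.newConfiguration(), "training.db4o");
            try {
                // Lazy result set: objects are activated one by one
                // as the iterator reaches them, not all up front.
                ObjectSet<TrainingExample> results = db.query(TrainingExample.class);
                for (TrainingExample example : results) {
                    train(example); // hypothetical training step
                    // Explicitly deactivate so the object's fields can be
                    // garbage-collected even if a reference to it survives.
                    db.deactivate(example, Integer.MAX_VALUE);
                }
            } finally {
                db.close();
            }
        }

        private static void train(TrainingExample example) {
            // learning-algorithm update goes here
        }
    }

If you use transparent activation instead, you would add com.db4o.ta.TransparentActivationSupport to the configuration and have your persistent classes implement Activatable; the sketch above sticks with the simpler default activation plus explicit deactivation.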
I just want to add a few notes on the scalability of db4o. db4o was built for embedding in applications and devices; it was never designed for large datasets, so it has its limitations.
You should therefore probably look at larger object databases, like VOD. Or maybe a graph database like Neo4j would also be a good choice for your problem?