I have an absolutely huge array (~10 million objects, each of which holds substantial data). Destroying this object causes a long lag on the main thread of roughly 5 seconds. While this is just a test case for huge data, I'd like to be able to either A) better time its destruction or B) push it off onto some background thread to die. I don't know much about the runtime cost of releasing memory under ARC, but I would like a better solution than just stalling the main thread for 5 seconds.
So the question is: how do I destroy VERY large objects without a long deallocation wait on the main thread? I am using ARC, and dealloc is being called at a reasonable time (when I set the property to nil). Has anyone else dealt with this? Is there a design principle or some other strategy for circumstances like this?
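To put a rough number on the stall outside of Instruments, a simple timing sketch like the following shows it (localHugeObject is the property that owns the array, as used later in this post; the measurement approach is just CFAbsoluteTimeGetCurrent around the release):

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
self.localHugeObject = nil; // dropping the last strong reference runs the whole dealloc cascade here, on the main thread
NSLog(@"teardown took %.2f s", CFAbsoluteTimeGetCurrent() - start);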
Here is what I'm looking at during profiling:
I was able to get things working, with the release happening on a background thread, by doing something like:
__block MyHugeObject* lastResults = self.localHugeObject; // retain it for the block
self.localHugeObject = nil; // clear the local copy
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    lastResults = nil; // the last strong reference is released here, on a background thread
});
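A slightly simpler variant of the same idea should also work (a sketch, not something I've benchmarked against the above): let the block capture the object strongly without __block, so the last strong reference is dropped when the block is destroyed after it runs on the background queue. The name doomed is just illustrative:

MyHugeObject* doomed = self.localHugeObject; // take over the last strong reference
self.localHugeObject = nil; // the main thread no longer owns it
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    (void)doomed; // referencing it makes the block capture (and later release) it off the main thread
});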