I am retrieving a very large number of records, spread across multiple result sets, from SQL Server in my .NET 3.5 project.
I can't just pull them all into a DataSet and work on them there, since that would take up too much memory. I don't need all the records at once, just one record from the parent table and then its related child records.
I could accomplish this using a DataReader, but my concern is that iterating through all the records will take many hours. That means the DataReader would be left open for all those hours, which I don't think is workable.
How do you go about processing the records when pulling everything into a DataSet is ruled out by the sheer number of records, but a DataReader would have to stay open for many hours?
One way to tackle this problem is to split the data and process it in chunks, keeping track of the last record processed.
That way, if the database connection is lost you don't have to start over; you can resume from the last "checkpoint" (you could even save a checkpoint after every record if that makes sense for your workload).
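A minimal sketch of the checkpoint idea, assuming a hypothetical schema with a `Parent` table keyed by an integer `ParentId` (table and column names are placeholders, not from the question). Keyset paging on the primary key means each chunk opens its own short-lived connection, so nothing stays open for hours:

```csharp
using System.Data.SqlClient;

class ChunkedReader
{
    // Fetches the next chunk of parent ids after the checkpoint and
    // returns the new checkpoint; persist it durably between calls so
    // a dropped connection only costs you the current chunk.
    static int ProcessNextChunk(string connString, int lastProcessedId, int chunkSize)
    {
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            var cmd = new SqlCommand(
                @"SELECT TOP (@chunk) ParentId
                  FROM Parent
                  WHERE ParentId > @last
                  ORDER BY ParentId", conn);
            cmd.Parameters.AddWithValue("@chunk", chunkSize);
            cmd.Parameters.AddWithValue("@last", lastProcessedId);

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    lastProcessedId = reader.GetInt32(0);
                    // ... load and process this parent's child records here ...
                }
            }
        }
        return lastProcessedId; // the caller saves this as the checkpoint
    }
}
```

Paging by `WHERE ParentId > @last ORDER BY ParentId` (rather than by row offset) stays fast even deep into a large table, because each chunk is a simple index seek.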
If processing each loaded record takes a long time, it makes sense to use a DataTable: download a small chunk of data into memory, process it, and continue (this keeps connections short-lived and improves the use of database resources). But if the processing is fast and most of the time is spent downloading the data from the database, a DataReader makes more sense, since it saves memory.
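For the DataTable route, a SqlDataAdapter can pull one parent record plus its children in a single round trip, since `Fill` maps each result set of the batch to its own table. A sketch, again assuming hypothetical `Parent`/`Child` tables joined on `ParentId`:

```csharp
using System.Data;
using System.Data.SqlClient;

class ParentChildLoader
{
    // Loads a single parent record and its related child records into
    // a small in-memory DataSet; the adapter opens and closes the
    // connection itself, so nothing is held open while you process.
    static DataSet LoadOneParentWithChildren(string connString, int parentId)
    {
        using (var conn = new SqlConnection(connString))
        {
            var cmd = new SqlCommand(
                @"SELECT * FROM Parent WHERE ParentId = @id;
                  SELECT * FROM Child  WHERE ParentId = @id;", conn);
            cmd.Parameters.AddWithValue("@id", parentId);

            var adapter = new SqlDataAdapter(cmd);
            adapter.TableMappings.Add("Table", "Parent");  // first result set
            adapter.TableMappings.Add("Table1", "Child");  // second result set

            var ds = new DataSet();
            adapter.Fill(ds);
            return ds; // process ds.Tables["Parent"] / ds.Tables["Child"], then discard
        }
    }
}
```

Because each call holds only one parent and its children in memory, you get DataSet convenience without the memory cost of loading everything at once.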