I have code like this:

    public class Database : System.Data.Entity.DbContext
    {
        public DbSet<Person> Persons { get; set; }
    }

    var db = new Database();
    var persons = db.Persons.Where(...).AsEnumerable();
    foreach (var person in persons)
    {
        // ...
    }
Which of the following scenarios is correct?
- The application fetches all records from the Persons table in one database request, then accesses each record from memory.
- In each step of the foreach loop, the application fetches only one record from the database.
None of these statements is entirely correct. But first, let me say that in your code .AsEnumerable() doesn't actually do anything; you can remove it without changing the logic. IQueryable implements IEnumerable, and foreach executes IEnumerable methods under the hood, so it already addresses the IQueryable "as enumerable".
Now for the reading part. From the application's point of view, the second statement is closest to the truth: it receives the entities in persons one by one, and not until the end of the loop have all persons been made available.
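You can mimic this one-by-one behaviour without EF. This sketch (all names are my own, purely for illustration) uses a C# iterator to show that each element is produced only at the moment foreach asks for it:

    using System;
    using System.Collections.Generic;

    class StreamingDemo
    {
        // Simulates a data source that yields rows lazily, the way an
        // enumerated query hands entities to the application one at a time.
        static IEnumerable<int> ReadRows()
        {
            for (int i = 1; i <= 3; i++)
            {
                Console.WriteLine($"producing row {i}");
                yield return i;
            }
        }

        static void Main()
        {
            foreach (var row in ReadRows())
            {
                // "producing row n" appears immediately before each
                // "consuming row n" - rows are not buffered up front.
                Console.WriteLine($"consuming row {row}");
            }
        }
    }

The producing/consuming lines interleave, which is exactly the deferred, streamed delivery the second statement describes.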
But the actual lower-level reading happens in chunks. As described here, the client stores raw data from the database in network buffers. Depending on the size of these buffers and the size of the result set (i.e. the number and size of the persons), it is possible that all records are read at once; "large" amounts of data will require multiple reads.
For the application, that doesn't really matter though. When it comes to performance optimization, I think the last thing we should think of is playing with network buffer sizes. So to reword the second statement more correctly:

- In each step of the foreach loop, the application materializes one record from the data that has been read from the database.