Tags: c#, performance, sql-server-ce, compact-framework

Compact-Framework SqlDataReader vs SqlDataAdapter


I'm working on an older app that works with a handheld scanner device running SQL Server CE and .Net 3.5.

My question concerns the performance of SqlCeDataReader vs. SqlCeDataAdapter. I have read plenty of material stating that the DataReader is much faster than the DataAdapter, since it carries far less overhead and reads one record at a time directly, instead of doing a bulk "Fill" into a DataTable or DataSet.

The existing app uses the DataAdapter, and I have been working on a retrofit that uses the DataReader instead, but I am finding that on the handheld device the DataReader is SLOWER, which completely puzzles me.

In summary, here is a simplified version, along the lines of what other posts have shown.

Creation of the table... 45 fields, not just a 3- or 4-field sample:

// Showing more fields just to show variety of columns of not just
// strings, but of other types too   
CREATE TABLE MyTest 
(
   MyTestID     int IDENTITY(1,1) CONSTRAINT MyTestID PRIMARY KEY,
   Fld1    int,
   Fld2    int, 
   ... (more, just stripped for sample)
   fld6    float,
   Fld7    nchar(2),
   Fld8    float,
   Fld9    nchar(2),
   Fld10   int,
   ... (more, just stripped for sample)
   Fld24   DATETIME, 
   Fld25   nchar(1), 
   Fld26   nchar(15), 
   Fld27   nvarchar(15), 
   ... (more, just stripped for sample)
   Fld44   DATETIME, 
   Fld45   int )

After this, I have a class whose most basic form is:

public class CTypedClass1
{
   public int Fld1 { get; set; }
   public int Fld2 { get; set; }
   ... etc..
   public double Fld6 { get; set; }
   public string Fld7 { get; set; }
   public double Fld8 { get; set; }
   ... etc to field 45
}

Then I have two methods that loop over 100 record IDs, each calling a method that runs the query via a DataAdapter vs. a DataReader:

private void ReadMyTestByAdapter()
{
   var cmd = MyConnection.GetSQLDbCommand( "Select * from MyTest where MyTestID = @nMyTestParm" );
   cmd.Parameters.Add( "nMyTestParm", 1 );
   var da = MyConnection.GetSQLDataAdapter();
   da.SelectCommand = cmd;

   for (int i = 1; i <= 100; i++)
   {
      DataTable tmpTbl = new DataTable();
      da.SelectCommand.Parameters[0].Value = i;
      QueryByDataAdapter(da, tmpTbl);
   }
}

private void ReadMyTestByTypedClass()
{
   var cmd = MyConnection.GetSQLDbCommand("Select * from MyTest where MyTestID = @nMyTestParm");
   cmd.Parameters.Add("nMyTestParm", 1);

   for (int i = 1; i <= 100; i++)
   {
      CTypedClass1 tmpRec = new CTypedClass1();
      cmd.Parameters[0].Value = i;
      QueryByStruct(cmd, tmpRec);
   }
}
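
MyConnection.GetSQLDbCommand and MyConnection.GetSQLDataAdapter are helpers from the existing app and are not shown here; purely so the sample reads as self-contained, a hypothetical stand-in might look like this (the class name is from the app, but the body and connection string are assumptions, not the real code):

// Hypothetical stand-in for the app's connection wrapper (not the real code).
// It simply hands out commands and adapters tied to a single SqlCeConnection.
public static class MyConnection
{
   // Path is illustrative only; the real app's .sdf location differs.
   private static readonly SqlCeConnection _conn =
      new SqlCeConnection(@"Data Source=\Program Files\MyApp\MyData.sdf");

   public static SqlCeCommand GetSQLDbCommand(string sql)
   {
      return new SqlCeCommand(sql, _conn);
   }

   public static SqlCeDataAdapter GetSQLDataAdapter()
   {
      return new SqlCeDataAdapter();
   }
}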

Now, the query by DataAdapter, which populates the results into a DataTable:

private void QueryByDataAdapter(SqlCeDataAdapter da, DataTable putInHere )
{
   if (da.SelectCommand.Connection.State != ConnectionState.Open)
      da.SelectCommand.Connection.Open();

   da.Fill(putInHere);

   if (da.SelectCommand.Connection.State == ConnectionState.Open)
      da.SelectCommand.Connection.Close();
}

And, to avoid reflection overhead, the DataReader version reads directly into the properties of the typed class:

private void QueryByStruct(SqlCeCommand cmd, CTypedClass1 curRec)
{
   if (cmd.Connection.State != ConnectionState.Open)
      cmd.Connection.Open();

   using (var reader = cmd.ExecuteReader())
   {
      while (reader.Read())
      {
         curRec.Fld1 = (int)reader["Fld1"];
         curRec.Fld2 = (int)reader["Fld2"];
         // etc with rest of fields to 45
         // all explicitly (typecast) referenced
      }
   }

   if (cmd.Connection.State == ConnectionState.Open)
      cmd.Connection.Close();
}

The actual process reads all 45 fields of the table, and the performance of the two methods is virtually identical: about 9 seconds to read 100 records, each queried individually by ID. The memory used by the typed approach does appear to be lower than that of the DataTable/DataAdapter, which only adds to the confusion: the typed class stores just ONE instance of the values, while the DataTable keeps the entire ItemArray at the row level plus a DataRowVersion for the Current, Default, and Original values, yet the timings come out the same.
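
For anyone wanting to reproduce the comparison, a harness along these lines is all it takes (just a sketch using the two driver methods above, not my exact code; Stopwatch and Debug.WriteLine are both available on CF 3.5):

private void CompareApproaches()
{
   // Sketch only: time each approach with System.Diagnostics.Stopwatch
   // and write the elapsed times to the debug output.
   var sw = System.Diagnostics.Stopwatch.StartNew();
   ReadMyTestByAdapter();
   sw.Stop();
   System.Diagnostics.Debug.WriteLine("DataAdapter: " + sw.ElapsedMilliseconds + " ms");

   sw.Reset();
   sw.Start();
   ReadMyTestByTypedClass();
   sw.Stop();
   System.Diagnostics.Debug.WriteLine("DataReader:  " + sw.ElapsedMilliseconds + " ms");
}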

This just does not make any sense to me, and I hope someone can shed a little light on it.

NOTE: If I do this exact same thing on a desktop machine (against a different database, but with the connection, query, etc. otherwise the same), the typed-class method is about 6 times faster than the DataAdapter method.


Solution

  • Using reader[string] is, in my experience, slow. When I developed for CE on handheld scanners, I used something like the following (don't use the DataAdapter; it is slower):

    private void QueryByStruct(SqlCeCommand cmd, CTypedClass1 curRec)
    {
       if (cmd.Connection.State != ConnectionState.Open)
          cmd.Connection.Open();
    
       // Initialize up front so the variable is definitely assigned even when no row is returned.
       List<object> fields = new List<object>();
    
       using (var reader = cmd.ExecuteReader())
       {
          if (reader.Read())
          {
              var values = new Object[reader.FieldCount];
              reader.GetValues(values);
              fields = values.ToList();
          }
       }
    
       if (cmd.Connection.State == ConnectionState.Open)
          cmd.Connection.Close();
    
       if (fields.Count == 0)
          return;   // no row found for this ID

       curRec.Fld1 = (int)fields.ElementAt(1);   // index 0 is MyTestID, so the data fields start at 1
       curRec.Fld2 = (int)fields.ElementAt(2);
       // etc with rest of fields to 45
       // all explicitly (typecast) referenced
    
    }
    

    I changed the while to an if because it appears you're just overwriting the values for no good reason. GetValues() is, in my experience, really fast. The downside is that you only get the values in column order, with no column types or column names.
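
    A variation that keeps both the column names and the column types (just a sketch along the same lines, not code I've benchmarked here) is to resolve the ordinals once with GetOrdinal and then use the typed getters, which still avoids the per-row reader[string] lookups:

    private void QueryByOrdinals(SqlCeCommand cmd, CTypedClass1 curRec)
    {
       if (cmd.Connection.State != ConnectionState.Open)
          cmd.Connection.Open();

       using (var reader = cmd.ExecuteReader())
       {
          // Resolve column positions once, not on every field access.
          int ordFld1 = reader.GetOrdinal("Fld1");
          int ordFld2 = reader.GetOrdinal("Fld2");
          // ... remaining ordinals up to Fld45

          if (reader.Read())
          {
             curRec.Fld1 = reader.GetInt32(ordFld1);
             curRec.Fld2 = reader.GetInt32(ordFld2);
             // ... remaining fields, using GetDouble / GetString / GetDateTime as appropriate
          }
       }

       if (cmd.Connection.State == ConnectionState.Open)
          cmd.Connection.Close();
    }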

    Also worth a look - Microsoft SQL Server Compact Edition Ultimate Performance Tuning:

    Video Channel

    Microsoft SQL Server Compact Edition Ultimate Performance Tuning Powerpoint Presentation (direct link)