Tags: c#, c, double-precision

Why does the precision of a double differ between C and C#?


I'm trying to rewrite part of an old system as a C# program. The old programs were written in C. Both programs read blob files into a byte array and fill an object/struct with the data.

In the original C code this is done with fread():

fread(&myStruct, sizeof(MYSTRUCT), 1, data);
fseek(data, 256, 0);   /* 0 == SEEK_SET */
fread(&nextStruct, sizeof(NEXTSTRUCT), 1, data);

In C# a BinaryReader is used:

using (BinaryReader reader = new BinaryReader(stream)) {

  double1 = reader.ReadDouble();
  double2 = reader.ReadDouble();

  reader.BaseStream.Position = 256;

  short1 = reader.ReadInt16();
  // ... and so on ...
}

When running the programs the results are the same most of the time, but sometimes there are small deviations, and for some blobs the deviations are huge.

While debugging the C code with Insight, I saw that the values after extraction from the blob are not the same.

Examples:
  • For double values: in C# I got 212256608402.688, in C 212256608402.68799.
  • For short values: in C# I got 2.337, in C 2.3370000000000001.
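
One way to compare the printed values directly is to format the C# doubles with the round-trip "G17" specifier. A minimal sketch (not part of the original program; double1 is assumed to hold the first value read above, and the exact digits may vary slightly by runtime):

  Console.WriteLine(double1);                 // default formatting, e.g. 212256608402.688
  Console.WriteLine(double1.ToString("G17")); // all 17 significant digits, e.g. 212256608402.68799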

What's the reason for this discrepancy, and is it fixable?
After some methods sum up all the entries (up to a million) and calculate some values, could this lead to an error of 5% or more? Are there other pitfalls to watch for that could cause faulty results?


Solution

  • 2.3370000000000001 == 2.337 and 212256608402.688 == 212256608402.68799. Each pair of strings parses to a bit-for-bit identical double. A double doesn't have enough precision to differentiate those real numbers; they are both rounded to the same value. There is no difference in precision, only a difference in the number of digits printed.
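
A quick way to check that claim (a sketch, not from the original post; default double formatting can vary slightly between .NET runtimes):

using System;
using System.Globalization;

class DoublePrecisionDemo {
  static void Main() {
    // Both textual forms parse to the exact same 64-bit pattern.
    double a = double.Parse("2.337", CultureInfo.InvariantCulture);
    double b = double.Parse("2.3370000000000001", CultureInfo.InvariantCulture);
    Console.WriteLine(BitConverter.DoubleToInt64Bits(a) == BitConverter.DoubleToInt64Bits(b)); // True

    // Only the formatting differs: default vs. 17 significant digits.
    Console.WriteLine(a);                 // 2.337
    Console.WriteLine(a.ToString("G17")); // 2.3370000000000001
  }
}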