I'm trying to understand the practical utility of the G9 format specifier in C# when round-tripping floating-point numbers. The book C# 12 in a Nutshell mentions that G9 is helpful to avoid precision loss when converting a float to a string and back to a float (page 324). But I can't find an example where G9 formatting succeeds in preserving equality while default formatting would fail. For example:
using System;

class Program
{
    static void Main()
    {
        float originalNumber = 0.1f;

        string defaultFormatted = originalNumber.ToString();
        float defaultRoundTrip = float.Parse(defaultFormatted);

        Console.WriteLine("Default Formatting:");
        Console.WriteLine("Original: " + originalNumber);
        Console.WriteLine("Formatted: " + defaultFormatted);
        Console.WriteLine("Round-trip Equal: " + (originalNumber == defaultRoundTrip));

        string g9Formatted = originalNumber.ToString("G9");
        float g9RoundTrip = float.Parse(g9Formatted);

        Console.WriteLine("\nG9 Formatting:");
        Console.WriteLine("Original: " + originalNumber);
        Console.WriteLine("Formatted: " + g9Formatted);
        Console.WriteLine("Round-trip Equal: " + (originalNumber == g9RoundTrip));
    }
}
This gives me:
Default Formatting:
Original: 0.1
Formatted: 0.1
Round-trip Equal: True
G9 Formatting:
Original: 0.1
Formatted: 0.100000001
Round-trip Equal: True
I was expecting something along these lines:
Default Formatting:
Original: 0.1
Formatted: 0.100000000001
Round-trip Equal: False
G9 Formatting:
Original: 0.1
Formatted: 0.1
Round-trip Equal: True
Your observations are correct. Calling Single.ToString without any arguments can indeed round-trip the number. From the documentation of ToString():

The ToString() method formats a Single value in the default ("G", or general) format of the current culture.

And from Standard numeric format strings, for float, "G" on its own, without a precision specifier, uses as much precision as needed to round-trip the number:
If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
Numeric Type | Default Precision
---|---
Single | Smallest round-trippable number of digits to represent the number (in .NET Framework, G7 is the default)
Note that it does say G7 is the default on .NET Framework, which is likely why the book's author made that remark about using G9: on .NET Framework, the parameterless ToString emits only 7 significant digits, and that is not always enough to recover the original float.
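You can reproduce the failing case you were expecting on any runtime by forcing the old 7-digit behavior with an explicit "G7". Here is a minimal sketch; (float)Math.PI is just a convenient choice of a float whose nearest 7-digit decimal parses back to a different float, and, like your code, it assumes a culture that uses "." as the decimal separator:

using System;

class Program
{
    static void Main()
    {
        // (float)Math.PI is 3.14159274f. Rounded to 7 significant digits it
        // becomes "3.141593", which parses back to a *different* float.
        float original = (float)Math.PI;

        string g7 = original.ToString("G7");
        string g9 = original.ToString("G9");

        Console.WriteLine("G7: " + g7 + " -> Round-trip Equal: " + (original == float.Parse(g7)));
        Console.WriteLine("G9: " + g9 + " -> Round-trip Equal: " + (original == float.Parse(g9)));
    }
}

which should print

G7: 3.141593 -> Round-trip Equal: False
G9: 3.14159274 -> Round-trip Equal: True

This is exactly the situation the book warns about: on .NET Framework the parameterless ToString behaves like "G7", so "G9" is needed to guarantee the round trip.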
See also this very old answer that quotes the documentation for Single.ToString() as it read at the time, which differs from what it says today:
By default, the return value only contains 7 digits of precision although a maximum of 9 digits is maintained internally.
If you require more precision, specify format with the "G9" format specification, which always returns 9 digits of precision, or "R", which returns 7 digits if the number can be represented with that precision or 9 digits if the number can only be represented with maximum precision.
So this purpose of "G9" was, at one point, explicitly documented.
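If you want to convince yourself that the modern documentation is right, you can brute-force it: on .NET Core 3.0 and later, the parameterless ToString should round-trip every finite float. A quick sketch (the class name and iteration count are arbitrary; it pins the invariant culture so parsing does not depend on your locale):

using System;
using System.Globalization;

class RoundTripCheck
{
    static void Main()
    {
        var rng = new Random(12345);
        var bytes = new byte[4];
        int failures = 0;

        for (int i = 0; i < 1_000_000; i++)
        {
            // Generate a random bit pattern and reinterpret it as a float.
            rng.NextBytes(bytes);
            float value = BitConverter.ToSingle(bytes, 0);
            if (float.IsNaN(value) || float.IsInfinity(value))
                continue; // NaN never compares equal; skip non-finite values

            string text = value.ToString(CultureInfo.InvariantCulture);
            if (float.Parse(text, CultureInfo.InvariantCulture) != value)
                failures++;
        }

        // Expected to print 0 on .NET Core 3.0+; on .NET Framework, where the
        // default is G7, a substantial fraction of values should fail.
        Console.WriteLine("Failures: " + failures);
    }
}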