These lines in C#
decimal a = 2m;
decimal b = 2.0m;
decimal c = 2.00000000m;
decimal d = 2.000000000000000000000000000m;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
generate this output:
2
2.0
2.00000000
2.000000000000000000000000000
So I can see that creating a decimal variable from a literal allows me to control the precision.
Preserving trailing zeroes like this was introduced in .NET 1.1 for stricter conformance with the ECMA CLI specification.
There is some info on this on MSDN.
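The trailing zeroes are carried in the decimal's scale factor, which you can inspect yourself: decimal.GetBits returns four ints, and bits 16-23 of the fourth one hold the scale (the number of stored decimal places). A minimal sketch:

// Inspect the scale of a decimal via decimal.GetBits.
decimal b = 2.0m;
decimal c = 2.00000000m;

int scaleB = (decimal.GetBits(b)[3] >> 16) & 0xFF;
int scaleC = (decimal.GetBits(c)[3] >> 16) & 0xFF;

Console.WriteLine(scaleB); // 1
Console.WriteLine(scaleC); // 8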
You can adjust the precision (the number of stored decimal places) as follows; both steps are shown in the sketch below:
Math.Round (or Math.Ceiling, Math.Floor, etc.) to decrease precision - e.g. Math.Round(c, 1) gives you b from c.
Multiply by 1.0m, 1.00m, ... (with the number of decimals you want to add) to increase precision - e.g. a * 1.0m gives you b from a.
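For example, reusing a and c from the question (expected output in the comments):

decimal a = 2m;
decimal c = 2.00000000m;

// Decrease the number of stored decimals with Math.Round
decimal fromC = Math.Round(c, 1);
Console.WriteLine(fromC);   // 2.0

// Increase it by multiplying by 1.0m, 1.00m, ...
// (decimal multiplication adds the operands' scales)
decimal fromA = a * 1.0m;
Console.WriteLine(fromA);   // 2.0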