Why is it that when I save a value of, say, 40.54 in SQL Server to a column of type REAL, it comes back as something more like 40.53999878999 instead of 40.54? I've seen this a few times but have never figured out quite why it happens. Has anyone else experienced this issue, and if so, what causes it?
Have a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Floating-point numbers in computers don't represent decimal fractions exactly. Instead, they represent binary fractions. Most decimal fractions don't have an exact representation as a binary fraction, just as 1/3 has no exact decimal representation, so the stored value is rounded to the nearest binary fraction that does fit (0.54 happens to be an infinitely repeating fraction in binary). When that rounded binary fraction is translated back into a decimal fraction, you get the effect you describe.
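You can see this from within SQL Server itself. Here's a minimal sketch (the variable name and the DECIMAL(20, 10) target are just for illustration); casting the REAL value to a wider DECIMAL exposes the binary approximation that was actually stored:

```sql
-- REAL is a 32-bit binary float, so 40.54 is stored as the
-- nearest representable binary fraction, not the exact value.
DECLARE @r REAL = 40.54;

-- Casting to a wider DECIMAL exposes the stored approximation.
SELECT @r                           AS displayed_value,
       CAST(@r AS DECIMAL(20, 10)) AS stored_approximation;
-- stored_approximation comes back close to, but not exactly, 40.54
-- (something like 40.5400009155; the exact digits depend on how the
-- conversion rounds and how the client displays the value).
```

The displayed column may still show "40.54" because the client rounds short float values for display; the cast is what reveals the digits that were actually stored.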
For storing money values, SQL databases normally provide a DECIMAL type that stores decimal digits exactly. This format is slightly less efficient for computers to work with, but it is exactly what you want when you need to avoid decimal rounding errors.
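For example, the same value round-trips exactly through a DECIMAL variable (the precision and scale here are arbitrary choices for illustration):

```sql
-- DECIMAL(10, 2) stores up to 10 significant digits, 2 of them
-- after the decimal point, as exact decimal digits.
DECLARE @price DECIMAL(10, 2) = 40.54;

SELECT @price AS exact_value;  -- returns exactly 40.54
```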