NSDecimalNumber *minVal = [NSDecimalNumber decimalNumberWithString:@"0.0"];
NSDecimalNumber *maxVal = [NSDecimalNumber decimalNumberWithString:@"111.1"];
NSDecimalNumber *valRange = [maxVal decimalNumberBySubtracting:minVal];
CGFloat floatRange = [valRange floatValue];
NSLog(@"%f", floatRange); //prints 111.099998
Isn't NSDecimalNumber supposed to do base-10 arithmetic correctly?
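A quick check suggests the decimal arithmetic itself is exact, and the rounding only appears once the result is converted to a binary floating-point type. Here is a minimal sketch (assuming a plain Foundation command-line program) that prints the NSDecimalNumber result directly alongside its floatValue:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSDecimalNumber *minVal = [NSDecimalNumber decimalNumberWithString:@"0.0"];
        NSDecimalNumber *maxVal = [NSDecimalNumber decimalNumberWithString:@"111.1"];
        NSDecimalNumber *valRange = [maxVal decimalNumberBySubtracting:minVal];

        NSLog(@"%@", valRange);              // prints 111.1 -- the decimal result is exact
        NSLog(@"%f", [valRange floatValue]); // prints 111.099998 -- float cannot represent 111.1
    }
    return 0;
}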
OK, just declaring CGFloat aNumber = 111.1; already shows 111.099998
in the debugger, before any operation has been performed on it. So the precision is lost the moment the value is assigned to the less precise data type, regardless of any arithmetic that happens later.
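To illustrate the point, here is a small sketch (again assuming a Foundation command-line program): the value is already rounded when it is stored in a binary float or double, so NSDecimalNumber can only preserve the exact value if you stay within it and format from it directly.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        float  f = 111.1f; // nearest float to 111.1
        double d = 111.1;  // nearest double to 111.1

        NSLog(@"%.6f", f);  // 111.099998
        NSLog(@"%.15f", d); // 111.099999999999994 (closer, but still not exact)

        // Staying in NSDecimalNumber keeps the exact decimal value.
        NSDecimalNumber *exact = [NSDecimalNumber decimalNumberWithString:@"111.1"];
        NSLog(@"%@", [exact stringValue]); // 111.1
    }
    return 0;
}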