Being new to fixed point types in Ada, it surprised me to learn that the default value of 'Small is a power of 2 less than or equal to the specified delta. Here is a short snippet to introduce the problem:
with Ada.Text_IO; use Ada.Text_IO;
procedure Main is
   type foo is delta 0.1 range 0.0 .. 1.0;
   x : foo := foo'Delta;
begin
   Put (x'Image);
   while True loop
      x := x + foo'Delta;
      Put (x'Image);
   end loop;
end Main;
The output shows that 'Small is indeed the largest power of 2 smaller than 0.1 (namely 2.0**(-4) = 0.0625), since some printed values appear twice:
0.1 0.1 0.2 0.3 0.3 0.4 0.4 0.5 0.6 0.6 0.7 0.8 0.8 0.9 0.9 1.0
raised CONSTRAINT_ERROR : main.adb:9 range check failed
If we really wanted 0.1 as the delta, we could say so:
real_delta : constant := 0.1;
type foo is delta real_delta range 0.0 .. 1.0
  with Small => real_delta;
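For reference, here is the complete variant with that aspect applied (assuming I have the syntax right); if I understand the semantics correctly, every value of foo is now an exact multiple of 0.1, so each value should print exactly once before the range check fails:

with Ada.Text_IO; use Ada.Text_IO;
procedure Main is
   real_delta : constant := 0.1;
   type foo is delta real_delta range 0.0 .. 1.0
     with Small => real_delta;
   x : foo := foo'Delta;
begin
   Put (x'Image);
   while True loop
      x := x + foo'Delta;  --  each step is now exactly 0.1
      Put (x'Image);
   end loop;
end Main;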
If optimization were the only use-case for this difference, it could have been a boolean attribute, or even just a warning "selected delta is not a power of two (suggest 2**-4 instead)". Could there be any reason to specify both as separate values, such as:
type foo is delta 0.1 range 0.0 .. 1.0
  with Small => 0.07;
x : foo := 0.4 + 0.4; -- equals 0.7 rather than 0.8
This seems only to confuse the poor reader who encounters it later. The following example is taken from John Barnes' Programming in Ada 2012, section 17.5 on page 434. He doesn't explain why the delta is a much larger value that is not even a multiple of the actual 'Small used.
π : constant := Ada.Numerics.π;
type Angle is delta 0.1 range -4*π .. 4*π;
for Angle'Small use π * 2.0**(-13);
The only difference I see is that 'Image prints just one digit of precision now. Is that the only difference?
What about writing for foo'Small use foo'Delta? I encountered code that does exactly this, without the intermediate constant:
type foo is delta 0.1 range 0.0..1.0;
for foo'Small use foo'Delta;
but GNAT complains that foo is frozen immediately after its declaration:
main.adb:6:04: representation item appears too late
Was that changed in some version of Ada? Should it be valid Ada 2012?
Yes. To prevent underflow and/or to avoid wasting bits.
Consider multiplying 0.2 by 0.2 and then, at some later point, dividing by 0.1. The correct answer is 0.4. However, if your 'Small is the same as your 'Delta (i.e. 0.1), then when 0.2 is squared the true value 0.04 underflows and is computed as zero. When you then divide by 0.1 you get zero as the answer instead of 0.4.
If on the other hand you have specified your 'Small as 0.01, the answer is computed correctly. The intermediate result would still read as zero if you inspected it at the declared precision, because 0.04 is closest to zero when expressed to the nearest 0.1, but the extra precision is retained internally, so when that value is then divided by 0.1 the correct answer, 0.4, emerges.
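Here is a rough sketch of that scenario (the type names are mine, and exactly how the intermediate conversion rounds is implementation-defined, but the contrast should be visible):

with Ada.Text_IO; use Ada.Text_IO;
procedure Underflow_Demo is
   --  Hypothetical types: same delta, different 'Small.
   type Coarse is delta 0.1 range 0.0 .. 1.0 with Small => 0.1;
   type Fine   is delta 0.1 range 0.0 .. 1.0 with Small => 0.01;
   A : constant Coarse := 0.2;
   B : constant Fine   := 0.2;
begin
   --  0.04 is not a multiple of 0.1, so the intermediate collapses
   --  (to 0.0 here, assuming truncation) and the quotient is 0.0.
   Put_Line (Coarse'Image (Coarse (Coarse (A * A) / 0.1)));
   --  With 'Small = 0.01 the intermediate 0.04 is representable,
   --  so the quotient comes out as 0.4.
   Put_Line (Fine'Image (Fine (Fine (B * B) / 0.1)));
end Underflow_Demo;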
Consider a 16 bit value whose range is as specified in your example (0.0 .. 1.0): with 'Small equal to the delta of 0.1 there are only 11 possible values. These can be represented in 4 bits, which leaves the other 12 bits completely wasted. Why not use them to hold extra precision, so that if any arithmetic does happen the answer will be accurate? I have noticed that a lot of engineers battle with themselves over these kinds of things and find it difficult to justify the extra precision unless they can see a reason why a calculation might be needed. I think that's the wrong question to ask. A better question is: are you going to waste the other bits? It costs nothing to make use of them, you future-proof yourself against any unforeseen calculations on the type, and you avoid a bug.
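As a sketch of the size argument (the type names are mine, and the exact attribute values will vary by compiler): keeping the published delta at 0.1 while choosing a finer 'Small lets the otherwise unused bits carry real precision:

with Ada.Text_IO; use Ada.Text_IO;
procedure Size_Demo is
   type Coarse is delta 0.1 range 0.0 .. 1.0 with Small => 0.1;
   type Packed is delta 0.1 range 0.0 .. 1.0 with Small => 2.0**(-14);
begin
   Put_Line ("Coarse'Size =" & Integer'Image (Coarse'Size));  --  a handful of bits
   Put_Line ("Packed'Size =" & Integer'Image (Packed'Size));  --  close to a full 16-bit word
end Size_Demo;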
Here's how the bug happens. Engineer A thinks "I can't see any reason why this measurement would ever be involved in a calculation it just gets logged and reported, I'll set 'Small to 'Delta". Unfortunately Engineer A is only thinking about the current project. 5 years later Engineer A has left the company and Engineer B is asked to add a feature / re-use the existing code on a new project / turn it into a product, and Engineer C then ends up having to do some arithmetic on it... not realising that this 16 bit value actually only has 4 bits of precision. Bang. Mr Barnes is clearly well aware of these kinds of issues!
One final point - going back to your initial investigation - well done. Yes, that's the way to do it. In fact, because of these kinds of issues (i.e. the very non-intuitive default behaviour) Ada also has decimal fixed point types, so you can write e.g.
type Foo2 is delta 0.1 digits 2;
which specifies a fixed point type with an actual delta of 0.1 (not a binary fraction smaller than it), and so behaves much more intuitively. Specifying digits 2 gives a range of -9.9 to +9.9, digits 1 gives -0.9 to +0.9, and so on. The delta can be any power of ten.
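A minimal sketch of that, stepping by 0.1 the same way as your original loop:

with Ada.Text_IO; use Ada.Text_IO;
procedure Decimal_Demo is
   --  Decimal fixed point: 'Small is exactly the stated delta of 0.1.
   type Foo2 is delta 0.1 digits 2;   --  range -9.9 .. 9.9
   X : Foo2 := 0.0;
begin
   while X < 1.0 loop
      X := X + 0.1;
      Put (Foo2'Image (X));   --  0.1, 0.2, ... 1.0; no repeated values
   end loop;
   New_Line;
end Decimal_Demo;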
I hope this is useful.