In Ada you can define so-called 'mod' (modular) and 'range' types:
type Unsigned_n is mod 2**n;
type Range_Type is range -5 .. 10;
How are these implemented at the language level? What kind of performance penalty do you experience when performing operations on these types?
It's not clear what you mean by 'at the language level'. At the Ada level, they just are! And at the machine level, they're implemented how you'd expect.
For modular types, if you use a power of 2 for the modulus, the compiled code uses masks; otherwise, there will be tests.
type Modular is mod 42;
...
procedure T (M : in out Modular) is
begin
M := M + 1;
end T;
translates (x86_64, -O2) to
_foo__t:
LFB2:
leal 1(%rdi), %eax
cmpb $40, %dil
leal -41(%rdi), %edx
cmovg %edx, %eax
ret
I don't write assembler nowadays, but that doesn't look too bad (and, in a language that didn't support modular types, you'd have to write something similar yourself if the problem demanded it).
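As a rough illustration of "something similar yourself", here is a hedged C sketch of what the generated code above amounts to; the function names are invented, and the mask case assumes a power-of-two modulus as described earlier:

```c
/* Hypothetical C equivalents of incrementing Ada modular values.
   For a power-of-two modulus the compiler can use a single mask;
   otherwise it emits a compare/select, as in the assembly above. */

/* type Byte is mod 2**8;  --  power of two: one AND instruction */
unsigned increment_byte(unsigned m)
{
    return (m + 1u) & 0xFFu;
}

/* type Modular is mod 42;  --  not a power of two: compare/select */
unsigned increment_mod42(unsigned m)
{
    unsigned n = m + 1u;
    return (n == 42u) ? 0u : n;  /* matches the cmpb/cmov pair */
}
```

Both branches are branch-free on most targets, which is why the generated code "doesn't look too bad".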
For integer types, the implementation is again as you'd expect, except of course that assigning a value to a variable involves a constraint check (unless the compiler can prove that there's no need).
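That constraint check boils down to a pair of comparisons. A minimal C sketch, assuming the `Range_Type is range -5 .. 10` declaration from the question (the function name and the abort-on-failure behaviour are illustrative; Ada raises Constraint_Error instead):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the check a compiler inserts when assigning to a
   variable of "type Range_Type is range -5 .. 10". */
int checked_assign(int value)
{
    if (value < -5 || value > 10) {            /* constraint check */
        fprintf(stderr, "constraint failed: %d\n", value);
        abort();                               /* ~ Constraint_Error */
    }
    return value;  /* in range: the assignment is a plain store */
}
```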
But really, for most uses, you don't write these increment operations yourself; if you need to loop over all values of the type you can say
for J in Modular loop
or, if you have declared Arr : array (Range_Type) of Foo,
for J in Arr'Range loop
and there's no need to check the validity of J, and therefore no performance penalty.
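A hedged C analogue of the loop case, with invented names: because the loop variable ranges exactly over the array's declared index range (-5 .. 10 here, matching Range_Type above), every access is known in advance to be in bounds, so no per-iteration check is needed.

```c
#define LO (-5)
#define HI 10

/* Analogue of "for J in Arr'Range loop": the bounds come straight
   from the array's index type, so arr[j - LO] is provably valid. */
int sum_all(const int arr[HI - LO + 1])
{
    int sum = 0;
    for (int j = LO; j <= HI; j++)
        sum += arr[j - LO];  /* no bounds check emitted */
    return sum;
}
```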
It's always possible to suppress constraint checks (in GNAT, -gnatp suppresses all checks); but it's a bit like taking the seatbelt off as soon as you leave the driveway!