When creating a const array, there seem to be two ways to do this, `A` and `B`:

```rust
const A: &[i32] = &[1, 2, 3];
const B: [i32; 3] = [1, 2, 3];
```
They both appear to work identically. I prefer `A` since I don't need to hardcode the length, but is there any benefit to using `B`? Is there even any difference in how they're compiled?
> They both appear to work identically.

Well, they have different types, so I'm not sure how you can say that (other than because arrays such as `B` coerce to slices such as `A`)?
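The coercion can be seen in a minimal sketch (the `sum` helper here is illustrative, not from the question): a function taking `&[i32]` accepts `A` directly, while `B` must be borrowed, at which point `&[i32; 3]` coerces to `&[i32]`.

```rust
const A: &[i32] = &[1, 2, 3];
const B: [i32; 3] = [1, 2, 3];

// Illustrative helper: accepts any shared slice of i32s.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    assert_eq!(sum(A), 6);  // `A` is already a `&[i32]`
    assert_eq!(sum(&B), 6); // `&B` coerces from `&[i32; 3]` to `&[i32]`
}
```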
They also have different sizes: as a slice reference, `A` is two pointers wide (a data pointer plus a length); whereas, as an array of 3 `i32` elements, `B` is 12 bytes wide.
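A quick sanity check of those sizes (assuming the usual layout, where a slice reference is a pointer plus a `usize` length — 16 bytes on a 64-bit target):

```rust
use std::mem::{size_of, size_of_val};

const B: [i32; 3] = [1, 2, 3];

fn main() {
    // A slice reference is a (data pointer, length) pair: two usizes.
    assert_eq!(size_of::<&[i32]>(), 2 * size_of::<usize>());
    // The array type is just its elements: 3 * 4 bytes.
    assert_eq!(size_of::<[i32; 3]>(), 12);
    assert_eq!(size_of_val(&B), 12);
}
```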
Where is the data that `A` references? Well, its definition causes an array of 3 `i32` elements to be allocated in static memory (much like declaring a `static` would do), and every use of it results in a reference to that region of memory. On the other hand, uses of `B` conceptually result in an inline array being copied onto the stack at every usage site.
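That analogy can be made explicit in a sketch (the names `DATA` and `A_EXPLICIT` are illustrative): the data behind the const slice reference lives in static memory, and uses of the const copy only the wide pointer, never the elements, whereas each use of `B` conceptually materialises a fresh copy of all 12 bytes.

```rust
// Illustrative: spelling out the static backing that `A`'s
// definition implicitly creates.
static DATA: [i32; 3] = [1, 2, 3];
const A_EXPLICIT: &[i32] = &DATA;

const B: [i32; 3] = [1, 2, 3];

fn main() {
    // Every use of `A_EXPLICIT` refers to the same static bytes;
    // only the (pointer, length) pair is copied at the use site.
    assert_eq!(A_EXPLICIT.as_ptr(), DATA.as_ptr());

    // Each use of `B` conceptually copies the whole array.
    let b1 = B;
    let b2 = B;
    assert_eq!(b1, b2); // equal values, but independent copies
}
```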
Therefore accessing the data in `A` will entail some runtime indirection (that might well be removed during compilation by an optimisation pass), whereas accessing the data in `B` will be direct.