It's well documented that [T; n] can coerce to [T]. The following code is also well-formed:
fn test() {
    let _a: &[i32] = &[1, 2, 3];
}
Here we have that &[T; n] is coerced to &[T].
Is it true that for all types T and U, if T coerces to U, then &T coerces to &U? This isn't documented explicitly in the reference, at least as far as I can see.
No, because adding one more layer of & causes it to fail:
fn oops() {
    let a: &[i32; 3] = &[1, 2, 3];
    let _b: &&[i32] = &a;
}
error[E0308]: mismatched types
--> src/lib.rs:8:23
|
8 | let _b: &&[i32] = &a;
| ------- ^^ expected slice `[i32]`, found array `[i32; 3]`
| |
| expected due to this
|
= note: expected reference `&&[i32]`
found reference `&&[i32; 3]`
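One way to make the nested case compile (a sketch; the function name is just for illustration) is to perform the unsized coercion at the inner reference first, and only then take the outer reference, since the coercion happens one pointer level deep at most:

```rust
fn works() {
    let a: &[i32; 3] = &[1, 2, 3];
    // Coerce the inner reference first: &[i32; 3] -> &[i32].
    let b: &[i32] = a;
    // Now the types line up exactly, so no coercion is needed here.
    let _c: &&[i32] = &b;
}
```

In other words, the compiler will coerce &[i32; 3] to &[i32], but it will not reach through an outer & to rewrite the type underneath it.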
Further, it is not the case that [T; n] coerces to [T] in the same sense that &[T; n] coerces to &[T]. The documentation you linked describes the two traits involved in unsized coercions: Unsize and CoerceUnsized. [T; n] implements Unsize<[T]>, and therefore &[T; n] implements CoerceUnsized<&[T]>; these two statements express essentially the same relationship, and your code effectively demonstrates both. It would not be possible to write a function that coerces a [T; n] value to [T] without going through a reference (or some other kind of pointer), because unsizing coercions only take place behind a pointer: [T] is unsized, so it cannot be stored or passed by value.
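The same Unsize<[T]> relationship powers coercions behind other pointer types as well, since CoerceUnsized is implemented for Box, Rc, and friends, not just plain references. A minimal sketch:

```rust
use std::rc::Rc;

fn main() {
    // Box<[i32; 3]> coerces to Box<[i32]> via CoerceUnsized.
    let boxed: Box<[i32]> = Box::new([1, 2, 3]);
    // Rc<[i32; 3]> coerces to Rc<[i32]> the same way.
    let rc: Rc<[i32]> = Rc::new([1, 2, 3]);
    assert_eq!(boxed.len(), 3);
    assert_eq!(rc.len(), 3);
}
```

In every case the unsized type [i32] lives behind exactly one pointer, which is what makes the coercion possible.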