Is the underlying bit representation of an `std::array<T,N> v` and a `T u[N]` the same?

In other words, is it safe to copy `N*sizeof(T)` bytes from one to the other, either through `reinterpret_cast` or `memcpy`?
Edit: For clarification, the emphasis is on *same bit representation* and `reinterpret_cast`.

For example, let's suppose I have these two classes over some trivially copyable type `T`, for some `N`:
```cpp
struct VecNew {
    std::array<T, N> v;
};

struct VecOld {
    T v[N];
};
```
And there is the legacy function

```cpp
T foo(const VecOld& x);
```
If the representations are the same, then this call is safe and avoids copying:

```cpp
VecNew x;
foo(reinterpret_cast<const VecOld&>(x));
```
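For completeness, there is an alternative that does not rely on the two layouts matching: copying the elements through `data()` with `memcpy` is well-defined for trivially copyable `T`, at the cost of an actual copy. A minimal sketch, where `foo` is a stand-in for the legacy function and `T`/`N` are picked for illustration:

```cpp
#include <array>
#include <cstddef>
#include <cstring>

constexpr std::size_t N = 4;
using T = int;

struct VecOld { T v[N]; };

// Stand-in for the legacy function.
T foo(const VecOld& x) { return x.v[0]; }

// Guaranteed-safe route: copy the element bytes instead of
// reinterpreting the whole object. Valid because T is trivially
// copyable and [data(), data() + size()) is a contiguous range.
T call_foo(const std::array<T, N>& a) {
    VecOld tmp;
    std::memcpy(tmp.v, a.data(), N * sizeof(T));
    return foo(tmp);
}
```

This sidesteps the layout question entirely, but of course defeats the purpose if the goal was to avoid the copy.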
I say yes (but the standard does not guarantee it).
According to [array]/2:

> An array is an aggregate ([dcl.init.aggr]) that can be list-initialized with up to N elements whose types are convertible to T.
And [dcl.init.aggr]:

> An aggregate is an array or a class (Clause [class]) with
> - no user-provided, explicit, or inherited constructors ([class.ctor]),
> - no private or protected non-static data members (Clause [class.access]),
> - no virtual functions ([class.virtual]), and
> - no virtual, private, or protected base classes ([class.mi]).
In light of this, "can be list-initialized" is only possible if there are no other members at the beginning of the class and no vtable.
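The aggregate claim can be checked directly against an implementation (this sketch assumes C++17 for `std::is_aggregate_v`):

```cpp
#include <array>
#include <type_traits>

// std::array must be an aggregate: list-initializable with up to N
// elements, so it cannot hide a vtable or extra leading members.
static_assert(std::is_aggregate_v<std::array<int, 3>>,
              "std::array is required to be an aggregate");

// Aggregate (list) initialization, as [array]/2 requires.
std::array<int, 3> a{1, 2, 3};
```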
Then, `data()` is specified as:

```cpp
constexpr T* data() noexcept;
```

> Returns: A pointer such that [data(), data() + size()) is a valid range, and data() == addressof(front()).
The standard basically wants to say "it returns an array" but leaves the door open for other implementations.
The only plausible alternative implementation is a structure with N individual element members, in which case you can run into aliasing problems. But in my view this approach adds nothing but complexity: there is nothing to gain by unrolling an array into a struct.
So it makes no sense not to implement `std::array` as an array.
But a loophole does exist.