Consider the following piece of code, which is perfectly acceptable to a C++11 compiler:
#include <array>
#include <iostream>

auto main() -> int {
    std::array<double, 0> A;
    for (auto i : A) std::cout << i << std::endl;
    return 0;
}
According to the standard § 23.3.2.8 [Zero sized arrays]:

1. Array shall provide support for the special case N == 0.

2. In the case that N == 0, begin() == end() == unique value. The return value of data() is unspecified.

3. The effect of calling front() or back() for a zero-sized array is undefined.

4. Member function swap() shall have a noexcept-specification which is equivalent to noexcept(true).
As displayed above, zero-sized std::arrays are perfectly allowable in C++11. This is in contrast with zero-sized raw arrays (e.g., int A[0];), which are explicitly forbidden by the standard, yet are allowed by some compilers (e.g., GCC) as an extension, at the cost of undefined behaviour.
Considering this "contradiction", I have the following questions:

Why did the C++ committee decide to allow zero-sized std::arrays?

Are there any valuable uses?
If you have a generic function, it is bad if that function randomly breaks for special parameters. For example, let's say you have a template function that takes N random elements from a vector:
template <typename T, size_t N>
std::array<T, N> choose(const std::vector<T>& v) {
    ...
}
Nothing is gained if this causes undefined behaviour or a compiler error when N for some reason turns out to be zero.
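One way such a function might be filled in (a sketch only; the sampling-with-replacement strategy and the length check are illustrative, not part of the original):

```cpp
#include <array>
#include <cstddef>
#include <random>
#include <stdexcept>
#include <vector>

// Sketch: return N elements drawn at random (with replacement) from v.
// Works unchanged for N == 0: the loop body never runs and an empty
// std::array<T, 0> is returned -- no special case needed.
template <typename T, std::size_t N>
std::array<T, N> choose(const std::vector<T>& v) {
    if (v.empty() && N > 0)
        throw std::length_error("cannot choose from an empty vector");
    std::array<T, N> result{};
    std::mt19937 gen{std::random_device{}()};
    for (std::size_t i = 0; i < N; ++i) {
        std::uniform_int_distribution<std::size_t> dist(0, v.size() - 1);
        result[i] = v[dist(gen)];
    }
    return result;
}
```

With zero-sized std::array allowed, choose<int, 0>(std::vector<int>{}) simply compiles and returns an empty array, which is exactly the uniform behaviour the paragraph above argues for.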
For raw arrays, a reason behind the restriction is that you don't want types with sizeof(T) == 0: they lead to strange effects in combination with pointer arithmetic. An array with zero elements would have size zero if you didn't add any special rules for it.
But std::array<> is a class, and class objects always have size > 0. So you don't run into those problems with std::array<>, and a consistent interface without an arbitrary restriction on the template parameter is preferable.