A year or two ago I read about the SFINAE rules in C++. They state, in particular:
The following type errors are SFINAE errors:
...
attempting to create an array of void, array of reference, array of function, array of negative size, array of non-integral size, or array of size zero
I decided to use this rule in my homework, but it didn't work. Gradually reducing the code, I arrived at this small example, which I don't understand:
#include <iostream>

template<int I>
struct Char {};

template<int I>
using Failer = Char<I>[0];

template<int I>
void y(Failer<I> = 0) {
    std::cout << "y<" << I << ">, Failer version\n";
}

template<int I>
void y(int = 0) {
    std::cout << "y<" << I << ">, int version\n";
}

int main() {
    y<0>();
    y<1>();
    y<2>();
    y<3>();
}
Moreover, several C++ compilers seem not to understand it either. I created a Godbolt example where you can see three different compilers resolving the call to y differently: some pick the int version (this is what I would think complies with the SFINAE rule), others the Failer version. Which among them is correct, and what is actually going on?
[dcl.array] p1 states that:
[The constant-expression] N specifies the array bound, i.e., the number of elements in the array; N shall be greater than zero.
Zero-size arrays are thus disallowed in principle. Note that your zero-size array appears in a function parameter, and this may be relevant according to [dcl.fct] p5:
After determining the type of each parameter, any parameter of type “array of T” or of function type T is adjusted to be “pointer to T”.
However, this type adjustment rule only kicks in after determining the type of the parameters, and here one parameter has type Char<I>[0], which is invalid before any adjustment takes place. This should disqualify the first overload from being a candidate.
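To see the adjustment itself in action, here is a minimal sketch (my own, not from the question); both declarations declare the same function, because the array bound is discarded by the adjustment:

#include <iostream>

// Per [dcl.fct] p5, a parameter of type "array of int" is adjusted to
// "pointer to int", so these two declarations declare the same function.
void f(int arr[10]);
void f(int* arr) { std::cout << "f(int*)\n"; } // same f, now defined

int main() {
    int a[3];
    f(a); // fine: the bound 10 was never part of f's type
}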
In fact, your program is IFNDR (ill-formed, no diagnostic required) because no specialization of y would be well-formed (see [temp.res.general] p6).
It is not totally clear from the wording, but the first overload would be ill-formed despite the type adjustment, and both GCC and Clang agree on this (see the -pedantic-errors diagnostic triggering for char[0] parameters).
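As for [temp.res.general] p6, here is a textbook sketch of that rule (my own example, not the asker's code): no specialization of the template below can ever be well-formed, so any program containing it is IFNDR.

template <typename T>
void never_ok(T) {
    // sizeof(T) == 0 holds for no type T, so no valid specialization of
    // never_ok can ever be generated; per [temp.res.general] p6 the
    // program is IFNDR even if never_ok is never instantiated.
    static_assert(sizeof(T) == 0, "fails for every T");
}

int main() {} // not instantiating never_ok does not rescue the program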
Even if the compiler supports zero-size arrays as an extension, this isn't allowed to affect valid overload resolution according to [intro.compliance.general] p8:
A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this document. Having done so, however, they can compile and execute such programs.
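For instance, here is a sketch of the extension in question (exact diagnostics vary by compiler and version):

// GCC and Clang accept zero-size arrays as an extension by default, but
// -pedantic-errors turns this into a hard error, e.g.
// "ISO C++ forbids zero-size array 'buffer'".
char buffer[0];

int main() {}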
Your program is IFNDR because no specialization of the first overload of y is valid. Since an IFNDR program imposes no requirements on the implementation, all compilers are correct through their own extensions.
However, if we assume that the first overload of y is valid, then it should not be a viable candidate during the call y<N>(); it should be removed from the overload set, even if zero-size arrays are supported as a compiler extension.
Only Clang implements this correctly.
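To illustrate what the asker's code was presumably aiming for, here is a sketch (my own rewrite, not the original code) where the array bound actually depends on the template parameter, so the invalid type arises during substitution and SFINAE can quietly drop the overload for I == 0 only:

#include <iostream>

template <int I>
void pick(char (*)[I]) { std::cout << "pick<" << I << ">, array version\n"; }

template <int I>
void pick(...) { std::cout << "pick<" << I << ">, fallback version\n"; }

int main() {
    pick<1>(nullptr); // char[1] is valid: array version beats ellipsis
    pick<0>(nullptr); // char[0] is a substitution failure: fallback version
}

Whether a compiler that supports zero-size arrays as an extension actually reports a substitution failure here is exactly the kind of divergence the question observes.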
For the remainder of this answer, let's assume that zero-size arrays were allowed. This is just for the sake of understanding the observed compiler behavior better.
Then, hypothetically, a call y<N>(0) is unambiguous, and all compilers agree and call the int overload. This is because int requires no conversion, whereas converting 0 to a pointer type would require a pointer conversion.
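As a sketch of that ranking (hypothetical names, not from the question):

#include <iostream>

// An exact match beats a pointer conversion, mirroring the hypothetical
// call y<N>(0): 0 -> int is the identity conversion, while 0 -> char*
// requires a pointer conversion.
void g(int)   { std::cout << "g(int)\n"; }
void g(char*) { std::cout << "g(char*)\n"; }

int main() {
    g(0); // unambiguous: prints "g(int)"
}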
Overload resolution does not consider default arguments; see Are default argument conversions considered in overload resolution?.
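A minimal sketch of that point (hypothetical names): default arguments make both candidates below viable for a zero-argument call, but they contribute no conversion sequences to compare, so the call is ambiguous.

void h(int = 0) {}
void h(double = 0.0) {}

int main() {
    // h(); // error: ambiguous, even though either default would work
}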
Thus, hypothetically, both overloads of y are viable candidates for y<N>(), and neither is a better match, because neither is more specialized according to the partial ordering rules for function templates. This is GCC's behavior.
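For contrast, here is a sketch (my own example) of partial ordering succeeding as a tiebreaker:

#include <iostream>

// When both templates match equally well, the more specialized one wins.
template <typename T> void p(T)  { std::cout << "p(T)\n"; }
template <typename T> void p(T*) { std::cout << "p(T*)\n"; } // more specialized

int main() {
    int x = 0;
    p(&x); // both deduce successfully; partial ordering picks p(T*)
}

For y<N>(), neither overload is more specialized than the other, so this tiebreaker cannot break the tie.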
Note: both GCC's and Clang's behavior can be explained, and Clang's is the more correct one if we look past the IFNDR issue. I am unable to explain ICC's behavior; it makes no sense.