AFAIK, overloading a function for types that are related by conversion, or making a call that needs a cast applied to an argument to select the best match, is bad design.
#include <iostream>

void foo(int)
{
    std::cout << "foo(int)\n";
}

void foo(float)
{
    std::cout << "foo(float)\n";
}

int main()
{
    foo(5.3); // ambiguous call
    foo(0u);  // ambiguous call
}
Because 5.3 is of type double, it can be converted equally well to either float or int, so there is more than one best match and the call is ambiguous. The second call has the same problem: 0u is of type unsigned int, which converts equally well to int or float, so that call is ambiguous too.
To disambiguate the calls I can use an explicit cast:
foo(static_cast<float>(5.3)); // calls foo(float)
foo(static_cast<int>(0u)); // calls foo(int)
The code now works, but it is bad design because it defeats the point of function overloading: the compiler should be responsible for choosing the best-matching function for a call based on the arguments passed to it.
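(As an aside, one way to keep the choice in the compiler's hands is to provide an overload that matches the argument exactly, so no cast is needed. A minimal sketch; the added foo(double) overload is my illustration, not part of the original set:)

#include <iostream>

void foo(int)    { std::cout << "foo(int)\n"; }
void foo(float)  { std::cout << "foo(float)\n"; }
void foo(double) { std::cout << "foo(double)\n"; } // exact match for a double literal

int main()
{
    foo(5.3); // now unambiguous: foo(double) is an exact match
    // foo(0u) would still be ambiguous unless a foo(unsigned) overload is added too
}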
Up to here I'm OK. But what about template argument deduction?
The compiler applies only a few conversions to the arguments passed in a function template call when deducing the types of the template arguments.
So the compiler applies neither arithmetic conversions nor integral promotions; instead it generates a new instantiation that matches the call exactly:
#include <iostream>
#include <typeinfo>

template <typename T>
void foo(T)
{
    std::cout << "foo(" << typeid(T).name() << ")\n";
}

int main()
{
    foo(5.3); // calls foo<double>
    foo(0u);  // calls foo<unsigned>
}
Now it works fine: the compiler generates two versions of foo, one with double and the second with unsigned.
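One caveat: typeid(T).name() returns an implementation-defined string, so GCC and Clang typically print mangled names like d and j rather than double and unsigned. If you want readable names on those compilers, here is a minimal sketch using the GCC/Clang-specific <cxxabi.h> API (MSVC already returns readable names, so this is not portable there):

#include <cxxabi.h>
#include <cstdlib>
#include <iostream>
#include <typeinfo>

template <typename T>
void foo(T)
{
    int status = 0;
    // __cxa_demangle allocates the result with malloc, so it must be freed
    char* readable = abi::__cxa_demangle(typeid(T).name(), nullptr, nullptr, &status);
    std::cout << "foo(" << (status == 0 ? readable : typeid(T).name()) << ")\n";
    std::free(readable);
}

int main()
{
    foo(5.3); // prints foo(double)
    foo(0u);  // prints foo(unsigned int)
}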
The thing that bothers me: Is it a bad idea to pass arguments of related types to a function template that uses template argument deduction for its arguments?
Or is the problem in the language itself, because the compiler generates versions that can be related by conversion?
Or is the problem in the language itself, because the compiler generates versions that can be related by conversion?
Okay, they are related by conversion. Now which one should the compiler generate? What deterministic algorithm do you propose it employ to choose the best function to instantiate? Should it parse the entire translation unit first to figure out the best match, or should it still parse top to bottom and keep a "running best" function?
Now, assuming those questions are answered: what happens when you modify your code a bit? What happens if you include a header that instantiates an even better function? You didn't really change your code, but its behavior is still altered, possibly very drastically.
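You can already see a mild version of this fragility in today's C++: whether a declaration is visible at the call site changes which function is chosen. A minimal sketch, where the foo(int) overload stands in for something a header might bring in:

#include <iostream>

template <typename T>
void foo(T) { std::cout << "foo<T> instantiated\n"; }

// Imagine this overload comes from a header you may or may not include:
void foo(int) { std::cout << "non-template foo(int)\n"; }

int main()
{
    foo(0); // with foo(int) visible, the non-template wins (exact match beats the template)
            // without it, the compiler would instantiate foo<int> instead
}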
Considering the design headache it is for the language, and the potential chaos this behavior can bring unto unsuspecting code, it'd be a very bad idea to try and make compilers do this.
So no, it's not a language problem. The current behavior is really the sanest choice; even if it's not always what we want, it's something we can learn to expect.
The thing that bothers me: Is it a bad idea to pass arguments of related types to a function template that uses template argument deduction for its arguments?
There's no way to answer that generally for all cases. There are no silver bullets. It could be exactly what your overload set needs to do. Or it could be that you need to build a more refined set of functions (or function templates) that interact with overload resolution via SFINAE or more modern techniques. For instance, in C++20 you could do this:
#include <concepts>
#include <iostream>

template <std::integral I>
void foo(I)
{
    std::cout << "foo(integral)\n";
}

template <std::floating_point F>
void foo(F)
{
    std::cout << "foo(floating-point)\n";
}
The concepts constrain each template to work only with a specific family of types. That's one way to build the overload set you wanted in your first example, avoid the ambiguity, and work with exact types, as templates are designed to do.
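For completeness, a quick usage sketch of how calls resolve against these constrained templates:

int main()
{
    foo(5.3); // double satisfies std::floating_point, so the second template is chosen
    foo(0u);  // unsigned satisfies std::integral, so the first template is chosen
    // foo("hi"); // would not compile: const char* satisfies neither concept
}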