The toy problem of recursive (non-caching) fibonacci can be implemented as follows:
#include <iostream>

int fibonacci(int n) {
    if (n <= 1)
        return n;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    int N = 5;
    int result = fibonacci(N);
    std::cout << "Fibonacci(" << N << ") = " << result << std::endl;
    return 0;
}
The output of this program is Fibonacci(5) = 5. With metaprogramming, this can be evaluated during compilation:
#include <iostream>

consteval int fibonacci(int n) {
    if (n <= 1)
        return n;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    constexpr int N = 10;
    constexpr int result = fibonacci(N);
    std::cout << "Fibonacci(" << N << ") = " << result << std::endl;
    return 0;
}
But why is it necessary to make the program more verbose? Couldn't the compiler analyze the first program, and figure out that the output is always 5?
Compilers often inline and constant-propagate through functions at compile time, even without consteval to force it. But they're not required to, so it doesn't happen in unoptimized debug builds. And surprisingly, in this case it doesn't happen for GCC or clang even for N=3 or higher: https://godbolt.org/z/6j7T1c5WY
I guess the default heuristics for inlining recursive functions, even at -O3, are reluctant to go far enough, although GCC -O3 does balloon fibonacci to a pretty large code size. GCC and clang convert one of the recursive calls into a loop, but GCC goes farther. I'm not sure what exactly it's doing with all that code, or why it doesn't evaluate fibonacci(3) to a compile-time constant without consteval.
Since constexpr exists as a way to let programmers write programs that do stuff like int arr[foo(N)], it's somewhat natural to extend that to a way to get guaranteed constant evaluation even in contexts where it's not required (e.g. something other than an array dimension or template parameter).
Where previously a programmer would have had to use template metaprogramming to guarantee that there was no runtime overhead for something they wanted to compute, consteval lets them use normal code, taking advantage of the same compiler features that constexpr depends on, including in debug builds.
constexpr itself exists because the C++ committee wants programs to be valid or invalid according to the standard, not depending on how well a given compiler optimizes. If you want to use int arr[foo(N)], you need a guarantee that the return value is a constant expression. It would be a problem if some compilers could resolve foo(N) to a compile-time constant while others couldn't, or even the same compiler in a non-optimizing build; code that can only be compiled with optimization enabled is not good.
So why use consteval? Do you want a guaranteed constant expression, or are you happy just checking that some compiler you care about optimizes well in your use case when optimization is enabled? The latter is often sufficient.
It's a way to tell compilers that constant-propagating through some code will definitely produce a compile-time constant if it continues long enough. Usually compilers don't know that, so they bail out after some heuristic limits.
(There are lots of use-cases for constexpr other than performance, but I'm less sure about consteval.)