Tags: c++, constexpr, consteval

constexpr for amortizing heavy computation


It seems that constexpr evaluation is essentially an extremely slow dynamic language: everything is allocated on the heap (even scalar types) and garbage collected.

With msvc and gcc, this program takes up all my memory (clang doesn't) and takes several minutes to compile (msvc: 4m48s, clang: 3m34s, gcc: 3m39s; runtime: 0.001s).

#include <cstdint>

constexpr auto compute()
{
    auto a = int64_t{};
    for (int64_t i = 0; i < 100'000'000; i++)
        a += i;
    return a;
}

int main()
{
    static constexpr auto a = compute();
}

How do people put up with these compile times (even if they can be firewalled)?

Is there any way to speed up compile times, perhaps using practices from dynamic languages?

Are there any performance improvements on the horizon?

I was hoping constexpr evaluation would be a good way to amortize heavy computation, but it's not practical at the moment. I think I'm just going to write/read a binary file instead. Perhaps if the compilation time of the translation unit in question were less than that of a clean build, multiple cores would make the compile times workable?
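
For reference, here is a minimal sketch of the binary-file approach (splitting the work into a one-off generator step and a runtime read is an assumption on my part; the file name and layout are illustrative only):

#include <cstdint>
#include <fstream>

// One-off generator step: do the heavy work once in an ordinary program
// and dump the result to disk (e.g. as a pre-build step).
void write_result(const char* path)
{
    auto a = int64_t{};
    for (int64_t i = 0; i < 100'000'000; i++)
        a += i;
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(&a), sizeof a);
}

// Runtime step: the real program just loads the precomputed value.
int64_t read_result(const char* path)
{
    auto a = int64_t{};
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char*>(&a), sizeof a);
    return a;
}

int main()
{
    write_result("computed.bin");               // normally done by a separate tool
    const auto a = read_result("computed.bin"); // sum of 0..99'999'999
    return a == 4'999'999'950'000'000 ? 0 : 1;  // sanity check
}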

Compile commands:

g++ -std=c++23 main.cpp -fconstexpr-loop-limit=1000000001 -fconstexpr-ops-limit=68719476736

clang++ -std=c++2b -fconstexpr-steps=1100000000 main.cpp


Solution

  • Jason Turner's new mental model was way off:

    Yes, but that's not the original use case of constexpr. Its original purpose was to allow easier calculation of constants needed as template arguments or in other compile-time-only contexts. (The alternative was template metaprogramming, which is even slower and more difficult to use.) @user17732522

    Jason did mention that clang might JIT constexpr evaluation, but this obviously didn't happen and isn't even in the works. We haven't even got constexpr trig functions yet. So constexpr, as it stands, remains just an ergonomic replacement for template metaprogramming (a small illustration follows below).
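
To make that concrete, here is a small made-up example of the same compile-time constant computed once with a constexpr function and once with template metaprogramming (the Fib name and the use of std::array are purely illustrative):

#include <array>
#include <cstddef>

// constexpr version: ordinary-looking code whose result can be used
// directly as a template argument.
constexpr std::size_t fib(std::size_t n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Template-metaprogramming version of the same constant.
template <std::size_t N>
struct Fib
{
    static constexpr std::size_t value = Fib<N - 1>::value + Fib<N - 2>::value;
};
template <> struct Fib<0> { static constexpr std::size_t value = 0; };
template <> struct Fib<1> { static constexpr std::size_t value = 1; };

std::array<int, fib(10)>        a{}; // 55 elements
std::array<int, Fib<10>::value> b{}; // same size, more ceremony

int main()
{
    return a.size() == b.size() ? 0 : 1;
}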