I'm working on a compiler for a small language. Inside the compiler, I'm using the LLVM C++ API to generate LLVM IR, similar to the LLVM Kaleidoscope tutorial. So I'm using TheModule, TheContext, BasicBlocks, and calls to Builder.Create...().
I can currently generate valid LLVM IR for arithmetic, control flow, and methods. However, I would also like my small language to support very simple OpenMP pragmas. For example:

```
#pragma omp parallel
{
    print "Hello World"
}
```
I've tried writing a similar program in C++:

```cpp
#include <iostream>

int main() {
#pragma omp parallel
  {
    std::cout << "Hi";
  }
}
```
and generating LLVM IR with `clang++ -S -emit-llvm file.cpp -fopenmp`. Along with the rest of the code, this generates the following lines, which seem to implement the OpenMP functionality:
```llvm
declare void @__kmpc_fork_call(%ident_t*, i32, void (i32*, i32*, ...)*, ...)

define internal void @.omp_outlined.(...)
```
From researching these statements, I found the Clang OpenMP API, which contains calls like

```cpp
OMPParallelDirective *OMPParallelDirective::Create(...)
```

I'm guessing this is what the Clang compiler uses to generate the statements above. However, it seems to be separate from the LLVM C++ API, as it doesn't reference TheContext, TheModule, etc.
So my question: is there any way to leverage the Clang OpenMP API calls alongside my LLVM C++ API calls to generate the `__kmpc_fork_call` and `@.omp_outlined.` IR needed for parallel computation?
I did try decompiling the LLVM IR generated from the C++ code back into LLVM C++ API code using `llc -march=cpp file.bc ...`, but was unsuccessful.
The APIs you found operate on the clang AST and are hardly usable outside clang. In fact, there are no OpenMP constructs at the LLVM IR level: by the time IR is emitted, everything has already been lowered to runtime calls, outlined functions, etc.

So you'd really need to implement code generation for OpenMP yourself, emitting the runtime calls as necessary (and per your language's semantics).
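To make that concrete, here is a minimal sketch of the IR your codegen would need to emit for `#pragma omp parallel`, modeled on what clang produces for libomp. The `%ident_t` layout and the zero-filled location record are assumptions (clang actually populates `ident_t` with a flags word and a pointer to a source-location string, and the exact types vary by version); only `__kmpc_fork_call` and the outlined-function convention come from the output you quoted:

```llvm
; Sketch of the lowering for "#pragma omp parallel { body }".
; ident_t is the runtime's source-location record; a zero-filled constant
; is used here for simplicity (assumption: clang fills in flags and a
; source string).
%ident_t = type { i32, i32, i32, i32, i8* }
@.loc = private constant %ident_t zeroinitializer

; The region body, outlined into its own function. The runtime passes
; the global and bound thread ids as the first two arguments.
define internal void @.omp_outlined.(i32* %global_tid, i32* %bound_tid) {
entry:
  ; ... emit the body of the parallel region here ...
  ret void
}

declare void @__kmpc_fork_call(%ident_t*, i32, void (i32*, i32*, ...)*, ...)

define i32 @main() {
entry:
  ; i32 0 = number of captured variables forwarded to the outlined function
  call void (%ident_t*, i32, void (i32*, i32*, ...)*, ...) @__kmpc_fork_call(
      %ident_t* @.loc, i32 0,
      void (i32*, i32*, ...)* bitcast (void (i32*, i32*)* @.omp_outlined.
                                       to void (i32*, i32*, ...)*))
  ret i32 0
}
```

Each piece maps onto LLVM C++ API calls you already use: `Function::Create` for the outlined function, `Module::getOrInsertFunction` for the `__kmpc_fork_call` declaration, and `Builder.CreateCall` for the fork itself. Captured variables become extra pointer parameters on the outlined function and extra varargs on the fork call, with their count in the `i32` argument. Link the result against the OpenMP runtime, e.g. by driving the final link through `clang -fopenmp`.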