In my CI setup I'm compiling my C code for a number of different architectures (x86_64-linux-gnu, i386-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, x86_64-apple-darwin, i386-apple-darwin, i686-w64-mingw32, x86_64-w64-mingw32, ...).
I can add new architectures (at least for the *-linux-gnu targets) simply by "enabling them".
The same CI configuration is used for a number of projects (with different developers) and strives to be practically "zero config" (as in: "drop this CI configuration into your project and forget about it; the rest will be taken care of for you").
Some of the targets are compiled natively, others are cross-compiled. Some cross-compiled architectures are runnable on the build machines (e.g. I could run the i386-apple-darwin binaries on the x86_64-apple-darwin builder), others are incompatible (e.g. I cannot run aarch64-linux-gnu binaries on the x86_64-linux-gnu builder).
Everything works great so far.
However, I would also like to run unit tests during CI, but only if the tests can actually be executed on the build machine. I'm not interested at all in getting a lot of failed tests simply because I'm cross-building binaries.
To complicate things a bit, what I'm building are not self-contained executables but plugins that are dlopen()ed (or whatever the equivalent is on the target platform) by a host application. The host application is typically slow to start up, so I'd like to avoid running it if it cannot use the plugins anyway. Building plugins also means that I cannot simply try-run them.
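One way to sidestep the slow host application entirely would be a tiny stand-alone loader that does nothing but dlopen() the plugin and report success or failure. Below is a minimal sketch, assuming a POSIX host with libdl; the file name tryload.c and the plugin path are made up, on Windows you would use LoadLibrary() instead, and if your plugins reference symbols that only the host application exports, even RTLD_LAZY may not be enough to load them:

cat > tryload.c <<'EOF'
/* Probe: can this host load the given plugin? */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <plugin>\n", argv[0]);
        return 2;
    }
    /* RTLD_LAZY defers function-symbol resolution, in case the
       plugin references symbols only the host application provides. */
    void *handle = dlopen(argv[1], RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;   /* wrong architecture, missing deps, ... */
    }
    dlclose(handle);
    return 0;
}
EOF
${CC:-cc} -o tryload tryload.c -ldl  # -ldl needed on older glibc; elsewhere dlopen() may already live in libc
if ./tryload ./myplugin.so; then
  echo "plugin loadable"
else
  echo "skip plugin tests"
fi

Since tryload is compiled with the same ${CC} as the plugins, merely being able to execute it already answers the cross-compilation question; the dlopen() is a bonus check.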
I'm using the GNU toolchain (make, gcc), or at least something compatible (like clang).
In my first attempt to check whether I am cross-compiling, I compare the target architecture of the build process (as returned by ${CC} -dumpmachine) with the architecture of GNU make (GNU make outputs the architecture triplet it was built for when invoked with the -v flag).
Something like the following works surprisingly well, at least for the *-linux-gnu targets:
# compare make's build triplet against the compiler's target triplet
if make --version | egrep "${TARGETARCH:-$(${CC:-cc} -dumpmachine)}" >/dev/null; then
  echo "native compilation"
else
  echo "cross compiling"
fi
However, it doesn't work at all on Windows/MinGW (for a native build, gcc targets x86_64-w64-mingw32 but make was built for x86_64-pc-msys; and it's even worse when building 32-bit binaries, which are of course fully runnable) or on macOS (gcc says x86_64-apple-darwin18.0.0, make says i386-apple-darwin11.3.0; don't ask me why).
It's becoming even more of an issue: while writing this and doing some checks, I noticed that even on Linux I get differences like x86_64-pc-linux-gnu vs x86_64-linux-gnu. These differences haven't shown up on my CI builders yet, but I'm sure that's only a matter of time.
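A partial mitigation for the triplet spelling differences (it still wouldn't handle "different but runnable" cases like i386 binaries on an x86_64 host) might be to compare only the CPU field of the two triplets. A sketch; note that the exact wording of make's version banner has varied between releases ("Built for ..." vs. "This program built for ..."), so the sed pattern below is an assumption that would need testing:

cc_arch=$(${CC:-cc} -dumpmachine | cut -d- -f1)
make_arch=$(make --version | sed -n 's/.*[Bb]uilt for \([^ ]*\).*/\1/p' | cut -d- -f1)
if [ "$cc_arch" = "$make_arch" ]; then
  echo "same CPU architecture"
else
  echo "cross compiling"
fi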
So, I'm looking for a more robust way to detect whether my build host will be able to run the produced binaries, and to skip the unit tests if it cannot.
Perhaps include an extra unit test that is directly runnable, just a "hello world" or a bare return EXIT_SUCCESS;, and if it fails, skip all the other plugin tests for that architecture?
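That canary idea is essentially how autoconf decides whether it is cross-compiling: build a trivial program with the real ${CC} and ${CFLAGS}, try to execute it, and let the exit status decide. A minimal sketch (file names are made up; under MSYS or Cygwin, ./canary resolves to canary.exe automatically):

cat > canary.c <<'EOF'
/* If this program runs, the build host can execute target binaries. */
int main(void) { return 0; }
EOF
if ${CC:-cc} ${CFLAGS} -o canary canary.c && ./canary; then
  echo "build host can run target binaries: running unit tests"
else
  echo "cross compiling (or host cannot execute target): skipping unit tests"
fi

A single non-runnable canary then gates all the plugin tests for that architecture, instead of producing a wall of spurious failures.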
Fun fact: on Linux at least, a shared library (ELF shared object) can have an entry point and be executable. (That's how PIE executables are made; what used to be a silly compiler/linker trick is now the default.) I don't know if it would be useful to bake a main into one of your existing plugin tests.
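For completeness, the executable-shared-object trick looks roughly like this on x86-64 Linux with glibc: embed the dynamic loader's path in an .interp section and hand the linker a custom entry point. The entry function has to call exit() itself, because there is no caller to return to. Everything here (the names, the self-test, the interpreter path) is illustrative and platform-specific:

cat > selftest.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>

/* Path of the dynamic loader, so the .so can be executed directly.
   This one is for x86-64 glibc; other platforms differ. */
const char my_interp[] __attribute__((section(".interp"))) =
    "/lib64/ld-linux-x86-64.so.2";

/* The normal plugin API, found via dlopen()/dlsym() by the host. */
int plugin_init(void)
{
    return 0;
}

/* Entry point used when the .so is run as a program. */
void selftest(void)
{
    int ok = (plugin_init() == 0);
    puts(ok ? "self-test ok" : "self-test failed");
    exit(ok ? EXIT_SUCCESS : EXIT_FAILURE);  /* must not return */
}
EOF
${CC:-cc} -shared -fPIC -Wl,-e,selftest -o selftest.so selftest.c
./selftest.so   # runs the self-test; the file is still a normal plugin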