I was recently debugging an obscure problem which turned out to be caused by a misplaced sub-Makefile that was conditionally included into the main Makefile via the `-include` directive (mind the leading minus sign). According to the GNU Make manual:
> If you want make to simply ignore a makefile which does not exist or cannot be remade, with no error message, use the `-include` directive instead of `include`, like this: `-include filenames…`
>
> This acts like `include` in every way except that there is no error (not even a warning) if any of the filenames (or any prerequisites of any of the filenames) do not exist or cannot be remade.
>
> For compatibility with some other make implementations, `sinclude` is another name for `-include`.
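To illustrate the difference, here is a minimal sketch (the file name `missing.mk` is made up, assuming it does not exist and no rule can build it):

```make
# missing.mk does not exist and nothing can remake it:
-include missing.mk   # silently ignored; make carries on
# include missing.mk  # would abort: "missing.mk: No such file or directory"

all:
	@echo done
```

With `-include`, running `make` just prints `done`; with plain `include`, it stops before running any recipe at all.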
The nastiest problem with this directive is that no diagnostics whatsoever are given when the sub-Makefile cannot be found. Needless to say, this complicates debugging a lot.
In fact, there was no real need to use it there; a regular `include` worked just fine and is much more robust. I understand the original author's intention for using `-include`: that sub-Makefile contained some "secret" stuff that was not meant to be shared with third-party engineers. But this functionality was never used in the end, and it could have been implemented in a more transparent way.
I wonder if there are other practical cases where `-include` is genuinely useful. Maybe cases where one or several makefiles are dynamically generated during the build process?
Surely, the most useful application of `-include` is when the included file is auto-generated by make itself. Remember that all included files automatically become make targets as well. So `-include generated_file` does not make make fail prematurely; instead it implies that `generated_file` will be (re-)built by other rules in the current Makefile, after which make re-executes itself and reads it. This can be exploited for automatic dependency generation, for example.
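A minimal sketch of that automatic-dependency pattern (file names here are illustrative), using GCC's `-MMD -MP` flags to emit `.d` makefile fragments as a side effect of compilation:

```make
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

program: $(OBJS)
	$(CC) -o $@ $^

# -MMD writes a .d fragment listing each object's header dependencies;
# -MP adds phony targets for the headers so that deleting a header
# does not break the build with "No rule to make target".
%.o: %.c
	$(CC) -MMD -MP -c -o $@ $<

# On the very first run the .d files do not exist yet, and -include
# skips them silently. On subsequent runs make reads them and rebuilds
# objects whose headers have changed.
-include $(OBJS:.o=.d)
```

Plain `include` would fail on a clean checkout here, precisely because the `.d` files have not been generated yet; this is the case where the silence of `-include` is a feature rather than a debugging hazard.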
BTW, another trick with `include` is that `include $(empty_var)` also works without errors (i.e., it is a no-op) when the variable expands to nothing.
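A sketch of that trick (the variable name is made up): when the variable is empty, the directive expands to a bare `include` with no file names, and make silently includes nothing:

```make
# Possibly overridden on the command line: make EXTRA_MAKEFILES=local.mk
EXTRA_MAKEFILES :=

# Expands to 'include' with no arguments when the variable is empty,
# which GNU Make treats as a no-op rather than an error.
include $(EXTRA_MAKEFILES)

all:
	@echo ok
```

This gives an opt-in hook for extra makefiles without the blanket silence of `-include`: if a file name *is* supplied and the file is missing, make still reports an error.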