This question is based on a previous question: How does C# compilation get around needing header files?
Confirmation that C# compilation makes use of multiple passes essentially answers my original question. Also, the answers indicated that C# uses type and method-signature metadata stored in assemblies to check code at compile time.
Q: How does C/C++/Objective-C know what code to load at run time that was linked at compile time? And to tie it into a technology I'm familiar with, how does C#/the CLR do this?
Correct me if I'm wrong, but for C#/CLR, my intuitive understanding is that certain paths are checked for assemblies upon execution, and basically all code is loaded and linked dynamically at run time.
Edit: Updated to include C++ and Objective-C with C.
Update: To clarify, what I'm really curious about is how C/C++/Objective-C compilation matches an "externally defined" symbol in my source with the actual implementation of that code, what the compilation output looks like, and how that output is executed by the microprocessor so that control passes seamlessly into the library code (in terms of the instruction pointer). I have worked through this for the CLR's virtual machine, but am curious how it works conceptually in C/C++/Objective-C on an actual microprocessor.
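For concreteness, here is a minimal C sketch of what I mean (the file and function names are made up for illustration): main.c calls add(), which is only declared there; the definition lives in a separately compiled add.c.

    /* main.c -- the call site; add() is "externally defined" here */
    extern int add(int a, int b);   /* declaration only, no body */

    int main(void)
    {
        return add(1, 2);           /* compiler emits a call to the
                                       unresolved symbol add */
    }

    /* add.c -- the implementation, compiled separately */
    int add(int a, int b)
    {
        return a + b;
    }

    /* build (MSVC): cl /c main.c  -> main.obj, records a fixup for add
                     cl /c add.c   -> add.obj,  defines add
                     link main.obj add.obj      -> patches the fixup */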
The linker plays an essential role in building C/C++ programs: it resolves external dependencies. .NET languages don't use a linker.
There are two kinds of external dependencies: those whose implementation is available at link time, in another .obj or .lib file offered as input to the linker, and those that live in another executable module, a DLL on Windows.
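A small sketch of both kinds from the caller's point of view (add() and util.obj are hypothetical; MessageBeep() is a real user32.dll export):

    /* main.c */
    #include <windows.h>

    extern int add(int a, int b);  /* first kind: resolved at link time
                                      from util.obj (or a .lib) */

    int main(void)
    {
        MessageBeep(MB_OK);        /* second kind: lives in user32.dll,
                                      linked through the user32.lib
                                      import library */
        return add(1, 2);
    }

    /* build: cl main.c util.obj user32.lib */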
The linker resolves the first kind at link time; nothing complicated happens, since the linker knows the address of the dependency. The second kind is highly platform-dependent. On Windows, the linker must be provided with an import library, a fairly simple file that merely declares the name of the DLL and lists its exported definitions. The linker resolves the dependency by entering a jump in the code and adding a record to the external-dependency table that indicates the jump location, so that it can be patched at run time. Loading the DLL and setting up the import table is done at run time by the Windows loader. This is a bird's-eye view of the process; there are many boring details that make it happen as quickly as possible.
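A conceptual sketch of that mechanism in C (hypothetical names; the real structures are the PE import directory and the Import Address Table):

    /* One slot per imported function. The linker emits the slot holding
       a placeholder; the Windows loader overwrites it with the real
       address of the function inside the mapped DLL. */
    typedef int (*iat_slot_t)(int);

    static iat_slot_t iat_SomeDllFunc;    /* patched at load time */

    /* The jump the linker enters in the code amounts to this: an
       indirect call through the slot. The caller never notices the
       extra indirection. */
    int SomeDllFunc_thunk(int arg)
    {
        return iat_SomeDllFunc(arg);
    }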
In managed code, all of this is done at run time, driven by the JIT compiler, which translates IL into machine code as the program executes. Whenever executing code references another type, the JIT compiler springs into action, loads the type, and translates the called method. A side effect of loading the type is loading the assembly that contains it, if it wasn't loaded before.
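The same patch-on-first-call idea can be sketched in C with a function pointer that starts out aimed at a resolver stub (hypothetical names; the CLR's actual thunks are generated machine code, not C):

    #include <stdio.h>

    static int jit_stub(int x);                  /* resolver, defined below */
    static int (*method_slot)(int) = jit_stub;   /* every call goes
                                                    through the slot */

    static int real_method(int x)                /* stands in for the JIT's
                                                    machine-code output */
    {
        return x * 2;
    }

    static int jit_stub(int x)
    {
        /* Runs on the first call only. In the CLR this is where the
           method's IL is translated to machine code, after loading the
           containing assembly if needed; then the slot is patched. */
        puts("first call: compiling method");
        method_slot = real_method;
        return real_method(x);
    }

    int main(void)
    {
        printf("%d\n", method_slot(3));   /* triggers the stub */
        printf("%d\n", method_slot(4));   /* now goes straight to the
                                             "compiled" code */
        return 0;
    }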
Notable too is the difference for external dependencies that are available at build time. A C/C++ compiler compiles one source file at a time; the linker resolves the dependencies. A managed compiler normally takes all of the source files that make up an assembly as input instead of compiling them one at a time. Separate compilation and linking is in fact supported (.netmodule and al.exe) but is poorly supported by the available tools and thus rarely used; it also cannot support features like extension methods and partial classes. Accordingly, a managed compiler needs many more system resources to get the job done, resources that are readily available on modern hardware. The build process for C/C++ was established in an era when those resources were not available.
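A sketch of the difference in build invocations (hypothetical file names):

    :: C/C++: one translation unit at a time, then the linker
    :: resolves the cross-file references
    cl /c a.cpp
    cl /c b.cpp
    link a.obj b.obj /OUT:app.exe

    :: C#: the compiler sees every source file of the assembly at once
    csc /out:app.exe A.cs B.cs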