Tags: compiler-construction, cpu, compiler-optimization, cpu-architecture, vliw

What's the advantage of compiler instruction scheduling compared to dynamic scheduling?


Nowadays, superscalar RISC CPUs usually support out-of-order execution, with branch prediction and speculative execution; they schedule work dynamically.

What's the advantage of compiler instruction scheduling, compared to an out-of-order CPU's dynamic scheduling? Does compile-time static scheduling matter at all for an out-of-order CPU, or only for simple in-order CPUs?
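For concreteness, here is a toy C fragment (all names made up) showing the kind of reordering I mean by compile-time scheduling; both functions compute the same value, but the second is roughly what a static scheduler might emit for a simple in-order CPU, filling the load's latency with independent work:

    int unscheduled(const int *p, int a, int b) {
        int x = *p;     /* load */
        int y = x + 1;  /* uses the load right away: an in-order CPU stalls here */
        int z = a * b;  /* independent work, issued too late to hide the stall */
        return y + z;
    }

    int scheduled(const int *p, int a, int b) {
        int x = *p;     /* load issued early */
        int z = a * b;  /* independent multiply overlaps with the load latency */
        int y = x + 1;  /* by now the load result is (hopefully) available */
        return y + z;
    }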

It seems that most current work on software instruction scheduling focuses on VLIW or simple in-order CPUs. The GCC wiki's scheduling page also shows little interest in updating GCC's scheduling algorithms.


Solution

  • Advantages of static (compiler) scheduling:

    • No time bound, so it can use very complicated algorithms;
    • No bound on the instruction window. This allows, for example, moving an instruction across a whole loop or a function call (see the sketch at the end of this answer).

  • Advantages of dynamic (processor) scheduling:

    • Takes the actual runtime environment into account (cache contents, an arithmetic unit kept busy by another hyperthread);
    • Does not require recompiling the code for each architecture upgrade.

    That's all I can think of for now.
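A minimal sketch of the instruction-window point above (names invented for illustration): with restrict pointers the compiler knows p and a do not alias, so it may hoist the load of *p across the entire loop, something an out-of-order core cannot do once the loop body exceeds its few-hundred-instruction window.

    /* As written: the load is issued right before its use, so a cache miss
     * stalls the final addition, no matter how long the loop ran. */
    long as_written(const long *restrict p, long *restrict a, int n) {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * 3 + 1;
        long x = *p;            /* load issued here ... */
        return x + a[0];        /* ... and used immediately */
    }

    /* What a static scheduler may effectively produce: the load is started
     * before the loop, so its latency overlaps with all of the loop's work. */
    long after_scheduling(const long *restrict p, long *restrict a, int n) {
        long x = *p;            /* load hoisted across the entire loop */
        for (int i = 0; i < n; i++)
            a[i] = a[i] * 3 + 1;
        return x + a[0];
    }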