Tags: debugging, microcontroller, compiler-optimization

What goes wrong if compiler optimization is turned on in a debug build?


Why is it necessary (or recommended) to turn off all compiler optimizations when debugging an application?

Background

I'm working on an 8-bit microcontroller (OKI 411) that has 15K of usable code memory, shared between interrupt service routines, the ROM region/window (const global variables), and code. We already use roughly 13K of it, so it is very tempting to turn on the maximum possible optimization even during debugging.


Solution

  • When compiling a debug binary, the compiler tries to maintain a 1:1 correspondence between source statements (or pieces of statements) and assembly-language instructions. This way, when you are debugging, you can step through instruction by instruction, and it is easy for the debugger to correlate its current position in the binary with the correct source code. Usually the compiler also ensures that all named variables actually exist somewhere in memory so that you can view their contents in the debugger (a minimal sketch of this is included below).

    Compiler optimizations may elide unused or unnecessary local variables and may restructure your code to make it more efficient. Functions may be inlined, and expressions may be partially or wholly precomputed or rearranged (see the second sketch below). Most of these and similar optimizations make it difficult to correlate the original source code with the generated assembly.
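
As a minimal sketch of the first point: the function scale_reading and its locals below are invented for illustration, and -O0/-O2 are GCC-style option names; the OKI toolchain's switches may be spelled differently.

    #include <stdint.h>

    /* Hypothetical scaling routine, purely for illustration. */
    uint16_t scale_reading(uint16_t raw)
    {
        uint16_t offset = 37;           /* with optimization off, each named local   */
        uint16_t scaled = raw + offset; /* gets a real stack/RAM slot, so a debugger */
        return scaled * 2;              /* can watch 'offset' and 'scaled' and step  */
    }                                   /* roughly one source line at a time         */

In an unoptimized build the debugger can stop on each of these lines and print both locals; in an optimized build they may live only in registers or disappear entirely, often showing up as "optimized out".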
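
For the second point, a sketch under the same assumptions (invented names, GCC-style option names): inlining and constant folding can remove whole statements, so breakpoints and watchpoints stop matching the source.

    #include <stdint.h>

    /* Hypothetical helper; an optimizer will likely inline it and fold
       3 * 16 down to the constant 48 at compile time.                  */
    static uint8_t threshold(void)
    {
        return 3 * 16;
    }

    uint8_t over_limit(uint8_t value)
    {
        uint8_t limit = threshold();  /* 'limit' may never exist in memory, and a    */
        return value > limit;         /* breakpoint inside threshold() may never be  */
    }                                 /* hit once the call collapses to 'value > 48' */

This is why stepping through an optimized build can appear to jump around or skip lines: the generated instructions no longer map cleanly onto individual source statements.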