Let's say we compile a program with ICC to create a binary, and later, when it's executing on the machine, we want to debug it (a very common workflow, and sorry for such a trivial explanation). How does GDB handle the code optimizations performed by ICC?
When it comes to compiler optimizations, gdb generally works on the GIGO ("garbage in, garbage out") principle. That is, the compiler emits descriptions of what it did into the debug info, and gdb reads these and interprets them. Consequently gdb is at the mercy of the compiler, and in fact there are real quality differences between the debug info generated by different compilers.
Users run into a few of these. This isn't an exhaustive list but I think it covers the common ones.
Sometimes printing a variable gives `<optimized out>`. This often happens with local variables, and it means that the compiler has emitted debug info noting that the variable existed, but no debug info about how to recover the variable's value. GCC, since the "VTA" patches landed, has tried harder to emit clever debug info in these cases, but even with those patches it can't always be done.
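A minimal sketch of a test case where this tends to show up (the file name and exact behavior are just an assumption; what the compiler actually does varies by version and flags):

```c
/* sketch.c -- hypothetical example.  At -O2 the intermediate "sum"
   below is likely to be folded into the return expression, so
   "print sum" in gdb may show <optimized out> at some or all lines
   of compute(). */
#include <stdio.h>

int compute(int a, int b)
{
    int sum = a + b;       /* may never occupy a register or memory slot */
    return sum * 2;
}

int main(void)
{
    printf("%d\n", compute(3, 4));
    return 0;
}
```

Building this with `icc -O2 -g sketch.c` (or `gcc -O2 -g sketch.c`) and breaking inside compute() is enough to see how a given compiler describes the local.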
Sometimes inlining means that backtraces look strange. Here GCC emits DWARF describing inlining decisions pretty well; but there are other cases, like partial inlining, that can be confusing.
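A sketch of the inlining case, again with hypothetical names; with most compilers at -O2 the static helper below gets inlined, and whether the backtrace shows a separate "inlined" frame for it depends entirely on the DWARF inlining records the compiler chose to emit:

```c
/* inline.c -- hypothetical example.  helper() is a good candidate
   for inlining at -O2; a breakpoint on its body still triggers, and
   "bt" may show it as an inlined frame inside main(), or not show it
   at all, depending on the debug info. */
#include <stdio.h>

static int helper(int x)
{
    return x * x;          /* break here and run "bt" */
}

int main(void)
{
    printf("%d\n", helper(21));
    return 0;
}
```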
Optimization often leads to non-linear stepping in gdb. This happens as instructions attributed to one line are moved before or after instructions attributed to other lines. As far as I know nobody has made a real effort to do anything about this problem, and the answer for users is just that it is something one must get used to when debugging optimized code.
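A sketch of the kind of code where non-linear stepping is easy to observe (details are an assumption; the exact scheduling depends on the compiler and target):

```c
/* stepping.c -- hypothetical example.  At -O2 the two independent
   accumulations below may be reordered or interleaved by the
   instruction scheduler, so repeated "next" commands in gdb can
   appear to bounce back and forth between line A and line B rather
   than visiting them strictly in source order. */
#include <stdio.h>

int main(void)
{
    int a = 0, b = 0;
    for (int i = 0; i < 100; i++) {
        a += i * 3;        /* line A */
        b += i * 7;        /* line B */
    }
    printf("%d %d\n", a, b);
    return 0;
}
```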
I don't know what your compiler does in these situations. It's not too hard, if you have some knowledge of DWARF, to write little test cases and check the generated debug info.
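As a starting point, here is a sketch of such a test case (file name and the specific local are my invention); the idea is to build the same file with each compiler and compare the DWARF each one emits for the local variable:

```c
/* dwarf-check.c -- hypothetical test case for comparing debug info.
   Build it with each compiler at the optimization level you care
   about, for example:
       gcc -O2 -g -c dwarf-check.c -o gcc.o
       icc -O2 -g -c dwarf-check.c -o icc.o
   then dump the debug info, e.g.:
       readelf --debug-dump=info gcc.o
   and look at the DW_TAG_variable entry for "local": does it carry a
   DW_AT_location describing how to recover the value, or nothing? */
int global;

int poke(int n)
{
    int local = n + global;   /* does the compiler describe "local"? */
    global = local * 2;
    return local;
}
```

Comparing the DW_AT_location expressions (or their absence) across compilers and optimization levels gives a concrete picture of which one will produce fewer `<optimized out>` surprises in gdb.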