I am debating an optimization question with a friend and need some help tracking down both the answer and, ideally, some official documentation I can read further.
I am told that when compiling a simple program with production build settings (e.g. CCOPTS+=-O4, no debug symbols, etc.), the following code:
#define COEFFICIENT_F (5.0f)
...
...
float f = 1.0f / COEFFICIENT_F;
...will automatically be optimized into something like this:
#define COEFFICIENT_F (5.0f)
...
...
#define INV_COEFFICIENT_F (0.2f)
float f = 1.0f * INV_COEFFICIENT_F;
By contrast, if I'm compiling a debug build (i.e. CCOPTS+=-O0 DEBUG=-g), I'm told no such optimization is performed at the preprocessor level.
So, my question is twofold:
1. Is this optimization really performed by the preprocessor, and does it depend on the optimization level?
2. If not, where in the compilation process does this rewrite actually happen?
The answers to your questions are:
1. No. Preprocessing is performed as defined in the standard and is not affected by any optimization level.
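You can check this directly: the preprocessor performs only textual substitution, so its output is identical regardless of the optimization flags. A minimal sketch (the file name demo.c is just for illustration):

#define COEFFICIENT_F (5.0f)

float f = 1.0f / COEFFICIENT_F;

Running gcc -E -O0 demo.c and gcc -E -O4 demo.c produces the same expansion, float f = 1.0f / (5.0f);, in both cases. The macro is substituted, but no arithmetic is performed.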
2. The optimization you are referring to is not performed at preprocessing time, but somewhere along the long road from the front end to the code generator.
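To see where it does happen, compare a fully constant case against one with a runtime operand. A sketch, assuming gcc or clang on x86-64 (the function names are hypothetical, and the exact output varies by compiler version and target):

#define COEFFICIENT_F (5.0f)

/* Both operands are compile-time constants after macro expansion,
 * so the front end folds 1.0f / 5.0f down to 0.2f outright,
 * typically even at -O0. */
float f_const(void)
{
    return 1.0f / COEFFICIENT_F;
}

/* With a runtime operand there is no constant to fold. Since 0.2f is
 * not exactly representable in binary floating point, x / 5.0f may
 * only be rewritten to x * 0.2f under -freciprocal-math (implied by
 * -ffast-math); division by a power of two, e.g. x / 4.0f, can be
 * rewritten unconditionally because its reciprocal is exact. */
float f_runtime(float x)
{
    return x / COEFFICIENT_F;
}

Compiling with gcc -S demo.c should show f_const loading the already-folded constant 0.2f (bit pattern 0x3e4ccccd) with no division instruction, while f_runtime keeps its divss until -ffast-math is added, at which point it becomes a mulss. None of this is visible in the output of gcc -E, which is the point.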