I recently discovered that my workplace has a policy of not using compiler optimizations for hard real-time embedded systems because of the risk of compiler bugs (we mainly use gcc, but the policy extends to other compilers as well). Apparently the policy started because someone was burnt in the past by an optimizer bug. My gut feeling is that this is overly paranoid, so I've started looking for data on the issue, but the problem is that I can't find any hard data at all.
Does anyone know of a way to actually get this type of data? Can the gcc bugzilla page be used to generate some statistics of bugs vs compiler optimization level? Is it even possible to get unbiased data like this?
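For what it's worth, this is roughly the kind of thing I had in mind: a quick Python sketch that counts reports whose summary mentions a given -O flag. It's untested and assumes the gcc Bugzilla instance exposes the standard Bugzilla REST API at https://gcc.gnu.org/bugzilla/rest and that the summary parameter does substring matching, so treat it as a sketch rather than a working tool:

```
# Sketch only: assumes gcc.gnu.org/bugzilla exposes the standard Bugzilla
# REST API at /rest/bug and that 'summary' does case-insensitive substring
# matching, as the Bugzilla docs describe for Bug.search.
import json
import urllib.parse
import urllib.request

BASE = "https://gcc.gnu.org/bugzilla/rest/bug"

def count_bugs(summary_term):
    params = urllib.parse.urlencode({
        "summary": summary_term,   # e.g. "-O2"
        "include_fields": "id",    # we only need a count, not the details
        "limit": 0,                # 0 asks for all matches (the server may cap it)
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)
    return len(data.get("bugs", []))

if __name__ == "__main__":
    for flag in ("-O1", "-O2", "-O3", "-Os"):
        print(flag, count_bugs(flag))
```

Even if that works, I realize it would only give a rough lower bound: reports don't always name the flag (many land in the tree-optimization or rtl-optimization components instead), and there's no denominator for how much code actually gets built at each level.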
I don't have any data (and haven't heard of anyone who does...) but...
I'd pick which compiler to use before I'd decide to disable optimizations. In other words, I wouldn't use any compiler whose optimizations I couldn't trust.
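One cheap way to build (or lose) that trust on your own code base is to build the same self-checking tests at every optimization level you care about and compare the results. Here's a rough Python sketch; test.c is a placeholder for a test that prints deterministic output, and serious differential testing (randomly generated programs, a la Csmith) goes much further:

```
# Rough sketch: build one self-checking test program at several optimization
# levels and flag any level whose behaviour differs from the -O0 build.
# "test.c" is a placeholder for a test that prints deterministic output.
import subprocess

LEVELS = ["-O0", "-O1", "-O2", "-O3", "-Os"]
SOURCE = "test.c"

results = {}
for level in LEVELS:
    exe = f"./test_{level.lstrip('-')}"
    subprocess.run(["gcc", level, SOURCE, "-o", exe], check=True)
    run = subprocess.run([exe], capture_output=True, text=True)
    results[level] = (run.returncode, run.stdout)

baseline = results["-O0"]
for level, outcome in results.items():
    print(level, "OK" if outcome == baseline else "MISMATCH")
```

A mismatch doesn't automatically mean a compiler bug, of course; more often it means the test relies on undefined behaviour that the optimizer is entitled to exploit.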
The Linux kernel is compiled with -Os. That's a lot more convincing to me than any Bugzilla analysis.
Personally, I'd be okay with any version of gcc that Linux is okay with.
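If you want to see what your own kernel was actually built with, the build config records it. A quick Python sketch, assuming your distro installs the config under /boot/config-&lt;release&gt; or the kernel exposes /proc/config.gz (not all do):

```
# Sketch: print the optimization-related options from the running kernel's
# build config. Assumes the config is available in one of the usual places.
import gzip
import os
import platform

def kernel_config_lines():
    path = f"/boot/config-{platform.release()}"
    if os.path.exists(path):
        with open(path) as f:
            return f.read().splitlines()
    # Fallback for kernels built with CONFIG_IKCONFIG_PROC.
    with gzip.open("/proc/config.gz", "rt") as f:
        return f.read().splitlines()

for line in kernel_config_lines():
    # CONFIG_CC_OPTIMIZE_FOR_SIZE=y means -Os; if it's not set, the kernel
    # Makefile falls back to -O2.
    if "CC_OPTIMIZE_FOR" in line:
        print(line)
```

Either way, as far as I know it won't even build at -O0; the code relies on dead-code elimination and forced inlining actually happening.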
As another data point, Apple has been converting from gcc to LLVM, with and without clang. LLVM has traditionally had issues with some C++, and while llvm-gcc is now a lot better, there still seem to be issues with clang++. But that just kind of proves the pattern: while Apple (purportedly) now compiles OS X and iOS with clang, they don't use much, if any, C++ or Objective-C++. So for pure C and Objective-C, I'd trust clang, but I still don't yet trust clang++.