I've been reading through the dragon book and I'm wondering about single-pass compilers. Correct me if I'm mistaken, but my understanding is that as a compiler works through analysis it produces a progressively more refined intermediate representation (maybe "optimized" is a better word than "accurate"). A single-pass compiler goes through each phase only once, sometimes grouping several phases into a single pass, so does that mean its intermediate representation ends up less optimized? I'm sure something is off in how I'm thinking about this, so please feel free to correct any wrong assumptions I've made (or just let me know how dumb this question is, either way).
Also, if the intermediate representation does not suffer, why would we ever want to use a multi-pass compiler, given that it makes compilation slower?
A single-pass compiler produces the final result directly in one go, with no intermediate representation at all. And because that approach becomes either too complex (to implement, understand, and maintain) or too naive (no optimizations), we have multi-pass compilers.
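To make that concrete, here is a minimal sketch of the single-pass idea: a recursive-descent parser for a toy arithmetic language that emits stack-machine instructions the moment it recognizes each construct. The grammar, instruction names, and stack-machine target are all invented for illustration; the point is only that code comes out as soon as it is parsed, with no IR kept around for a later pass to improve.

```python
# Toy single-pass "compiler": emits stack-machine instructions while parsing.
# There is no IR and no later pass that could look back and improve the output.
def compile_expr(tokens):
    out = []          # emitted instructions, in order
    pos = 0           # current position in the token stream

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def term():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        out.append(("PUSH", int(tok)))        # code is emitted immediately

    def expr():
        nonlocal pos
        term()
        while peek() in ("+", "-"):
            op = tokens[pos]
            pos += 1
            term()
            out.append(("ADD",) if op == "+" else ("SUB",))

    expr()
    return out

# "1 + 2 - 3" compiles to PUSH 1, PUSH 2, ADD, PUSH 3, SUB -- even though the
# whole expression could have been folded to PUSH 0 at compile time.
print(compile_expr(["1", "+", "2", "-", "3"]))
```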
Wikipedia states:
One-pass compilers are unable to generate as efficient programs as multi-pass compilers due to the limited scope of available information. Many effective compiler optimizations require multiple passes over a basic block, loop (especially nested loops), subroutine, or entire module. Some require passes over an entire program. Some programming languages simply cannot be compiled in a single pass, as a result of their design.
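That "limited scope of available information" is the key point. As a hedged illustration using the same toy instruction set as the sketch above, here is the kind of optimization a single-pass compiler misses: a separate constant-folding pass over the full instruction list, which by construction needs to see code that was already emitted.

```python
# Constant folding as a second pass over the already-emitted instruction list.
# A single-pass compiler never gets this view, which is exactly the wider
# scope a multi-pass design buys you. (Toy instruction set: PUSH/ADD/SUB.)
def fold_constants(instrs):
    stack = []
    for ins in instrs:
        if ins[0] == "PUSH":
            stack.append(ins)
        elif ins[0] in ("ADD", "SUB") and len(stack) >= 2 \
                and stack[-1][0] == "PUSH" and stack[-2][0] == "PUSH":
            b = stack.pop()[1]
            a = stack.pop()[1]
            stack.append(("PUSH", a + b if ins[0] == "ADD" else a - b))
        else:
            stack.append(ins)   # anything we can't fold passes through unchanged
    return stack

# PUSH 1, PUSH 2, ADD, PUSH 3, SUB  ->  PUSH 0
print(fold_constants([("PUSH", 1), ("PUSH", 2), ("ADD",),
                      ("PUSH", 3), ("SUB",)]))
```

So the trade-off is exactly the one you suspected: a multi-pass compiler spends extra compile time to keep an IR around, precisely so that passes like this one have the whole picture to work with.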