Wondering, at a high level, when type checking typically occurs (textbook vs. in practice) in the compilation process. Roughly, my understanding of the compilation process is:
Wondering whether the type checking occurs between (1) and (2), between (2) and (3), or after (4), or whether it is sprinkled throughout the whole process, or something else. I'd be interested in the answer for object-oriented, functional, and logic programming (in that order of priority), but if I had to pick one, then either a dynamically typed OO language like Ruby or a statically typed functional language like Haskell.
Static type checking is usually performed on the AST, so it either happens between 1 and 2 or as part of 2 (meaning that the IR generator invokes functions from the type checker whenever it processes an AST node; of course, the IR generator and the type checker should still live in different modules/files).
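To make that concrete, here is a minimal sketch (in Haskell, since the question mentions it) of what "type checking on the AST" looks like. The `Expr` type, the `typeCheck` function, and the error messages are all hypothetical; this is not GHC's implementation, which uses a much more elaborate inference algorithm:

```haskell
module TypeCheck where

import qualified Data.Map as Map

data Type = TInt | TBool
  deriving (Eq, Show)

-- A tiny expression AST, as produced by the parser in step 1.
data Expr
  = IntLit Int
  | BoolLit Bool
  | Var String
  | Add Expr Expr            -- arithmetic on Ints
  | If Expr Expr Expr        -- condition must be Bool, branches must agree
  deriving Show

type Env = Map.Map String Type

-- Walk the AST and return either the expression's type or a type error.
-- This can run as its own pass before IR generation, or be called by the
-- IR generator as it visits each AST node.
typeCheck :: Env -> Expr -> Either String Type
typeCheck _   (IntLit _)  = Right TInt
typeCheck _   (BoolLit _) = Right TBool
typeCheck env (Var x) =
  maybe (Left ("unbound variable: " ++ x)) Right (Map.lookup x env)
typeCheck env (Add a b) = do
  ta <- typeCheck env a
  tb <- typeCheck env b
  if ta == TInt && tb == TInt
    then Right TInt
    else Left "type error: Add expects two Ints"
typeCheck env (If c t e) = do
  tc <- typeCheck env c
  tt <- typeCheck env t
  te <- typeCheck env e
  if tc /= TBool
    then Left "type error: If condition must be Bool"
    else if tt /= te
      then Left "type error: If branches must have the same type"
      else Right tt
```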
In theory, you could perform type checking on the IR, but that will usually lead to at least one of the following problems:
Usually working on the IR instead of the AST means that you don't have to handle as many cases (exactly because the IR represents different things using the same instructions). That's the main benefit. But if you then jump through extra hoops just to be able to treat the cases differently again, you might just as well use the AST in the first place.
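To illustrate the "same instructions for different things" point, here is a hypothetical comparison of an AST and an IR for the same language; the constructor names are made up purely for illustration:

```haskell
module IrSketch where

-- In the AST, `if`, short-circuit `&&`, and `case` are distinct constructors,
-- each with its own typing rule.
data Ast
  = AstIf Ast Ast Ast   -- rule: condition is Bool, both branches have the same type
  | AstAnd Ast Ast      -- rule: both operands are Bool
  | AstCase Ast [Ast]   -- rule: scrutinee's type matches the patterns
  | AstLit Int

-- In the IR, all three lower to the same conditional-branch instruction, so a
-- type checker running on the IR would first have to reconstruct which source
-- construct (and hence which typing rule) each branch came from.
data Ir
  = IrBranch Operand Label Label  -- if / && / case all become this
  | IrJump Label
  | IrConst Operand Int           -- load a constant into a temporary

type Label   = String
type Operand = Int
```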
So type checking on the AST¹ is usually preferred. GHC (the main Haskell compiler) performs type checking on the AST.
¹ Or at least something very close to the AST: there might, for example, be a representation between the AST and the final IR that simplifies things in some ways (such as flattening nested expressions) without losing information relevant to type checking.
Dynamic type checking happens at run time. The code that performs these dynamic type checks is either part of the interpreter (if there is an interpreter) or inserted by the code generator.
Ruby performs type checking in the interpreter.
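As an illustration (again in Haskell rather than Ruby, and purely as a sketch, not how Ruby's interpreter is actually written): a dynamically typed interpreter carries type tags on its values and checks them each time an operation runs, so the same kind of mistake the static checker above rejects at compile time only surfaces when the offending expression is evaluated:

```haskell
module DynInterp where

-- Run-time values carry a tag; the type checks live inside the interpreter.
data Value = VInt Int | VBool Bool
  deriving Show

data Expr
  = Lit Value
  | Add Expr Expr
  | If Expr Expr Expr

eval :: Expr -> Either String Value
eval (Lit v)   = Right v
eval (Add a b) = do
  va <- eval a
  vb <- eval b
  case (va, vb) of
    (VInt x, VInt y) -> Right (VInt (x + y))
    _                -> Left "TypeError: Add expects two Ints"    -- dynamic check
eval (If c t e) = do
  vc <- eval c
  case vc of
    VBool True  -> eval t
    VBool False -> eval e
    _           -> Left "TypeError: If condition must be a Bool"  -- dynamic check

-- `eval (Add (Lit (VInt 1)) (Lit (VBool True)))` only fails when it is run,
-- whereas a static checker would have rejected the program before execution.
```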