During preprocessing we usually expand all the macros, but I am a bit confused: does the preprocessor also generate the tokens that the lexical analyzer consumes when it scans the input file?
I have gone through this link, and it says that preprocessing tokens fall into five broad classes: identifiers, preprocessing numbers, string literals, punctuators, and other. So are tokens generated only during the preprocessing stage, or not?
The result of preprocessing is a stream of tokens.
Most tokens are produced prior to the macro-expansion phase, but during macro expansion it is possible to create new tokens using the stringify (`#`) and token-concatenate (`##`) operators.
After preprocessing, each preprocessing token must be converted into an ordinary token, and some preprocessing tokens cannot be converted into any valid token. For example, 2.3G is a single preprocessing-number token, but it does not correspond to any valid numeric constant, so the conversion fails with an error message.