EDIT: I asked around a bit, and apparently my mistake was this: we usually edit our 8086 Assembly code in the DEBUG.EXE
environment on MS-DOS. That particular environment does indeed default to hexadecimal numbers, but other 8086 assemblers default to decimal.
When writing Assembly language (e.g. for Intel's 8086), we can represent numbers either as 3F or 3FH, or as 16 or 16H, because all numbers default to hexadecimal notation.
In my experience there is no real difference between the two representations as far as the assembler is concerned: it happily accepts both, even when they are mixed.
My question is: are there any strict rules on when to append -h/-H after a number, and when not to?
I can see that the suffix helps prevent the confusion (for beginning Assembly programmers) that arises from seeing numbers we usually think of as decimal, as in my 16 vs. 16H example, where the plain 16 is actually hexadecimal for decimal 22; I have been bitten by this error myself several times. But is clarity really the only criterion?
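The 16 vs. 22 trap is easy to demonstrate outside any assembler. This is just a quick Python check of the base arithmetic, not tied to any particular assembler's syntax:

```python
# The digits "16" name different values depending on the assumed base.
assert int("16", 10) == 16   # decimal reading
assert int("16", 16) == 22   # hexadecimal reading: 0x16 = 1*16 + 6 = 22

# "3F" is only valid as hex, so there is no such ambiguity for it.
assert int("3F", 16) == 63
```

Any literal made up only of the digits 0-9 is ambiguous in this way, which is exactly why a base marker matters.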
Yes, there are strict rules, and they should be spelled out in the documentation of your assembler (usually in a section named "Numeric literals"). To be honest, I've never encountered an assembler that defaults to hex; pretty much all of them default to decimal. The syntax can be all over the map; the most common notations for hex are [0]dddh and 0xddd, but sometimes you can also have h'ddd, $ddd or 16_ddd.
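To make the notations above concrete, here is a small sketch of a literal recognizer in Python. The function name `parse_literal` and the pattern set are hypothetical; real assemblers each accept only one or two of these spellings, and this merely shows their shapes side by side, with decimal as the default base:

```python
import re

# Hypothetical pattern set covering the hex spellings listed above.
HEX_PATTERNS = [
    re.compile(r"^0?([0-9A-Fa-f]+)[hH]$"),  # 3FH / 0FFH (suffix style)
    re.compile(r"^0x([0-9A-Fa-f]+)$"),      # 0x3F (C style)
    re.compile(r"^h'([0-9A-Fa-f]+)$"),      # h'3F
    re.compile(r"^\$([0-9A-Fa-f]+)$"),      # $3F (Motorola style)
    re.compile(r"^16_([0-9A-Fa-f]+)$"),     # 16_3F (explicit-base style)
]

def parse_literal(text):
    """Return the value of a numeric literal: hex if marked, else decimal."""
    for pattern in HEX_PATTERNS:
        match = pattern.match(text)
        if match:
            return int(match.group(1), 16)
    return int(text, 10)  # unmarked literals default to decimal

# The same digits give different values with and without the marker:
assert parse_literal("16") == 16    # decimal by default
assert parse_literal("16H") == 22   # hex once suffixed
assert parse_literal("0x3F") == 63
assert parse_literal("$3F") == 63
```

The suffix styles also explain the optional leading 0 in [0]dddh: a literal such as FFH would otherwise start with a letter and look like an identifier, so those assemblers require writing 0FFH.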