Tags: assembly, hex, decimal, intel, masm

Converting hex digits to corresponding decimal digits


So I have this question on a homework and I don't know if I understand it correctly. It says: *Write Assembly code to convert a packed BCD byte in AL to binary. Example: suppose AL = 35H, which represents the decimal number 35. It should be converted to 00100011 = 23H = 2*16+3 = 35d.* From what I understand, it needs me to convert the digits in the hex number to digits of a decimal number, that is, 35H to 35d. Can someone confirm that this is what it's asking? And if so, can someone help me with an algorithm to do it?


Solution

  • As @Michael commented, you simply need to calculate (al >> 4) * 10 + (al & 0Fh).

    You can apparently assume that the input is proper BCD with both nibbles of the byte between 0 and 9, storing separate decimal digits.

    Note that it isn't an arbitrary hex string, and it's not even stored with the digits in ASCII. So you don't have to do anything to handle the fact that ASCII '0'..'9' has a gap before 'A'..'F', as you would if you were actually treating an ASCII digit-string as non-normalized base 10 with digit values of 0..15 in place values of 10^n.

    You just have packed BCD. And yes, a base-16 interpretation of those bits would give you 0x35.

    But you want a binary integer that represents the same value as treating those nibbles as decimal digits, with place value of 10^n. Thus you split the nibbles and multiply the high one by 10.
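
    A straightforward way to code that formula by hand might look like the following (a minimal sketch; the register choices CL/CH and the shift-and-add multiply are my own, not taken from the answer above):

        ; input:  AL = packed BCD byte, e.g. 35h
        ; output: AL = binary value,    e.g. 23h (= 35 decimal)
        mov     cl, al
        shr     cl, 4           ; CL = high decimal digit = AL >> 4
        and     al, 0Fh         ; AL = low  decimal digit = AL & 0Fh
        mov     ch, cl
        shl     ch, 3           ; CH = high * 8
        add     cl, cl          ; CL = high * 2
        add     cl, ch          ; CL = high*8 + high*2 = high * 10
        add     al, cl          ; AL = high*10 + low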


    If you don't care about performance, you can use x86's legacy ASCII/BCD instructions. They're available in 16-bit and 32-bit modes. (x86-64 removed the BCD instructions.) The only advantage here is code-size: two 2-byte instructions can get the job done slowly.

        aam 16          ; split AL into AH = AL/16; AL = AL%16
        aad             ; AL = AH*10 + AL;  AH=0
    

    aam uses division so it can work for any immediate; using it with a power of 2 is very inefficient for performance, only good for code-size. The intended use is splitting a binary integer into 2 unpacked decimal digits (ASCII Adjust AX After Multiply), with the default immediate divisor of 10.

    The default immediate for AAD is also 10, which we want.

    https://www.felixcloutier.com/x86/aam and https://www.felixcloutier.com/x86/aad.

    There is no dad instruction that does the same multiplication on packed-BCD in AL, only DAA and DAS (after addition / subtraction).


    Doing it efficiently (for performance, not code-size) is left as an exercise for the reader. (Or for a compiler; GCC does a nice job using 2x LEA for the multiply part: https://godbolt.org/z/K-tjXv)
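
    For illustration only, the 2x-LEA multiply by 10 can be applied by hand along these lines in 32-bit code (a sketch with my own register choices, not copied from the compiler output at that link):

        ; input:  AL (low byte of EAX) = packed BCD byte
        ; output: EAX = binary value
        movzx   edx, al
        shr     edx, 4                  ; EDX = high decimal digit
        and     eax, 0Fh                ; EAX = low decimal digit
        lea     edx, [edx + edx*4]      ; EDX = high * 5
        lea     eax, [eax + edx*2]      ; EAX = low + high*10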