
How is BCD format used in programming?


Excuse me if you find this silly, but I am not able to understand the purpose of BCD and how it is used in programming.

What I understand so far is that BCD is an internal way of representing data in a byte or a nibble. In a simple C program, we declare variables as int, float, etc., and manipulate them as we like. Internally they may be represented as binary numbers or as BCD.
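For example, the decimal value 1234 stored as an ordinary binary integer is 0x04D2, while the same value in packed BCD is 0x1234, one decimal digit per 4-bit nibble. A small C sketch of the difference (the variable names are just illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t n_binary = 1234;   /* ordinary binary integer: 0x04D2 */
        uint16_t n_bcd    = 0x1234; /* packed BCD: one decimal digit per nibble */

        printf("binary: 0x%04X\n", n_binary); /* prints binary: 0x04D2 */
        printf("BCD:    0x%04X\n", n_bcd);    /* prints BCD:    0x1234 */
        return 0;
    }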

Modern calculators use only the BCD format. In Intel chips, the FBLD instruction is used to move a packed BCD value into an FPU register.

I have seen a few exercises in assembly language programming that convert BCD to binary and the other way round.
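Such an exercise is easy to express in C as well. Here is a minimal sketch, assuming packed BCD with one decimal digit per nibble; the function names bcd_to_binary and binary_to_bcd are my own, not standard library calls:

    #include <stdio.h>
    #include <stdint.h>

    /* Convert a packed BCD value (one decimal digit per nibble) to binary. */
    uint32_t bcd_to_binary(uint32_t bcd) {
        uint32_t result = 0, factor = 1;
        while (bcd != 0) {
            result += (bcd & 0xF) * factor; /* take the lowest decimal digit */
            bcd >>= 4;
            factor *= 10;
        }
        return result;
    }

    /* Convert a binary value to packed BCD. */
    uint32_t binary_to_bcd(uint32_t bin) {
        uint32_t result = 0;
        int shift = 0;
        while (bin != 0) {
            result |= (bin % 10) << shift; /* place one decimal digit per nibble */
            bin /= 10;
            shift += 4;
        }
        return result;
    }

    int main(void) {
        printf("0x1234 as a binary value: %u\n", (unsigned)bcd_to_binary(0x1234)); /* 1234 */
        printf("1234 as packed BCD: 0x%X\n", (unsigned)binary_to_bcd(1234));       /* 0x1234 */
        return 0;
    }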

How does it help a programmer to know that?

What is the usefulness of the knowledge of BCD to a high-level language programmer?


Solution

  • Intel chips do not use BCD representation internally. They use binary representation, including 2's complement for negative integers. However, they have instructions such as AAA, AAS, AAM, AAD, DAA and DAS which adjust the binary results of addition, subtraction, multiplication and division on unsigned integers into unpacked or packed BCD results (AAD is the reverse case: it converts an unpacked BCD value into binary before a division). Therefore, Intel chips can produce BCD results for unsigned integers INDIRECTLY. These instructions take their operand implicitly from the AL (or AX) register and place the result of the adjustment there as well; a C sketch of what DAA does is shown below. There are also advanced BCD-handling FPU instructions: FBLD loads an 80-bit packed signed BCD value from memory into an FPU register, converting it automatically into binary form, and FBSTP converts the binary result back into the BCD format when storing it to memory.
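To make the adjustment concrete, here is a minimal C sketch of the correction DAA applies after an ordinary ADD of two packed BCD bytes. The function bcd_add and the one-byte (two-digit) operand size are illustrative assumptions; the real instruction works implicitly on AL and the AF/CF flags:

    #include <stdio.h>
    #include <stdint.h>

    /* Add two packed BCD bytes (two decimal digits each) and adjust the
       binary sum the way the x86 DAA instruction does. Returns the packed
       BCD sum; *carry is set if the decimal result exceeds 99. */
    uint8_t bcd_add(uint8_t a, uint8_t b, int *carry) {
        unsigned sum = a + b;            /* plain binary addition, like ADD */
        if ((a & 0x0F) + (b & 0x0F) > 9) /* low nibble overflowed decimal 9 */
            sum += 0x06;                 /* push the excess into the high nibble */
        if (sum > 0x99)                  /* high nibble overflowed decimal 9 */
            sum += 0x60;
        *carry = sum > 0xFF;
        return (uint8_t)sum;
    }

    int main(void) {
        int carry;
        uint8_t r = bcd_add(0x38, 0x45, &carry);         /* 38 + 45 in packed BCD */
        printf("result: 0x%02X, carry: %d\n", r, carry); /* result: 0x83, carry: 0 */
        return 0;
    }

The point of the adjustment is that a plain binary ADD of 0x38 and 0x45 yields 0x7D, whose low nibble 0xD is not a valid decimal digit; adding 6 pushes the excess into the high nibble and restores the one-digit-per-nibble invariant, giving the correct packed BCD result 0x83.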