For example, the BCD code of the decimal number 395 is 0011 1001 0101, while the direct binary conversion is 110001011. Which of these two values can a computer use for arithmetic operations?
Most CPUs only have efficient hardware support for binary numbers (and usually only for power-of-2 sizes in modern CPUs, so you'd want that padded to at least 16 bits).
You can implement packed BCD (1 digit per nibble), though, using shift/mask and compare operations to check for a digit becoming >9, and manually propagating carry to the next digit (the high nibble of this byte, or the next byte), i.e. as part of addition/subtraction, similar to extended-precision binary add/sub with carry-out.
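For concreteness, here's a minimal C sketch of that digit-at-a-time approach (the function name and byte layout are my own, not a standard API): add two packed-BCD bytes by extracting each nibble with shift/mask, checking for a digit going above 9, and propagating the carry.

```
#include <stdint.h>

/* Sketch: add two packed-BCD bytes (two decimal digits each) by
 * extracting nibbles with shift/mask.  A digit that exceeds 9 is
 * adjusted by subtracting 10 and carrying into the next digit.
 * Returns the packed-BCD sum; *carry_out gets the carry into the
 * next (more significant) byte. */
static uint8_t bcd_add_packed(uint8_t a, uint8_t b, int *carry_out)
{
    unsigned lo = (a & 0x0F) + (b & 0x0F);      /* low digits */
    unsigned carry = lo > 9;
    if (carry) lo -= 10;                        /* digit became >9: adjust */

    unsigned hi = (a >> 4) + (b >> 4) + carry;  /* high digits + carry-in */
    carry = hi > 9;
    if (carry) hi -= 10;                        /* carry out to the next byte */

    *carry_out = (int)carry;
    return (uint8_t)(hi << 4 | lo);
}
```

For example, bcd_add_packed(0x95, 0x27, &c) returns 0x22 with c = 1, i.e. 95 + 27 = 122.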
Some CPUs even have a flag for carry-out from the low nibble, e.g. x86's AF. In 16 and 32-bit mode, x86 even has (slow) instructions to adjust the result after normal binary addition or subtraction of a byte containing two packed BCD digits: DAA / DAS, plus other instructions for unpacked BCD (1 digit per byte). All of these were removed in 64-bit mode, and are mostly slow and microcoded on current CPUs in 16 and 32-bit modes.
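As a rough C model of what such an adjust step does (a simplified sketch, not the exact DAA flag semantics): do a plain binary add of two packed-BCD bytes, detect the nibble carry the way AF would record it, then conditionally add 0x06 and 0x60 to fix up each digit.

```
#include <stdint.h>

/* Sketch: binary-add two packed-BCD bytes, then a DAA-style fixup.
 * half_carry models x86's AF (carry out of the low nibble); this is
 * a simplified model of the adjustment, not exact DAA semantics. */
static uint8_t bcd_add_adjust(uint8_t a, uint8_t b, int *carry_out)
{
    unsigned sum = (unsigned)a + b;                     /* plain binary add */
    int half_carry = ((a & 0x0F) + (b & 0x0F)) > 0x0F;  /* like AF */

    if ((sum & 0x0F) > 9 || half_carry)
        sum += 0x06;            /* push the low digit back into 0-9 */
    if ((sum & 0x1F0) > 0x90)
        sum += 0x60;            /* same for the high digit */

    *carry_out = sum >> 8;      /* carry into the next byte, like CF */
    return (uint8_t)sum;
}
```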
Some other ISAs have similar support for assisting computation of packed BCD, like a nibble-carry flag.
Unpacked BCD (1 digit per byte) is easier to implement, because you can compare a whole byte for >9 without extracting the low 4 bits first, to generate a carry-in signal for the next digit. (Even in packed BCD, the high 4 bits can be checked for >= (10<<4) without unpacking.)
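A sketch of unpacked-BCD addition under those assumptions (the little-endian digit order and the function name are mine): the digit check is just a whole-byte compare, with no masking needed.

```
#include <stdint.h>
#include <stddef.h>

/* Sketch: add two unpacked-BCD numbers (one digit 0-9 per byte,
 * least significant digit first).  The whole-byte compare >9 is the
 * digit check; no nibble extraction needed.  Returns the carry out
 * of the most significant digit. */
static int bcd_add_unpacked(uint8_t *dst, const uint8_t *a,
                            const uint8_t *b, size_t ndigits)
{
    int carry = 0;
    for (size_t i = 0; i < ndigits; i++) {
        uint8_t d = a[i] + b[i] + carry;   /* at most 9 + 9 + 1 = 19 */
        carry = d > 9;                     /* compare the whole byte */
        dst[i] = carry ? d - 10 : d;
    }
    return carry;
}
```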
But any normal architecture with AND/OR and right/left-shift instructions makes it pretty easy to extract nibbles and do the >9 check, adjust, and carry fully manually. If you're doing multiple operations, you might be better off unpacking to bytes temporarily and repacking at the end (as in the sketch below), or even converting to binary.
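A sketch of that unpack/repack step, with the same layout assumptions as before (two digits per byte, low digit in the low nibble):

```
#include <stdint.h>
#include <stddef.h>

/* Sketch: split packed BCD into one digit per byte, and rebuild it.
 * Cheap compared to doing shift/mask on every digit of every
 * intermediate operation. */
static void bcd_unpack(uint8_t *digits, const uint8_t *packed, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++) {
        digits[2*i]     = packed[i] & 0x0F;   /* low digit  */
        digits[2*i + 1] = packed[i] >> 4;     /* high digit */
    }
}

static void bcd_pack(uint8_t *packed, const uint8_t *digits, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++)
        packed[i] = (uint8_t)(digits[2*i + 1] << 4 | digits[2*i]);
}
```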
But converting back from binary to BCD is expensive: you have to divide by 10 for each non-zero digit of the result, producing one digit at a time. Division by a constant 10 is normally done with a multiply and shift (see Why does GCC use multiplication by a strange number in implementing integer division?), and multiplication is slow on low-end CPUs. And if you have more binary bits than you can feed to a single multiply or divide, it's significantly more expensive per digit until the number gets small enough.
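For example, a straightforward binary-to-packed-BCD loop looks like this, costing one divide (or multiply + shift after constant propagation) per digit:

```
#include <stdint.h>

/* Sketch: binary to packed BCD, one decimal digit per iteration.
 * Each x % 10 and x / 10 pair is what the compiler turns into a
 * multiply + shift; that per-digit cost is the expensive part. */
static uint32_t bin_to_packed_bcd(uint32_t x)   /* x <= 99999999 */
{
    uint32_t bcd = 0;
    int shift = 0;
    while (x) {
        bcd |= (x % 10) << shift;   /* next digit, least significant first */
        x /= 10;
        shift += 4;
    }
    return bcd;
}
```

bin_to_packed_bcd(395) returns 0x395, i.e. exactly the 0011 1001 0101 pattern from the question.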