Tags: assembly, x86, cpu-architecture, data-format, bcd

Assembler: why does BCD exist?


I know BCD is a more intuitive data type if you don't know binary, but I don't understand why this encoding is used. It doesn't seem to make much sense, since it wastes part of each 4-bit group (the values above 9 are never used).

Also, I think x86 only supports BCD addition and subtraction directly (you can convert BCD values via the FPU).

Is it possible that this comes from older machines, or from other architectures?


Solution

  • I think BCD is useful for many things, for the reasons given above. One thing that is sort of obvious but seems to have been overlooked is that it allows for instructions that go from binary to BCD and the opposite. That can be very useful when converting an ASCII number to binary for arithmetic.

    One of the posters was wrong about numbers often being stored in ASCII; in fact, a lot of number storage is done in binary because it is more efficient, and converting ASCII to binary is a little complicated. BCD is a sort of go-between for ASCII and binary: if there were bcdtoint and inttobcd instructions, such conversions would be really easy. All ASCII values must be converted to binary for arithmetic, so BCD is actually useful in that ASCII-to-binary conversion.
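    The go-between role described above can be sketched in a few lines of C. An ASCII digit differs from an unpacked BCD digit only by the constant bias 0x30 ('0'), so stripping that bias yields BCD digits, which a multiply-accumulate loop then turns into binary. The function name here is illustrative, not a standard routine:

    ```c
    #include <stdio.h>

    /* Convert an ASCII decimal string to binary via BCD digits.
       Subtracting '0' turns each ASCII character into a BCD digit
       (0..9); accumulating with *10 produces the binary value. */
    unsigned ascii_to_binary(const char *s)
    {
        unsigned value = 0;
        while (*s) {
            unsigned bcd_digit = *s - '0';  /* ASCII '0'..'9' -> 0..9 */
            value = value * 10 + bcd_digit; /* fold into binary */
            s++;
        }
        return value;
    }

    int main(void)
    {
        printf("%u\n", ascii_to_binary("1234")); /* prints 1234 */
        return 0;
    }
    ```

    A hypothetical bcdtoint instruction would collapse the loop above into a single operation, which is the convenience the answer is pointing at.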