Differentiate between the character code representation of a decimal digit and its pure binary representation
I study computer science, and this is a concept I need to know for the exams, but I am not sure I fully understand it.
Would this just be the ASCII code equivalent?
Meaning:
2 has ASCII code 50
6 has ASCII code 54
1 has ASCII code 49
So the character code representation of 261 is 50, 54, 49?
And the pure binary representation is just the binary conversion of the number 261?
So 100000101?
ASCII assigns the digit characters 0 to 9 the decimal codes 48 to 57.
So there is one binary representation for the character and another for the decimal value itself.
The binary representation of the two-character string "46" is: 00110100 00110110. The character 4 is code 52 in ASCII, hence you get 00110100, while the character 6 is code 54, for which you get 00110110.
Meanwhile, the decimal number 46 stored in a 16-bit word has the representation: 00000000 00101110.
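You can sanity-check this yourself. A short Python sketch (not part of the original answer; `ord` gives a character's code and `format` renders a number in binary):

```python
text = "46"

# Character code representation: one ASCII code per character.
char_codes = [ord(c) for c in text]                    # [52, 54]
char_bits = [format(code, "08b") for code in char_codes]
print(char_bits)                                       # ['00110100', '00110110']

# Pure binary representation: the number 46 in a 16-bit word.
print(format(46, "016b"))                              # 0000000000101110
```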
For the character string "261", you would take the ASCII code for each of 2, 6 and 1:
2: 50
6: 54
1: 49
So for 50, 54, 49 you get: 00110010 00110110 00110001
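The same check works for the "261" example from the question, again as a small Python sketch (my own illustration, not from the answer):

```python
text = "261"

# Three characters, three ASCII codes, three bytes.
print([format(ord(c), "08b") for c in text])
# ['00110010', '00110110', '00110001']

# One number, one binary value.
print(format(261, "b"))  # 100000101
```

Note how the two outputs share no obvious resemblance: the character encoding depends on the digits you typed, while the pure binary form depends only on the numeric value.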