I'm trying to understand how binary arrays work. Here is a CAPL example that converts a decimal number into a binary array:
byte binaryArray[16];
binary ( int number )
{
  int index;
  index = 0;
  for ( ; number != 0; )
  {
    binaryArray[index++] = number % 10;
    number = number / 10;
  }
}
If the input is 1234, the output is apparently 11010010
If I'm not mistaken, the for loop runs 4 times:
1234 mod 10 -> 4
123 mod 10 -> 3
12 mod 10 -> 2
1 mod 10 -> 1
If we weren't dealing with a binary array, it would look like this: { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }. But it is a binary array, and the "conversion" should happen here: binaryArray[index++] = number % 10;
(number is a 16-bit signed integer; binaryArray holds 8-bit unsigned bytes).
So how does one convert (by hand) an int to a byte?
I find it very hard to guess what your actual intent is, and the given example does not really fit what I think you want to do. But I will give it a try.
As you have already explained yourself, your example code repeatedly cuts off the least significant base-10 digit (i.e. the ones place) of the given integer "number" and stores the digits sequentially into a byte array.
If the input is 1234, the output is apparently 11010010
This statement is false. Currently, if the input for your given function is 1234, the "output" (i.e. binaryArray contents) is
binaryArray = { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
Furthermore, the actual binary representation (assuming "MSB0 first/left" and big-endian) of your input number as byte array is
{ 0b00000100, 0b11010010 }
Because of your (wrong) "apparently" statement and your final question, I guess what you really want to achieve, and what you're actually asking for, is serialization of an integer into a byte array. That first seems quite simple, but there are some pitfalls you can run into for multi-byte values, especially when you work together with others (e.g. endianness and bit-ordering).
Assuming you have a 16-bit integer, you could store the first 8 bits (the least significant byte) in binaryArray[0], then shift your input integer 8 bits to the right (since you already stored those bits), and finally store the remaining 8 bits in binaryArray[1].
Given your example input of 1234 you will end up with the array
binaryArray = { 210, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
which is equivalent to its "binary" representation:
binaryArray = { 0b11010010, 0b00000100, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000 }
Notice that this time the byte order (i.e. endianness) is reversed (little-endian), since we fill the array "bottom-up" while the input's binary representation is read "right-to-left".
But since you have this 16-cells byte array you might instead want to convert the integer "number" into an array, representing its binary format, e.g. binaryArray = { 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0 }.
You could achieve this easily by substituting the "modulo 10" / "divide by 10" with "modulo 2" / "divide by 2" in your code - at least for unsigned integers. For signed integers, it should also work that way™ but you may get negative values for the modulo (and of course for the division by 2, since the dividend is non-positive and the divisor is positive).
So, to make this work without thinking about whether a number is signed or not, just grab one single bit after the other and right-shift the input until it is 0, while decrementing your array's index, i.e. filling it "top-down" (since MSB0 is first/left).
byte binaryArray[16];
binary ( int number )
{
  int index;
  index = 15;
  for ( ; number != 0; )
  {
    binaryArray[index--] = number & 0x01;
    number = number >> 1;
  }
}
Side note: the runtime (counting each operation as a single instruction) is the same as with "modulo/divide by 2", since right-shifting by one equals dividing by 2 for non-negative values. Actually, it is even a bit better, since bitwise AND (&) is typically cheaper than modulo (%).
But keep track of bit-ordering and endianness for this kind of conversion.