I have a Java program where I'm trying to convert a byte array to a short.
byte arr[] = {-1,-1};
int value = ByteBuffer.wrap(arr).order(ByteOrder.LITTLE_ENDIAN).getShort();
The value I get from the above is -1, but its actual value should be 65535.
I decided to take a look at how the library converts the byte array to a short. Digging into the internals, I think the function doing the actual work is this one in java.nio.Bits; its implementation is like this:
private static short makeShort(byte var0, byte var1) {
    return (short)(var0 << 8 | var1 & 255);
}
The function takes care of the negative value of the 2nd byte but not of the 1st byte. I'm not sure, but shouldn't the return statement of the above function be like this?
private static short makeShort(byte var0, byte var1) {
    return (short)((var0 & 0xFF) << 8 | var1 & 255);
}
Java version: 8
Thank You.
short is signed. Its maximum value is 32767 and its minimum is -32768. ByteBuffer.getShort cannot return a short value of 65535, no matter how you change its implementation, because 65535 is not a valid value for a short.
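You can see the wrap-around directly; a minimal sketch (the variable name is mine):

short s = (short) 65535; // 65535 is 0xFFFF; the cast keeps only the low 16 bits
System.out.println(s);   // prints -1, because 0xFFFF read as a signed short is -1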
The implementation of getShort is correct. The bits of the two bytes are both 1111 1111 (= -1), and the resulting short is 1111 1111 1111 1111 (also = -1).
You do assign what getShort returns to an int variable value, and an int can store 65535. However, the implicit conversion from short to int sign-extends the bits instead of zero-extending them, so you still get a value of -1.
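To illustrate the difference, a minimal sketch (names are mine):

short s = -1;                  // bit pattern 1111 1111 1111 1111
int signExtended = s;          // implicit widening sign-extends: still -1 (0xFFFFFFFF)
int zeroExtended = s & 0xFFFF; // masking zero-extends manually: 65535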
You can use Short.toUnsignedInt to treat the signed short as unsigned and get an int.
int value = Short.toUnsignedInt(byteBuffer.getShort());
// value is 65535
And the implementation of toUnsignedInt is probably what you expected:
public static int toUnsignedInt(short x) {
    return ((int) x) & 0xffff;
}
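Putting it all together, here is a minimal, self-contained version of your snippet (the class name Demo is arbitrary):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Demo {
    public static void main(String[] args) {
        byte[] arr = {-1, -1};
        // getShort returns the signed short -1 (bit pattern 0xFFFF)
        short signed = ByteBuffer.wrap(arr).order(ByteOrder.LITTLE_ENDIAN).getShort();
        // toUnsignedInt zero-extends it into an int
        int unsigned = Short.toUnsignedInt(signed);
        System.out.println(signed);   // -1
        System.out.println(unsigned); // 65535
    }
}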
Side note: the implementation of makeShort does var1 & 255 because | causes numeric promotion. var0 << 8 | var1 would convert var1 to an int by sign-extending it, which might add some extra 1s that shouldn't be there. Consider var0 = 0 and var1 = -1. The bit patterns look like this when both sides are converted to int:
var0 << 8: 0000 0000 0000 0000 0000 0000 0000 0000 (originally 0000 0000)
var1     : 1111 1111 1111 1111 1111 1111 1111 1111 (originally 1111 1111)
ORing them and cutting off the most significant 16 bits would give 16 '1's, instead of the expected 0000 0000 1111 1111.
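To see the promotion concretely, a small sketch (reusing the var0/var1 names from makeShort):

byte var0 = 0;                                  // high byte, bit pattern 0000 0000
byte var1 = -1;                                 // low byte, bit pattern 1111 1111
short wrong = (short) (var0 << 8 | var1);       // var1 sign-extends to 0xFFFFFFFF; result is -1
short right = (short) (var0 << 8 | var1 & 255); // masking keeps only the low 8 bits; result is 255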