I am retrieving a raw image from a camera. The image is 80 x 60 pixels in 4-bit grayscale, so each byte holds two pixels.
I retrieve the image as a byte array that is 2400 (1/2 * 80 * 60) bytes long. The next step is to convert the byte array into a Bitmap. I have already tried
BitmapFactory.decodeByteArray(bytes, 0, bytes.length)
but that didn't return a displayable image. I looked at this post and copied the code below into my Android application, but I got a "buffer not large enough for pixels" runtime error.
byte[] Src; //Comes from somewhere...
byte[] Bits = new byte[Src.length * 4]; //That's where the RGBA array goes.
for (int i = 0; i < Src.length; i++) {
    Bits[i * 4] =
        Bits[i * 4 + 1] =
        Bits[i * 4 + 2] = (byte) ~Src[i]; //Invert the source bits
    Bits[i * 4 + 3] = -1; //0xff, that's the alpha.
}
//Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(Width, Height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(Bits));
At the bottom of the thread, the original poster had the same error I currently have. However, his problem was fixed with the code pasted above. Does anyone have any suggestions on how I should convert the raw image or RGBA array into a Bitmap?
Thanks so much!
UPDATE:
I followed Geobits' suggestion, and this is my new code:
byte[] seperatedBytes = new byte[jpegBytes.length * 8];
for (int i = 0; i < jpegBytes.length; i++) {
    seperatedBytes[i * 8] = seperatedBytes[i * 8 + 1] = seperatedBytes[i * 8 + 2] = (byte) ((jpegBytes[i] >> 4) & 0x0F);
    seperatedBytes[i * 8 + 4] = seperatedBytes[i * 8 + 5] = seperatedBytes[i * 8 + 6] = (byte) (jpegBytes[i] & 0x0F);
    seperatedBytes[i * 8 + 3] = seperatedBytes[i * 8 + 7] = -1; //0xFF
}
Now, I am able to get a Bitmap using this command
Bitmap bm = BitmapFactory.decodeByteArray(seperatedBytes, 0, seperatedBytes.length);
but the Bitmap has a size of 0KB.
The image I am getting is a raw image from this camera. Unfortunately, retrieving a pre-compressed JPEG image is not an option because I need 4-bit grayscale.
If the image coming in is only 2400 bytes, that means there are two pixels per byte (4 bits each). You're only giving the byte buffer 2400 * 4 = 9600 bytes, when ARGB_8888 needs 4 bytes per pixel, or 60 * 80 * 4 = 19200.
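As a quick sanity check on the sizes (a sketch, assuming the 80 x 60 frame from the question):

int width = 80, height = 60;
int rawSize = (width * height) / 2;   // 4 bits per pixel -> 2400 bytes from the camera
int neededSize = width * height * 4;  // ARGB_8888, 4 bytes per pixel -> 19200 bytes
// Handing copyPixelsFromBuffer() anything smaller than neededSize is what
// triggers the "buffer not large enough for pixels" error.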
You need to split each incoming byte into an upper/lower nibble value, then apply that to the following 8 bytes (excluding alpha). You can see this answer for an example of how to split bytes.
Basically:

- Split each incoming byte i into two nibbles, ia and ib.
- Apply ia to outgoing bytes i*8 through (i*8)+2.
- Apply ib to outgoing bytes (i*8)+4 through (i*8)+6.
- Bytes (i*8)+3 and (i*8)+7 are alpha (0xFF).

Once you have the right-sized byte buffer, you should be able to use decodeByteArray() with no problems.
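For reference, here is a minimal sketch of that conversion, assuming an 80 x 60 frame. The method name toBitmap and the scaling of each nibble from 0-15 up to 0-255 are my own additions, not part of the answer above, and the RGBA buffer is handed to copyPixelsFromBuffer() (as in the question's first snippet) rather than decodeByteArray(), since decodeByteArray() expects compressed data such as JPEG or PNG.

import java.nio.ByteBuffer;
import android.graphics.Bitmap;

static Bitmap toBitmap(byte[] raw, int width, int height) {
    // Each raw byte holds two 4-bit pixels, so the RGBA output is raw.length * 8
    // bytes long (width * height * 4 in total, i.e. 19200 for 80 x 60).
    byte[] rgba = new byte[raw.length * 8];
    for (int i = 0; i < raw.length; i++) {
        int ia = (raw[i] >> 4) & 0x0F; // upper nibble -> first pixel
        int ib = raw[i] & 0x0F;        // lower nibble -> second pixel
        // Scale 0-15 up to 0-255 so the image isn't nearly black (my assumption;
        // the answer itself doesn't mention scaling).
        byte first  = (byte) (ia * 0x11);
        byte second = (byte) (ib * 0x11);
        // First pixel: i*8 through (i*8)+2 are R, G, B; (i*8)+3 is alpha.
        rgba[i * 8] = rgba[i * 8 + 1] = rgba[i * 8 + 2] = first;
        rgba[i * 8 + 3] = (byte) 0xFF;
        // Second pixel: (i*8)+4 through (i*8)+6 are R, G, B; (i*8)+7 is alpha.
        rgba[i * 8 + 4] = rgba[i * 8 + 5] = rgba[i * 8 + 6] = second;
        rgba[i * 8 + 7] = (byte) 0xFF;
    }
    Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bm.copyPixelsFromBuffer(ByteBuffer.wrap(rgba));
    return bm;
}

Calling toBitmap(cameraBytes, 80, 60) should then give a displayable 80 x 60 grayscale Bitmap.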