java · android · tensorflow · tensorflow-lite · firebase-mlkit

How do I convert a .png image to a Byte Array - Android, TFLITE application


I have made a TFLite model and am using Google's ML Kit to access it from my mobile application. I have run into trouble trying to get my data into the byte[1][299][299][3] format that I need to feed to my classifier.

I was trying to arrange a byte stream into that format, but I don't know what ordering it expects — left-to-right, top-to-bottom, R-G-B, etc.

Can anyone point me to some documentation I can read about parsing .png files?


Solution

  • You can convert your .png file to an Android Bitmap using BitmapFactory — decodeFile for a file path, or decodeStream for an InputStream — and then re-encode it into a byte array:

    // Decode the .png from disk into a Bitmap
    Bitmap bitmap = BitmapFactory.decodeFile(filePath);
    
    // Re-encode the Bitmap as a byte array. Note: these are
    // PNG-compressed bytes, not raw RGB pixel values.
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] image = stream.toByteArray();
    

    With your Bitmap, you can also read the color of a pixel at a given position with getPixel(x, y). The color is returned as a packed ARGB int, which you have to decompose into its R, G, B (and A) components. For that, see: https://developer.android.com/reference/android/graphics/Color
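    As a minimal sketch of that decomposition: a packed color int is laid out as 0xAARRGGBB, so the channels can be extracted with bit shifts (this is what Color.red/green/blue do). The loop below assumes row-major traversal (top-to-bottom, left-to-right) and R-G-B channel order, which is a common convention for image classifiers — confirm it against how your model was trained. PixelUnpacker and toInputTensor are hypothetical names for illustration.

    ```java
    public class PixelUnpacker {

        // Equivalent to android.graphics.Color.red/green/blue:
        // a packed color int is 0xAARRGGBB.
        static int red(int color)   { return (color >> 16) & 0xFF; }
        static int green(int color) { return (color >> 8) & 0xFF; }
        static int blue(int color)  { return color & 0xFF; }

        // pixels[y * width + x] holds the packed color at (x, y),
        // which is the layout Bitmap.getPixels() produces.
        static byte[][][][] toInputTensor(int[] pixels, int width, int height) {
            byte[][][][] input = new byte[1][height][width][3];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int c = pixels[y * width + x];
                    input[0][y][x][0] = (byte) red(c);
                    input[0][y][x][1] = (byte) green(c);
                    input[0][y][x][2] = (byte) blue(c);
                }
            }
            return input;
        }
    }
    ```

    On Android you would fill the pixels array from your (299×299-scaled) Bitmap with bitmap.getPixels(pixels, 0, width, 0, 0, width, height) rather than iterating getPixel(x, y) per pixel, which is much slower.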