I am trying to get the frame image for processing while using the new Android face detection Mobile Vision API.
So I created a custom Detector to intercept the Frame and tried to call getBitmap(), but it returns null, so I accessed the grayscale data of the frame instead. Is there a way to create a Bitmap from it, or a similar image holder class?
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import java.nio.ByteBuffer;

public class CustomFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    public CustomFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        // Byte array to Bitmap here
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
You have probably sorted this out already, but in case someone stumbles upon this question in the future, here's how I solved it:
As @pm0733464 points out, the default image format coming out of android.hardware.Camera is NV21, which is also what CameraSource uses.
This Stack Overflow answer provides the solution:
YuvImage yuvimage = new YuvImage(bytes, ImageFormat.NV21, w, h, null); // bytes, w and h as in the detect() method above
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the quality of the generated JPEG
byte[] jpegArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Although frame.getGrayscaleImageData() suggests that bitmap will be a grayscale version of the original image, that is not the case in my experience. In fact, the bitmap is identical to the one supplied to the SurfaceHolder natively.
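Putting the two snippets together, the detect() method from the question could build the bitmap roughly like this. This is only a sketch: the JPEG round-trip is lossy and allocates on every frame, so you may want to reuse buffers if you do this in production.

@Override
public SparseArray<Face> detect(Frame frame) {
    // Despite the method name, this buffer holds the full NV21 preview frame
    byte[] bytes = frame.getGrayscaleImageData().array();
    int w = frame.getMetadata().getWidth();
    int h = frame.getMetadata().getHeight();

    YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, w, h), 100, baos);
    byte[] jpegArray = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

    // Process the bitmap as needed, then delegate the actual face detection
    return mDelegate.detect(frame);
}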