Tags: android, performance, android-ndk, screen-capture, android-mediaprojection

Reliable screen capturing using Android Media Projection API


Our implementation using the Media Projection API works fine, but we often lose frames while writing the captured images to files, even though all of the I/O code runs on separate threads.

We want to do online image analysis on the captured frames, so we cannot use screenrecord or similar tools.

Is there a way to call the Media Projection methods from native code to get better performance?

We even tried skipping all I/O during capture and keeping everything in memory until the end, but we still miss frames at 30 fps. How can we avoid that?

    Image image = null;
    try {
        image = mImageReader.acquireLatestImage();
        if (image != null) {
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer buffer = planes[0].getBuffer();
            int pixelStride = planes[0].getPixelStride();
            int rowStride = planes[0].getRowStride();
            int rowPadding = rowStride - pixelStride * mWidth;
            Log.d(TAG, "pixelStride: " + pixelStride + " -- rowStride: " + rowStride
                    + " rowPadding: " + rowPadding);
            // Reuse the bitmap unless the buffer geometry changed.
            if (mBitmap == null || rowPadding != mRowPadding || pixelStride != mPixelStride) {
                mRowPadding = rowPadding;
                mPixelStride = pixelStride;
                mBitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight,
                        Bitmap.Config.ARGB_8888);
            }
            mBitmap.copyPixelsFromBuffer(buffer);
            // Downscale by 16 in each dimension before keeping the frame in RAM.
            Bitmap myBitmap = Bitmap.createScaledBitmap(mBitmap,
                    (mWidth + rowPadding / pixelStride) / 16, mHeight / 16, false);
            ramStorage.add(myBitmap);
        }
    } finally {
        if (image != null) {
            image.close(); // release the buffer back to the ImageReader
        }
    }

Solution

  • All of the code that does anything of consequence with Bitmap is implemented natively, so converting your existing code to use JNI is not going to make a difference. I'm guessing you're running up against memory bandwidth or CPU limitations (from copying and scaling the data for each frame) or just missing realtime deadlines because the scheduler let some other thread run.

    The best approach depends on the quality level you require and what the app is doing. If you record as a video stream, rather than capturing a series of independent frames, you will use far less memory and very little CPU (see the MediaCodec sketch below). Keeping the data in RAM is much more practical when you're only burning a few MB/sec at 30fps. This may not work well if your app is frequently changing every pixel on the screen, and your analysis code would have to be tolerant of macroblocking and quantization artifacts.

    You can trade off CPU for RAM by not scaling the bitmap before you save it. You could also avoid Bitmap entirely and perform your own copy + scale operation to move the ImageReader contents into RAM storage, doing both in a single pass; you'd want to write that copy-scale function in native code (a rough sketch follows after the first example below).
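
    As a concrete illustration of the video-stream route, here is a minimal sketch that feeds the VirtualDisplay into a hardware H.264 encoder instead of an ImageReader. It is a sketch under assumptions, not drop-in code: mMediaProjection, mWidth, mHeight, mDpi, mRecording, and ramStorage are stand-ins for your own fields, and error handling is omitted.

        import android.hardware.display.DisplayManager;
        import android.media.MediaCodec;
        import android.media.MediaCodecInfo;
        import android.media.MediaFormat;
        import android.view.Surface;
        import java.nio.ByteBuffer;

        // Configure a surface-input H.264 encoder; the display is composited
        // straight into the encoder's input surface, so no Bitmap copies occur.
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", mWidth, mHeight);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000); // assumed bitrate, a few MB/sec class
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface(); // replaces mImageReader.getSurface()
        encoder.start();

        mMediaProjection.createVirtualDisplay("capture", mWidth, mHeight, mDpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, inputSurface, null, null);

        // Drain loop (run on a background thread): each output buffer is one
        // compressed frame, small enough to accumulate in RAM.
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (mRecording) { // mRecording: assumed volatile stop flag
            int index = encoder.dequeueOutputBuffer(info, 10_000);
            if (index >= 0) {
                ByteBuffer encoded = encoder.getOutputBuffer(index);
                byte[] frame = new byte[info.size];
                encoded.get(frame);
                ramStorage.add(frame); // hypothetical store of compressed frames
                encoder.releaseOutputBuffer(index, false);
            }
        }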
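
    And here is a rough sketch of the single-pass copy + scale idea, shown in Java for readability; the inner loop is what you would port to native code. It assumes the RGBA_8888 layout the question's code implies and uses simple nearest-neighbor sampling; copyAndScale is a hypothetical helper name.

        import android.media.Image;
        import java.nio.ByteBuffer;

        // Sample every 16th pixel straight out of the ImageReader plane into an
        // int[] in RAM: one pass, no intermediate full-size Bitmap.
        static int[] copyAndScale(Image image, int width, int height) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            int pixelStride = plane.getPixelStride();
            int rowStride = plane.getRowStride();
            int outW = width / 16, outH = height / 16;
            int[] out = new int[outW * outH];
            for (int y = 0; y < outH; y++) {
                int rowBase = y * 16 * rowStride;
                for (int x = 0; x < outW; x++) {
                    int offset = rowBase + x * 16 * pixelStride;
                    int r = buffer.get(offset) & 0xff;      // RGBA byte order assumed
                    int g = buffer.get(offset + 1) & 0xff;
                    int b = buffer.get(offset + 2) & 0xff;
                    int a = buffer.get(offset + 3) & 0xff;
                    out[y * outW + x] = (a << 24) | (r << 16) | (g << 8) | b; // pack as ARGB_8888
                }
            }
            return out;
        }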