Tags: android, barcode-scanner, google-mlkit

How to read a barcode with ML Kit?


I've been following these guides: "Scan barcodes with ML Kit on Android" and "Image analysis" to implement a simple barcode scanner. This is what I've got so far:

class MainActivity : ComponentActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        registerForActivityResult(ActivityResultContracts.RequestPermission()) { permission ->
            Log.d("-- CAMERA PERMISSION --", permission.toString())
        }.launch(Manifest.permission.CAMERA)

        val options = BarcodeScannerOptions.Builder()
            .setBarcodeFormats(Barcode.FORMAT_EAN_13, Barcode.FORMAT_EAN_8, Barcode.FORMAT_QR_CODE)
            .build()

        val scanner = BarcodeScanning.getClient(options)

        val analyzer = ImageAnalysis.Builder().build().apply {
            setAnalyzer(Executors.newSingleThreadExecutor()) {
                @OptIn(ExperimentalGetImage::class)
                val mediaImage = it.image
                if (mediaImage != null) {
                    val image = InputImage.fromMediaImage(mediaImage, it.imageInfo.rotationDegrees)
                    val result = scanner.process(image)
                        .addOnSuccessListener { barcodes ->
                            for (barcode in barcodes) {
                                Log.d("Barcode Scanner:", barcode.toString())
                            }
                        }
                        .addOnFailureListener { e ->
                            e.printStackTrace()
                        }
                }
            }
        }

        setContent {
            MyShopTheme {
                Surface(
                    modifier = Modifier.fillMaxSize(),
                    color = MaterialTheme.colorScheme.background
                ) {
                    CameraPreview(analyzer)
                }
            }
        }
    }
}

@Composable
fun CameraPreview(analyzer: ImageAnalysis) {
    val lifecycleOwner = LocalLifecycleOwner.current
    AndroidView(factory = { context ->
        val previewView = PreviewView(context).apply {
            scaleType = PreviewView.ScaleType.FILL_CENTER
            layoutParams = ViewGroup.LayoutParams(
                ViewGroup.LayoutParams.MATCH_PARENT,
                ViewGroup.LayoutParams.MATCH_PARENT
            )
            implementationMode = PreviewView.ImplementationMode.COMPATIBLE
        }

        ProcessCameraProvider.getInstance(context).apply {
            addListener({
                val cameraProvider = this.get()
                val preview = androidx.camera.core.Preview.Builder().build().apply {
                    setSurfaceProvider(previewView.surfaceProvider)
                }

                try {
                    cameraProvider.unbindAll()
                    cameraProvider.bindToLifecycle(
                        lifecycleOwner,
                        CameraSelector.DEFAULT_BACK_CAMERA,
                        analyzer,
                        preview
                    )
                } catch (e: Exception) {
                    e.printStackTrace()
                }
            }, ContextCompat.getMainExecutor(context))
        }
        previewView
    })
}

What works: I can see the camera preview on the display. When I point the camera at a barcode, I expected to see a continuous output of:

Barcode Scanner: <whatever barcode.toString() yields>

But I get nothing. It seems like the image analysis is never triggered. Was my expectation wrong? Do I need to manually trigger the image analysis somehow? Or is there something wrong with my code?

Update: It seems that the image analysis runs only once, when the app launches. No barcode is detected because the camera has had no time yet to capture an image of a barcode, and thus there is no output.

The documentation linked above led me to believe that the camera would continuously feed images to the analyzer:

Operating Modes

When the application's analysis pipeline can't keep up with CameraX's frame rate requirements, CameraX can be configured to drop frames in one of the following ways:

non-blocking (default): In this mode, the executor always caches the latest image into an image buffer (similar to a queue with a depth of one) while the application analyzes the previous image. If CameraX receives a new image before the application finishes processing, the new image is saved to the same buffer, overwriting the previous image.

So why does the image analyzer look at only one image? How do I make it continuously analyze the images coming from the camera?
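
For what it's worth, the non-blocking mode described in the quote corresponds to ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST in CameraX, and it can be set explicitly when building the use case. A minimal sketch, purely to make the intent visible, since this strategy is already the default:

val analyzer = ImageAnalysis.Builder()
    // Non-blocking mode: while the analyzer is busy, only the latest frame is kept.
    // STRATEGY_KEEP_ONLY_LATEST is the default, so this line only documents the intent.
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()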


Solution

  • SudoKoach's comment did the trick:

        val analyzer = ImageAnalysis.Builder().build().apply {
            setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy ->
                @OptIn(ExperimentalGetImage::class)
                val mediaImage = imageProxy.image
                if (mediaImage != null) {
                    val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
                    val result = scanner.process(image)
                        .addOnSuccessListener { barcodes ->
                            for (barcode in barcodes) {
                                Log.d("Barcode Scanner:", barcode.toString())
                            }
                        }
                        .addOnFailureListener { e ->
                            e.printStackTrace()
                        }
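                        // The key change: closing the ImageProxy tells CameraX that this frame
                        // is done, so the next frame can be delivered to the analyzer.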
                        .addOnCompleteListener {
                            imageProxy.close()
                        }
                }
            }
        }
    

    There was indeed a note in the documentation, right under the code example, which I missed: the ImageProxy has to be closed once ML Kit is done with the frame; otherwise CameraX will not deliver any further images to the analyzer.
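
    As a follow-up: logging barcode.toString() is not very informative. The Barcode objects expose the decoded payload directly, so the success listener can log the actual value. A small sketch using the rawValue and format properties of ML Kit's Barcode, assuming the same scanner, image and imageProxy as above:

        scanner.process(image)
            .addOnSuccessListener { barcodes ->
                for (barcode in barcodes) {
                    // rawValue is the decoded string (e.g. the EAN-13 digits or the QR payload);
                    // format says which of the configured formats matched.
                    Log.d("Barcode Scanner:", "value=${barcode.rawValue} format=${barcode.format}")
                }
            }
            .addOnCompleteListener { imageProxy.close() }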