Tags: android, ocr, android-vision, firebase-mlkit

Migrate from MVA to MLK


For the past four months I have been working on a project using Mobile Vision for Android [MVA], which only requires Play Services and this Codelab tutorial. However, at the beginning of this month Google released a new version, the Machine Learning Kit [MLK], which comes:

with new capabilities.

and they:

strongly encourage us to try it out

My problem is that the new MLK is based on Firebase. That is to say, we have to use a Google developer account, with this Setup, and a lot of annoying things that strongly tie our project to Google (in my mind; tell me if I'm wrong).

My first question [answered by @Ian Barber] is: is there a way to use MLK without all this Firebase setup? Or to use it the way I use MVA: just add a dependency and that's all?
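For reference, the Firebase-based setup amounts to registering the app in the Firebase console, dropping the generated `google-services.json` into the app module, and adding Gradle entries along these lines (artifact and version numbers are from the ML Kit setup guide at the time of writing and may have changed since):

```groovy
// Project-level build.gradle — the Google Services plugin reads google-services.json
buildscript {
    dependencies {
        classpath 'com.google.gms:google-services:4.0.1'
    }
}

// App-level build.gradle — on-device vision features (including text recognition)
// live in a single artifact
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
}
apply plugin: 'com.google.gms.google-services'
```

That is more ceremony than MVA's single Play Services dependency, which is exactly the coupling the question is about.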

EDIT: My application was built on the [MVA] Codelab, which means I was able to detect text in a video stream (from the camera). All the optimisation of frame capture, processing, etc. was handled by multiple well-constructed threads. But there is currently no example of video processing with [MLK]. Implementing a camera source and preview looks almost impossible with MLK alone, without MVA's capabilities.

My second question (regarding the migration) is: how do we use CameraSource and CameraSourcePreview, as we did in MVA, to work on a camera source for text detection?


Solution

  • On the second part of your question:

how to use CameraSource and CameraSourcePreview, like we used in MVA, to work on a camera source for text detection?

Please take a look at the Android ML Kit Quickstart app. It contains sample code for using a camera source and preview with ML Kit.
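The core pattern in that quickstart is the same as in MVA: each camera frame is wrapped in an image object and handed to a detector. A condensed sketch of that pattern, using the ML Kit vision API names from the time of the launch (`frameData`, `frameWidth`, and `frameHeight` are placeholders for values your camera callback supplies; rotation handling and error cases are trimmed):

```java
// Wrap a raw NV21 camera frame with its metadata so ML Kit can interpret it.
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setWidth(frameWidth)
        .setHeight(frameHeight)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();
FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(frameData, metadata);

// Run the on-device text detector asynchronously; results arrive as
// blocks of recognized text, much like MVA's TextBlock results.
FirebaseVisionTextDetector detector =
        FirebaseVision.getInstance().getVisionTextDetector();
detector.detectInImage(image)
        .addOnSuccessListener(result -> {
            for (FirebaseVisionText.Block block : result.getBlocks()) {
                Log.d("OCR", block.getText());
            }
        })
        .addOnFailureListener(e -> Log.e("OCR", "Detection failed", e));
```

The quickstart's CameraSource/CameraSourcePreview classes do the threading and frame-throttling work that MVA used to do for you, and feed frames into exactly this kind of detector call.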