Tags: opencv, flutter, firebase-mlkit

Flutter real-time face detection


I am currently developing an app that requires real-time face detection. Right now I have the ML Kit library in the app and I am using the Firebase face detector. At the moment, it produces an error every time I try to detect a face from a file:

DynamiteModule(13840): Local module descriptor class for com.google.android.gms.vision.dynamite.face not found.

As for the real-time part, I tried using a RepaintBoundary in Flutter to take a screenshot of the camera widget (almost) every frame and convert it into a binary file for face detection. But for some reason, Flutter crashed every time I tried to screenshot the camera widget, even though it worked for other widgets.

After coming across both of these problems and spending quite a while trying to solve them, I've been thinking about just doing the camera part of the app in Android/iOS native code (I would do this with OpenCV so that I can have real-time detection). Is there a way I could use platform channels to implement a camera view in Kotlin and Swift and import it into a Flutter widget? Or is there another, easier way to implement this?


Solution

  • For real-time access to the camera image stream: as I answered in another question, How to access camera frames in flutter quickly, you want to use CameraController#startImageStream:

    import 'package:camera/camera.dart';
    import 'package:flutter/foundation.dart';
    import 'package:flutter/material.dart';
    
    void main() => runApp(MaterialApp(home: _MyHomePage()));
    
    class _MyHomePage extends StatefulWidget {
      @override
      _MyHomePageState createState() => _MyHomePageState();
    }
    
    class _MyHomePageState extends State<_MyHomePage> {
      dynamic _scanResults;
      CameraController _camera;
    
      bool _isDetecting = false;
      CameraLensDirection _direction = CameraLensDirection.back;
    
      @override
      void initState() {
        super.initState();
        _initializeCamera();
      }
    
      Future<CameraDescription> _getCamera(CameraLensDirection dir) async {
        final List<CameraDescription> cameras = await availableCameras();
        return cameras.firstWhere(
          (CameraDescription camera) => camera.lensDirection == dir,
        );
      }
    
      void _initializeCamera() async {
        _camera = CameraController(
          await _getCamera(_direction),
          // A lower resolution keeps per-frame processing cheap.
          defaultTargetPlatform == TargetPlatform.iOS
              ? ResolutionPreset.low
              : ResolutionPreset.medium,
        );
        await _camera.initialize();
        if (!mounted) return;
        setState(() {}); // Rebuild so the preview shows once initialized.
        _camera.startImageStream((CameraImage image) {
          // Drop incoming frames while a detection is still in flight.
          if (_isDetecting) return;
          _isDetecting = true;
          try {
            // await doOpenCVDetectionHere(image)
          } catch (e) {
            // await handleException(e)
          } finally {
            _isDetecting = false;
          }
        });
      }
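
      // Stop the image stream and release the camera when this widget
      // is removed from the tree.
      @override
      void dispose() {
        _camera?.dispose();
        super.dispose();
      }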

      @override
      Widget build(BuildContext context) {
        // Show the live preview once the controller is ready,
        // and a placeholder until then.
        if (_camera == null || !_camera.value.isInitialized) {
          return Container();
        }
        return CameraPreview(_camera);
      }
    }
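
  • If you then want to hand each frame to OpenCV on the native side, a MethodChannel is one option: call it in place of the doOpenCVDetectionHere placeholder above. The sketch below is Dart-side only and makes assumptions: the channel name opencv_face_detection and the detectFaces method are hypothetical and must be implemented by your Kotlin/Swift host code.

    import 'dart:typed_data';

    import 'package:camera/camera.dart';
    import 'package:flutter/foundation.dart';
    import 'package:flutter/services.dart';

    // Hypothetical channel; the Kotlin/Swift side must register a
    // MethodChannel with the same name and handle 'detectFaces'.
    const MethodChannel _detector = MethodChannel('opencv_face_detection');

    Future<dynamic> detectFacesNative(CameraImage image) {
      // Concatenate the image planes (Y/U/V on Android) into a single
      // buffer so the native side can reassemble the frame.
      final WriteBuffer allBytes = WriteBuffer();
      for (Plane plane in image.planes) {
        allBytes.putUint8List(plane.bytes);
      }
      final ByteData data = allBytes.done();
      final Uint8List bytes =
          data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes);

      return _detector.invokeMethod('detectFaces', <String, dynamic>{
        'bytes': bytes,
        'width': image.width,
        'height': image.height,
      });
    }

    As for embedding a whole native camera view instead, platform views (AndroidView on Android, UiKitView on iOS) are the mechanism for that, but startImageStream keeps everything in one Flutter camera preview and is usually the simpler route.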