Tags: tensorflow, object-detection, google-mlkit, react-native-vision-camera

No objects detected with Google MLKit on iOS


I am new to Google ML Kit and want to detect bills/receipts on Android and iOS. I am using Object Detection with this model.

Detection is done by Google ML Kit, and the results are then interpreted by react-native-vision-camera.

On Android, in Java, I have no problem; the bills are detected correctly:

[Image: bills detected on Android]

On iOS, with identical code (but in Objective-C instead of Java), I never get an invoice detected:

[Image: no detections on iOS]

#import <Foundation/Foundation.h>
#import <VisionCamera/FrameProcessorPlugin.h>
#import <VisionCamera/Frame.h>
#import <MLKit.h>

@interface VisionScanObjectsFrameProcessorPlugin : NSObject
+ (MLKObjectDetector*) objectDetector;
@end

@implementation VisionScanObjectsFrameProcessorPlugin

+ (MLKObjectDetector*) objectDetector {
  static MLKObjectDetector* objectDetector = nil;
  if (objectDetector == nil) {
    NSString *path = [[NSBundle mainBundle] pathForResource:@"lite-model_object_detection_mobile_object_labeler_v1_1" ofType:@"tflite"];
    MLKLocalModel *localModel = [[MLKLocalModel alloc] initWithPath:path];

    MLKCustomObjectDetectorOptions *options = [[MLKCustomObjectDetectorOptions alloc] initWithLocalModel:localModel];
    options.detectorMode = MLKObjectDetectorModeSingleImage;
    options.shouldEnableClassification = YES;
    options.classificationConfidenceThreshold = @(0.5);
    options.maxPerObjectLabelCount = 3;

    objectDetector = [MLKObjectDetector objectDetectorWithOptions:options];
  }

  return objectDetector;
}

static inline id scanObjects(Frame* frame, NSArray* arguments) {
  MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:frame.buffer];
  image.orientation = frame.orientation; // <-- TODO: is mirrored?

  NSError *error = nil;
  NSArray<MLKObject*>* objects = [[VisionScanObjectsFrameProcessorPlugin objectDetector] resultsInImage:image error:&error];
  if (error != nil) {
    NSLog(@"Object detection failed: %@", error);
    return @[];
  }

  NSLog(@"Objects detected: %lu", (unsigned long)objects.count);

  NSMutableArray* results = [NSMutableArray arrayWithCapacity:objects.count];
  for (MLKObject* object in objects) {
    NSMutableArray* labels = [NSMutableArray arrayWithCapacity:object.labels.count];

    for (MLKObjectLabel* label in object.labels) {
      // Keep only the label indices of interest for this model.
      if (label.index == 122 || label.index == 188 || label.index == 288 ||
          label.index == 325 || label.index == 357 || label.index == 370 ||
          label.index == 480 || label.index == 510 || label.index == 551) {
        [labels addObject:@{
          @"index": [NSNumber numberWithFloat:label.index],
          @"label": label.text,
          @"confidence": [NSNumber numberWithFloat:label.confidence]
        }];
      }
    }

    if (labels.count != 0) {
      [results addObject:@{
        @"width": [NSNumber numberWithFloat:object.frame.size.width],
        @"height": [NSNumber numberWithFloat:object.frame.size.height],
        @"top": [NSNumber numberWithFloat:object.frame.origin.y],
        @"left": [NSNumber numberWithFloat:object.frame.origin.x],
        @"frameRotation": [NSNumber numberWithFloat:frame.orientation],
        @"labels": labels
      }];
    }
  }

  return results;
}

VISION_EXPORT_FRAME_PROCESSOR(scanObjects)

@end

I really think this code is correct, because it no longer crashes (it did before I got it working ^^), but a document is never detected. :/

NSLog(@"Object detected : %ld", objects.count); almost always return 0. Exceptionally it will return 1 on detect my computer keyboard but this is very very very rare.

I've tried a lot of things over the last 4 days (a different model, asynchronous detection, resizing before detection, etc.), but the result is still the same. :/


Solution

  • Thanks to Jaroslaw K.'s suggestion, I had to reduce the frame size to make it work.

    This was recommended in the documentation of my model (https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2):

    For this module, the size of the input image is flexible, but it would be best to match the model training input, which is height x width = 224 x 224 pixels for this model. The input images are expected to have color values in the range [0,1], following the common image input conventions.
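
    Here is the Swift helper I used to resize the frame so that it fits within 224 x 224: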

    public static func resizeFrameToUiimage(frame: Frame) -> UIImage! {
      let targetSize = CGSize(width: 224.0, height: 224.0)

      // Convert the camera frame's pixel buffer into a UIImage.
      let imageBuffer = CMSampleBufferGetImageBuffer(frame.buffer)!
      let ciimage = CIImage(cvPixelBuffer: imageBuffer)

      let context = CIContext(options: nil)
      let cgImage = context.createCGImage(ciimage, from: ciimage.extent)!
      let uiimage = UIImage(cgImage: cgImage)

      // Scale factor that fits the image inside 224 x 224 while preserving the aspect ratio.
      let widthRatio  = targetSize.width  / uiimage.size.width
      let heightRatio = targetSize.height / uiimage.size.height

      var newSize: CGSize
      if widthRatio > heightRatio {
        newSize = CGSize(width: uiimage.size.width * heightRatio, height: uiimage.size.height * heightRatio)
      } else {
        newSize = CGSize(width: uiimage.size.width * widthRatio, height: uiimage.size.height * widthRatio)
      }

      // Redraw at the reduced size (scale 1.0 so the pixel dimensions match newSize exactly).
      let rect = CGRect(origin: .zero, size: newSize)

      UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
      uiimage.draw(in: rect)

      let newImage = UIGraphicsGetImageFromCurrentImageContext()
      UIGraphicsEndImageContext()

      return newImage
    }
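
    For completeness, here is a minimal sketch of how the resized image can then be handed to the detector. The function name detectObjects is my own, and the exact ML Kit module names may vary slightly between versions; the detector is assumed to be configured like the one in the Objective-C plugin above:

    import MLKitObjectDetectionCustom
    import MLKitVision
    import UIKit

    // Hypothetical helper: run an ML Kit ObjectDetector on an already-resized UIImage.
    func detectObjects(in resized: UIImage, with detector: ObjectDetector) -> [Object] {
      // Wrap the UIImage for ML Kit; the resize step already rendered the frame as an upright UIImage.
      let visionImage = VisionImage(image: resized)
      visionImage.orientation = resized.imageOrientation

      // Synchronous detection, the Swift equivalent of resultsInImage:error: above.
      let objects = (try? detector.results(in: visionImage)) ?? []
      print("Objects detected: \(objects.count)")
      return objects
    }

    Once the frame was reduced like this, detection started working on iOS as well.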