Tags: python, opencv, qr-code

Relaxed implementation to detect QR codes within confidence interval in Python


I am working on detecting a QR code attached to a person in a video of that person walking away from the camera. I have tried a few different implementations using OpenCV and pyzbar, but they are not very accurate and only detect the QR code in about 50% of the frames.

I have found that cropping the image helps with detection, but since the code is attached to a moving person, the cropping logic would have to be much more complex/robust than I would like.
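To illustrate what I mean by cropping, a rough sketch would be something like the helper below (the helper name and the fixed 300-pixel window around the last detection are arbitrary); keeping that window locked onto a moving person is the part that gets more complex than I want:

import cv2
import numpy as np

def detect_near_last_position(detector, frame, last_center, half_size=150):
    # Illustrative helper only: search a fixed window around the last known
    # QR position first, then fall back to scanning the full frame.
    h, w = frame.shape[:2]
    if last_center is not None:
        cx, cy = last_center
        x0, y0 = max(cx - half_size, 0), max(cy - half_size, 0)
        x1, y1 = min(cx + half_size, w), min(cy + half_size, h)
        found, points = detector.detect(frame[y0:y1, x0:x1])
        if found and points is not None:
            # Shift the corner points back into full-frame coordinates
            return points[0] + np.array([x0, y0], dtype=points.dtype)
    found, points = detector.detect(frame)
    return points[0] if found and points is not None else None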

I only really need to decode/read the QR code in a few frames of the video. Locating it is more important than reading it, and detecting whether it exists at all is the most important.

I can't fully understand how OpenCV's QRCodeDetector().detect() works internally, but I am looking for a method (it does not have to be OpenCV or pyzbar) that reports QR code detections with a confidence score, so I can set my own threshold. I am not sure what thresholds the implementations I have tried use, but sometimes the QR code does not appear to change position or orientation between frames, yet the detector misses it in the later ones, so I am looking for some way to relax those requirements.
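The closest workaround I can think of is to carry the last detection forward for a few frames when the detector misses, as in the rough sketch below (the 5-frame tolerance is arbitrary), but that is a stand-in rather than the confidence threshold I am actually after:

import cv2

detector = cv2.QRCodeDetector()
video = cv2.VideoCapture(input_video)   # same placeholder path as below

last_points = None
frames_since_hit = 0
MAX_STALE_FRAMES = 5  # arbitrary tolerance, not a real confidence threshold

ret, frame = video.read()
while ret:
    found, points = detector.detect(frame)
    if found and points is not None:
        last_points = points[0]
        frames_since_hit = 0
    else:
        frames_since_hit += 1

    # Treat the QR code as "still there" for a few frames after a miss
    if last_points is not None and frames_since_hit <= MAX_STALE_FRAMES:
        pass  # use last_points as the current location estimate

    ret, frame = video.read()

video.release()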

This is an example of a frame from the video. This shows the relative size of the QR code to the entire frame that needs to be searched.

This is my current setup:

import cv2
import numpy as np

video = cv2.VideoCapture(input_video)
qrCodeDetector = cv2.QRCodeDetector()

ret, frame = video.read()

while ret:
    # detect() returns (found, points); points is None when nothing is found
    points = qrCodeDetector.detect(frame)[1]

    if points is not None:
        points = points[0]

        # get center of all points
        center = tuple(np.mean(np.array(points), axis=0).astype(int))

        # draw circle around QR code
        cv2.circle(frame, center, 50, color=(255, 0, 0), thickness=2)

        cv2.imshow('frame', frame)
        cv2.waitKey(1)

    # advance to the next frame
    ret, frame = video.read()

video.release()
cv2.destroyAllWindows()

Solution

  • For these cases, you should probably rely on a Machine/Deep Learning based solution. As far as I have experimented, pyzbar works pretty well for the decoding but has a lower detection rate. For cases like the one you describe, I tend to use QReader. It implements a detection + decoding pipeline that is backed by a YoloV7 QR detection model, and uses pyzbar, sweetened with some image preprocessing fallbacks, on the decoding side.

    from qreader import QReader
    import cv2

    # Create a QReader instance
    qreader = QReader()

    video = cv2.VideoCapture(input_video)
    ret, frame = video.read()

    while ret:
        # Cast the frame to RGB, which is what QReader expects
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Use the detect_and_decode function to get the decoded QR data
        found_qrs = qreader.detect_and_decode(image=rgb_frame, return_bboxes=True)

        # Process the detections as you like
        for (x1, y1, x2, y2), decoded_text in found_qrs:
            # Draw on your image
            pass

        # Advance to the next frame
        ret, frame = video.read()

    video.release()
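    Inside that while loop you can draw the detections with plain OpenCV calls. Here is a minimal sketch of that step (assuming each detection comes back as an (x1, y1, x2, y2) bounding box in pixel coordinates, as in the snippet above):

    for (x1, y1, x2, y2), decoded_text in found_qrs:
        # Draw the detection box on the original BGR frame
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                      color=(255, 0, 0), thickness=2)
        # Guard in case the text could not be decoded for this detection
        if decoded_text:
            cv2.putText(frame, decoded_text, (int(x1), int(y1) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)

    cv2.imshow('frame', frame)
    cv2.waitKey(1)

    Since detection and decoding are separate steps in that pipeline, you can still get a bounding box on frames where the code is found but not readable, which matches your priority of locating over decoding.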