
Dealing with an opencv image (bytes, encoded as jpg) on a Flask API


My Challenge

I need to send and receive images using Flask in order to do real-time object detection over thousands of camera streams.

Part 1: Sending my video frames to the API

I don't want to save the image frames to disk, so I have no .png/.jpg files to send to the Flask API. The image data already lives in memory as a numpy.array (I'm using cv2.VideoCapture() to extract frames from the video streams).

How can I send the bytes of this numpy.array to the Flask API?

Currently I'm trying to encode the image with cv2.imencode(), convert it to bytes, and then encode it using base64.

### frame is a numpy.array with an image
img_encoded = cv2.imencode('.jpg', frame)[1]
img_bytes = img_encoded.tobytes()

So img_bytes looks something like this:

b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xdb\x00C\x00\x02\x01\x01\x01\x01\x01\x02\x01\x01\x01\x02\x02\x02\x02\x02\x04\x03\x02\x02\x02\x02\x05\x04\x04\x03'

I'm trying to send img_bytes on my requests.post():

files = {'imageData': img_bytes}
res = requests.post('http://127.0.0.1/image', files=files)

I'm receiving <Response [200]>, but the API is not working as expected. I believe my Flask server is not handling these encoded bytes well... so my problem is probably in Part 2.

Part 2: Receiving image bytes on my Flask server and converting it to a PIL.Image

I've defined a function predict_image() that receives the output of PIL.Image.open() and performs all the object detection tasks.

My problem is that my variable imageData apparently can't be opened correctly by PIL.Image.open(). The type() of imageData is <class 'werkzeug.datastructures.FileStorage'>.

In this snippet below my webservice is retrieving the image received on the request and it's executing predict_image() object detection over it:

def predict_image_handler(project=None, publishedName=None):
    try:
        imageData = None
        if ('imageData' in request.files):
            imageData = request.files['imageData']
        elif ('imageData' in request.form):
            imageData = request.form['imageData']
        else:
            imageData = io.BytesIO(request.get_data())

        img = Image.open(imageData)
        results = predict_image(img)
        return jsonify(results)

    except Exception as e:
        print('EXCEPTION:', str(e))
        return 'Error processing image', 500

I'm not receiving an error while sending images to the API, but the API is not working as expected. I believe it's not converting the bytes back to an image correctly.

What am I missing in this code? What do I need to do with the object imageData before opening it with PIL.Image.open()?


Solution

  • I had a small mistake in my code: I was sending img_encoded in the POST instead of img_bytes. Now everything is working.

    Part 1: making the request

    ### frame is a numpy.array with an image
    img_encoded = cv2.imencode('.jpg', frame)[1]
    img_bytes = img_encoded.tobytes()
    
    files = {'imageData': img_bytes}
    response = requests.post(url, files=files)
    

    Part 2: processing the received bytes

    import io

    from flask import request, jsonify
    from PIL import Image

    def predict_image_handler(project=None, publishedName=None):
        try:
            imageData = None
            if ('imageData' in request.files):
                imageData = request.files['imageData']
            elif ('imageData' in request.form):
                imageData = request.form['imageData']
            else:
                imageData = io.BytesIO(request.get_data())
    
            img = Image.open(imageData)
            results = predict_image(img)
            return jsonify(results)
    
        except Exception as e:
            print('EXCEPTION:', str(e))
            return 'Error processing image', 500
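    The full round trip can be verified offline, without running the server: encode a frame exactly as the client does, wrap the raw bytes in io.BytesIO just like the handler's fallback branch, and open the result with PIL. A minimal sketch (the zero-filled frame is a stand-in for a real capture):

    ```python
    import io
    import cv2
    import numpy as np
    from PIL import Image

    # Hypothetical frame standing in for one grabbed from a camera stream
    frame = np.zeros((48, 64, 3), dtype=np.uint8)

    # Client side: encode to JPEG bytes, as in Part 1
    img_bytes = cv2.imencode('.jpg', frame)[1].tobytes()

    # Server side: mirror the handler's fallback branch (raw body -> BytesIO)
    img = Image.open(io.BytesIO(img_bytes))
    print(img.size)  # PIL reports (width, height) -> (64, 48)
    print(img.mode)  # JPEG decodes to 'RGB'
    ```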