Tags: python-3.x, opencv, azure-cognitive-services, face-api

How to call the Microsoft Cognitive Services Face API, passing an image as bytes, in Python with cognitive_face


Hi, I'm trying the same thing as in this question: "How can i pass capture image directly as a binary data for processing in API calling (Microsoft Cognitive Services) using Python" (i.e. passing a byte image to the face detect call), but with the cognitive_face library:

faces = CF.face.detect(buf.tobytes(), True, False, attributes='age,gender,emotion')

but I'm getting this error:

Traceback (most recent call last):
  File ".\cam.py", line 80, in
    faces = CF.face.detect(buf.tobytes(), True, False, attributes='age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise')
  File "Python37\lib\site-packages\cognitive_face\face.py", line 33, in detect
    headers, data, json = util.parse_image(image)
  File "Python37\lib\site-packages\cognitive_face\util.py", line 133, in parse_image
    elif os.path.isfile(image):  # When image is a file path.
  File "Python37\lib\genericpath.py", line 30, in isfile
    st = os.stat(path)
ValueError: stat: embedded null character in path
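
For context, buf is the JPEG buffer I get from OpenCV before calling detect; simplified, it is produced by something like this (the capture source here is just an example):

import cv2

cap = cv2.VideoCapture(0)               # example source: the default webcam
ret, frame = cap.read()                 # grab a single raw frame
ret, buf = cv2.imencode('.jpg', frame)  # buf is a numpy array holding the JPEG bytes
# buf.tobytes() is what gets passed to CF.face.detect in the call above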


Solution

  • You are using the old package, cognitive_face, which unfortunately expects the input argument to be either a file name or a URL. When you pass raw bytes, its util.parse_image falls through to the os.path.isfile check, and the null bytes in your JPEG data are what trigger the ValueError in your traceback.

    Fortunately, the new package, azure-cognitiveservices-vision-face, supports streams, so if you switch over you can do something like the following:

    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials
    import cv2
    import io
    
    face_key = '...' # your API key
    face_endpoint = '...' # your endpoint, e.g. 'https://westus.api.cognitive.microsoft.com'
    
    credentials = CognitiveServicesCredentials(face_key)
    client = FaceClient(face_endpoint, credentials)
    
    # img is your unencoded (raw) image, from the camera
    img = ...
    
    # buf will be the encoded image
    ret, buf = cv2.imencode('.jpg', img)
    
    # stream-ify the buffer
    stream = io.BytesIO(buf)
    
    # call the Face API
    detected_faces = client.face.detect_with_stream(
        stream,
        return_face_id=True,
        return_face_attributes=['age','gender','emotion'])
    
    # access the response, example:
    for detected_face in detected_faces:
        print('{} happiness probability={}'.format(
            detected_face.face_id,
            detected_face.face_attributes.emotion.happiness))
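
    The new package installs with pip install azure-cognitiveservices-vision-face. If you also want to draw the results back onto the OpenCV frame, each result carries a face_rectangle with the bounding box; a minimal sketch (assuming img is the same raw frame that was encoded above):

    # optional: draw each detected bounding box onto the original frame
    for detected_face in detected_faces:
        rect = detected_face.face_rectangle
        cv2.rectangle(
            img,
            (rect.left, rect.top),
            (rect.left + rect.width, rect.top + rect.height),
            (0, 255, 0),  # green box
            2)            # line thickness
    cv2.imwrite('detected.jpg', img)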