OpenVINO async inference for a subset of images

I'm trying to implement asynchronous OpenVINO inference for my service, which receives input images from a queue (RabbitMQ).

I found an official tutorial, but the implementation there uses a video stream. I'm having difficulty switching the video input to image input in the async_api method. They do something like this (abbreviated), where VideoPlayer is a standard cv2 video-stream wrapper:

player = utils.VideoPlayer(source, flip=flip, fps=fps, skip_first_frames=skip_first_frames)
frame = player.next()
curr_request.set_tensor(input_layer_ir, ov.Tensor(frame))
curr_request.start_async()
while True:
   next_frame = player.next()
   next_request.set_tensor(input_layer_ir, ov.Tensor(next_frame))

So if I want to use a list of images, or images coming from a consumer, what is the best approach to getting the next frame?


  • OpenVINO™ provides a Python sample that runs Image Classification using the Asynchronous Inference Request API with image sources instead of a video stream.

    Sample code for the Image Classification Async Python Sample is available in our OpenVINO repository.

    The sample accepts image input as a path to one or more image files and runs the inference asynchronously. Since the input argument for image files is a list, the easiest approach is to supply your list of image paths.