
How can I use the screen as a video input to darkflow


I've trained darkflow on my data set and have good results! I can feed it a pre-recorded image or video and it draws the bounding boxes around the right things. Win!

Now I'd like to run it live, as has been done with camera feeds, except I'd like my feed to come from the screen rather than the camera. I can target a specific window (launched from a known process), or just grab a section of the screen from coordinates; either works for my application.

Currently I use PIL's ImageGrab and then feed the images into darkflow, but this feels quite slow (maybe a few frames per second), nothing like the ~30 fps you can get with video files!


Solution

  • I get more than 25 fps with Python MSS on my slow laptop under Ubuntu.

    Here is an example:

    from mss import mss
    from PIL import Image
    import time

    def capture_screenshot():
        with mss() as sct:
            # Monitor 1 is the primary display (monitor 0 is all displays combined)
            monitor = sct.monitors[1]
            sct_img = sct.grab(monitor)
            # Convert the raw BGRA buffer to a PIL/Pillow Image
            return Image.frombytes('RGB', sct_img.size, sct_img.bgra, 'raw', 'BGRX')

    # Benchmark: capture N frames and report the average frame rate
    N = 100
    t = time.time()
    for _ in range(N):
        capture_screenshot()
    print("Frame rate = %.2f fps" % (N / (time.time() - t)))
    

    Output:

    Frame rate = 27.55 fps
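    To feed these frames into darkflow, you'll generally want a NumPy array rather than a PIL Image: as far as I know, darkflow's `TFNet.return_predict` accepts a BGR NumPy array (the same layout OpenCV uses). Here is a minimal sketch of the conversion, assuming MSS's `ScreenShot.bgra` buffer as input; the `tfnet` object in the commented usage is hypothetical and stands for your loaded darkflow network:

    ```python
    import numpy as np

    def bgra_to_bgr(raw_bgra, width, height):
        """Convert a raw BGRA byte buffer (e.g. mss's ScreenShot.bgra)
        into a writable BGR NumPy array suitable for darkflow/OpenCV."""
        frame = np.frombuffer(raw_bgra, dtype=np.uint8).reshape(height, width, 4)
        # Drop the alpha channel; copy so the array is writable and contiguous
        return np.ascontiguousarray(frame[:, :, :3])

    # Hypothetical usage with mss and a loaded darkflow TFNet (not run here):
    # with mss() as sct:
    #     region = {'left': 0, 'top': 0, 'width': 640, 'height': 480}
    #     sct_img = sct.grab(region)
    #     frame = bgra_to_bgr(sct_img.bgra, sct_img.width, sct_img.height)
    #     result = tfnet.return_predict(frame)
    ```

    Note that `sct.grab` also accepts a plain `{'left', 'top', 'width', 'height'}` dict, which matches your "section of the screen from coords" case and is faster than grabbing the full monitor.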