I am attempting to learn a little bit about computer vision, and there is not a lot of wisdom here, so I apologize in advance...
Ultimately I am trying to create some sort of Boolean trigger based on the colors extracted from what is being captured in RGB format. I.e., if (255, 0, 0) is captured (or is sufficiently probable?), a Boolean flag would go true. The code below takes a screenshot of my desktop with pyautogui and also prints what's going on with print(frame)
as the loop executes:
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import imutils
import time
import cv2
import pyautogui

fps = FPS().start()

while True:
    # grab the frame from the screen and resize it
    # to have a maximum width of 400 pixels
    frame = np.array(pyautogui.screenshot(region=(0, 200, 800, 400)))
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    frame = imutils.resize(frame, width=400)
    print(frame)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
I can see arrays of numbers in matrix format in the console as the loop executes. Is it possible to extract RGB color codes from here, or is it just pixel representations of objects? Or is it both color and pixel representations of objects?
The "Frame" window is what I am creating with cv2.imshow in OpenCV, and for each color screenshot captured through pyautogui it almost appears that I can see, in the matrix output in the lower left corner of the console, the RGB values for blue, red, and white.
I'm using IDLE 3.6 on a Windows 10 laptop for this experiment and executing the .py file through the Windows CMD. Ultimately, is it possible to create a Boolean trigger for a range of blues or a range of reds and white??? Thank you...
Very simple; this blog post explains it all: https://www.pyimagesearch.com/2014/03/03/charizard-explains-describe-quantify-image-using-feature-vectors/
One thing to watch out for is that the colors come through in BGR order, not RGB. Add this to the loop:
means = cv2.mean(frame)
means = means[:3]
print(means)
The final product would look like this, printing the mean colors coming through in BGR order:
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import imutils
import time
import cv2
import pyautogui

fps = FPS().start()

while True:
    # grab the frame from the screen and resize it
    # to have a maximum width of 400 pixels
    frame = np.array(pyautogui.screenshot(region=(0, 200, 800, 400)))
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    frame = imutils.resize(frame, width=400)

    # grab the mean color in BGR order: the blue value comes first,
    # then the green, and finally the red
    means = cv2.mean(frame)
    # disregard the 4th value
    means = means[:3]
    print(means)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))