I need to find the average color of a specified part of an image. Specifically, I need to know if the color there is red, green, brown, black or blue.
The images are 640x640, and the region I need to analyze is the rectangle between pixel x1=20, y1=600 and pixel x2=620, y2=640 (where x=0, y=0 is the top-left corner).
I found several examples, like How to find the average colour of an image in Python with OpenCV?, but they all deal with the whole image.
How can I get the average color of only a specified area?
A hard requirement is that it runs in under 5 ms, so it has to be as quick as possible.
My approach would be to go over each pixel in the range and do the maths, but I have the feeling that OpenCV or similar libraries can already do this.
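For reference, this is roughly the brute-force loop I had in mind (just a sketch with a made-up filename, to show the idea), which I suspect is far too slow:
import cv2

im = cv2.imread('frame.png')              # 640x640 frame, BGR order
total = [0, 0, 0]
count = 0
for y in range(600, 640):                 # rows y1..y2
    for x in range(20, 620):              # columns x1..x2
        b, g, r = im[y, x]
        total[0] += int(b)
        total[1] += int(g)
        total[2] += int(r)
        count += 1
avg_bgr = [t / count for t in total]      # average colour of the rectangle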
Any help appreciated.
As your region of interest (ROI) is only a simple rectangle, I think you just want to use Numpy slicing to identify it.
So, I have made a test image that is green where you want to measure:
Then the code would go like this:
import cv2
import numpy as np
# Load the image
im = cv2.imread('start.png')
# Mean over the ROI - note NumPy indexing is im[y1:y2, x1:x2], and
# axis=(0,1) averages over rows and columns, leaving one value per channel (B, G, R)
A = np.mean(im[600:640, 20:620], axis=(0,1))
That gets green, unsurprisingly:
array([ 0., 255., 0.])
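To map that mean back to one of the colours you listed, one simple option (a rough sketch; the reference values below are chosen by eye and will likely need tuning for your lighting) is a nearest-colour lookup in BGR space:
import numpy as np

REFS = {                      # rough reference colours in BGR order (guesses)
    'red':   (0, 0, 255),
    'green': (0, 255, 0),
    'brown': (42, 42, 165),
    'black': (0, 0, 0),
    'blue':  (255, 0, 0),
}

def closest_colour(mean_bgr):
    # the reference colour with the smallest Euclidean distance wins
    return min(REFS, key=lambda n: np.linalg.norm(np.array(REFS[n]) - mean_bgr))

print(closest_colour(A))      # -> 'green' for the test image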
Now include some of the black area above the green to reduce the mean "greenness":
B = np.mean(im[500:640, 20:620], axis=(0,1))
That gives... "a bit less green":
array([ 0. , 72.85714286, 0. ])
The full sampling of every pixel in the green area takes 214 microsecs on my Mac, as follows:
In [5]: %timeit A = np.mean(im[600:640, 20:620], axis=(0,1))
214 µs ± 150 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Note that you could almost certainly sample every 4th pixel down and every 4th pixel across, as follows, in 50.6 microseconds and still get a very indicative result:
In [11]: %timeit A = np.mean(im[500:640:4, 20:620:4], axis=(0,1))
50.6 µs ± 29.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You can make every pixel you are sampling into a red dot like this - look carefully (remember OpenCV stores images in BGR order, so red is [0,0,255]):
im[600:640:4, 20:620:4] = [0,0,255]   # red in BGR order
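If you then write the marked image out, you can check exactly which pixels contribute to the subsampled mean (the filename here is arbitrary):
cv2.imwrite('sampled.png', im)        # inspect the red sampling grid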
As suggested by Fred (@fmw42), it is even faster if you replace np.mean() with cv2.mean():
So, 11.4 microseconds with cv2.mean() versus 214 microseconds with np.mean():
In [22]: %timeit cv2.mean(im[600:640, 20:620])
11.4 µs ± 11.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And 7.85 microseconds with cv2.mean() versus 50.6 microseconds with np.mean() if sampling every 4th pixel:
In [23]: %timeit cv2.mean(im[600:640:4, 20:620:4])
7.85 µs ± 6.42 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
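One detail to note if you switch to cv2.mean(): it returns a 4-tuple (one value per channel plus a fourth entry, which is 0 for a 3-channel BGR image), so slice off the last element before comparing against your reference colours, e.g.:
mean_bgr = cv2.mean(im[600:640, 20:620])[:3]   # (B, G, R) of the ROI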