I am making an automated clicker, and I want to ask what I can do to optimise my code and make it faster. It currently works, but it recognises the pixel colour and then clicks too slowly. I have tried pressing space instead of clicking; it seems just as slow.
Would it be faster if, instead of a box, it grabbed just one pixel? Also, is there an alternative to a.sum() - instead of calculating the total, would it be faster to compare against the original value?
The aim of the project is to watch a pixel/box on the screen; when its colour changes to a certain colour (white, or in the grey range), the script should click a button.
from PIL import ImageGrab
from PIL import ImageOps
from numpy import array
import pyautogui

emark = (1440, 1026)
pyautogui.click(x=1063, y=544)

def image_grab():
    box = (emark[0]+1, emark[1], emark[0]+11, emark[1]+2)
    image = ImageGrab.grab(box)
    grayImage = ImageOps.grayscale(image)
    a = array(grayImage.getcolors())
    return a.sum()

def clickButton():
    #pyautogui.press('space')
    pyautogui.click(x=1160, y=600)

while True:
    if 1000 <= image_grab() <= 1700:
        clickButton()
You don't have to do all that color conversion and numpy array stuff. Just grab one pixel, save its color value, then grab a new pixel and test for equality.
#! /usr/bin/env python3

from PIL import ImageGrab
import pyautogui

emark = ( 1440, 1026 )
pyautogui .click( x=1063, y=544 )

pixel = ( emark[0]-1, emark[1]-1, emark[0]+1, emark[1]+1 )
original_pixel_color = ImageGrab .grab( pixel ) .getpixel( (0,0) )
## print( original_pixel_color )

def image_grab():
    new_pixel_color = ImageGrab .grab( pixel ) .getpixel( (0,0) )
    return new_pixel_color == original_pixel_color

def clickButton():
    ## print('yup')
    pyautogui .click( x=1160, y=600 )

while True:
    if not image_grab():
        clickButton()
        ## I don't know if pyautogui has a built-in pause function,
        ## or repeat delay, but you probably want a time.sleep(1)
        ## or similar here, so it doesn't rapid-fire autoclick
        ## 500 times in a row, once that pixel changes color.
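One way to place that sleep, as a rough sketch: the `poll` and `click` callables below are stand-ins for the real pixel check and the pyautogui click, and `max_polls` exists only so the sketch can terminate - the actual loop would run forever.

```python
import time

def click_on_change(poll, click, delay=1.0, max_polls=None):
    """Call click() each time poll() reports a change, then sleep
    `delay` seconds so one colour change doesn't fire hundreds of clicks."""
    polls = 0
    while max_polls is None or polls < max_polls:
        if poll():
            click()
            time.sleep(delay)  # debounce before checking the pixel again
        polls += 1
```

In the script above, `poll` would be `lambda: not image_grab()` and `click` would be `clickButton`.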
Not needed for your script, but here is how I figured out that getpixel() is the fastest of those ImageGrab methods that return comparable color data:
import timeit

img = ImageGrab .grab( pixel )  ## time the Image methods, not the color tuple
loops = 99999
print( 'tobytes()', timeit .timeit( lambda : img .tobytes(), number=loops ) )
print( 'getcolors()', timeit .timeit( lambda : img .getcolors(), number=loops ) )
print( 'getpixel()', timeit .timeit( lambda : img .getpixel( (0,0) ), number=loops ) )
Edit: ImageGrab .grab( pixel )
grabs a screen rectangle, according to the box you give it:
( emark[0] -1, emark[1] -1, emark[0] +1, emark[1] +1 ) ## left, top, right, bottom
Due to limitations in PIL, the minimum coordinate space is 2x2, hence the emark -1, emark +1.
I think that's right, but I'm going on recollections from years ago. The definitive pixel location might be stated here on SO somewhere, but if it's off by one you could try
( emark[0], emark[1], emark[0] +2, emark[1] +2 )
instead. Either way, it really only grabs one pixel.
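As a sanity check on the off-by-one question: both candidate boxes work out to the same 2x2 size, they just differ in which corner lands on emark. This is pure arithmetic, no PIL needed:

```python
emark = (1440, 1026)

# box = (left, top, right, bottom) in PIL's coordinate convention
box_a = (emark[0] - 1, emark[1] - 1, emark[0] + 1, emark[1] + 1)  # emark near the centre
box_b = (emark[0],     emark[1],     emark[0] + 2, emark[1] + 2)  # emark at the top-left

def size(box):
    left, top, right, bottom = box
    return (right - left, bottom - top)

print(size(box_a), size(box_b))  # both (2, 2)
```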
.getpixel( (0,0) )
returns the RGB value of that pixel, as a tuple.
So yes, instead of snapping a pixel to begin with, you could just use your own value there:
original_pixel_color = ( 240, 240, 240 )

def brighter_than():
    new_pixel_color = ImageGrab .grab( pixel ) .getpixel( (0,0) )
    r = new_pixel_color[0] >= original_pixel_color[0]
    g = new_pixel_color[1] >= original_pixel_color[1]
    b = new_pixel_color[2] >= original_pixel_color[2]
    return r and g and b

while True:
    if brighter_than():
        clickButton()
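Since the target is "white or in the grey range" rather than strictly brighter, a channel-closeness test may be more robust than three >= comparisons: grey pixels have all three channels near each other. The `low` and `spread` thresholds below are made-up values you'd tune for your screen:

```python
def in_grey_range(rgb, low=180, spread=10):
    """True when the pixel is bright and roughly grey/white:
    all channels within `spread` of each other and above `low`."""
    r, g, b = rgb
    return max(r, g, b) - min(r, g, b) <= spread and min(r, g, b) >= low

print(in_grey_range((240, 240, 240)))  # near-white -> True
print(in_grey_range((255, 30, 30)))    # saturated red -> False
```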
Also, at the beginning, after your imports:
Fail-Safes
Set up a 2.5 second pause after each PyAutoGUI call:
import pyautogui
pyautogui .PAUSE = 2.5
https://pyautogui.readthedocs.io/en/latest/quickstart.html
Post-postscript: in some of those cases they're saving/displaying an image, not just getting the color, so a lot of that can be stripped out. I used sp (subprocess) because it was easier to figure out at the time and was quick enough for what I was doing.
import subprocess as sp
xx, yy = 640, 480
ww, hh = 1, 1
pixel_location = f'{xx},{yy},{ww},{hh}'
cmd = [ 'scrot', 'temp.tiff', '--autoselect', pixel_location, '--silent' ]
sp .Popen( cmd )
I know scrot is a Linux command, but it would be a similar routine with screencapture on OSX; you'd just need to tweak the args to whatever it's expecting. I only mention this because it worked for me. However, spawning an extra shell process creates a small amount of overhead and milliseconds of lag, so you likely want a direct call to the library instead...
I think that answer is in Tiger's response, and n4p strips a lot of it out, so that's promising; but those function calls are outside my scope of knowledge as well, so I'm making educated guesses at this point. I have no way of testing it here, so you'd have to try and see. The idea is to skip numpy and pillow entirely by using a quick struct unpack - that should be the fastest.
import struct
import pyautogui
import Quartz.CoreGraphics as CG

pyautogui .PAUSE = 2.5

xx, yy = 500, 500
ww, hh = 1, 1
region = CG .CGRectMake( xx, yy, ww, hh )
orig = ( 240, 240, 240 )

def clickButton():
    pyautogui .click( x=1160, y=600 )

def brighter_than():  ## think these properties are right...
    pixel = CG .CGWindowListCreateImage( region, CG .kCGWindowListOptionOnScreenOnly, CG .kCGNullWindowID, CG .kCGWindowImageDefault )
    data = CG .CGDataProviderCopyData( CG .CGImageGetDataProvider( pixel ) )
    ## backwards from RGBA, Mac stores pixels as BGRA instead
    b, g, r, a = struct .unpack_from( 'BBBB', data ) ## BBBB = 4 Bytes
    return r >= orig[0] and g >= orig[1] and b >= orig[2]

while True:
    if brighter_than():
        clickButton()
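The BGRA unpack itself can be checked without Quartz by hand-building a four-byte pixel. (One caveat I'd flag as an assumption: real CGImage rows can carry padding past width*4 bytes, but reading the first pixel at offset 0 is safe either way.)

```python
import struct

# One BGRA pixel as macOS stores it: blue, green, red, alpha
data = bytes([10, 20, 30, 255])
b, g, r, a = struct.unpack_from('BBBB', data)
print((r, g, b, a))  # (30, 20, 10, 255)
```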
Some of that info comes from dbr's blog writeup, and CGRectMake() comes from Converting CGImage to python image (pil/opencv). Everybody else was using region = CG.CGRectInfinite, which likely grabs the entire screen instead of one pixel.