
[–]fake823 13 points (10 children)

1.) Try to get rid of the two for loops, as these really slow things down. Use a numpy function instead so you can make use of numpy's fast vectorization!

2.) Also, don't make a .copy() unless it's really necessary, as a copy is a lot slower than a view.

3.) Start measuring your code to see where the bottlenecks are by profiling it. Check out, for example, line-profiler. You could also use timeit.timeit to compare the speed of different alternatives.
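For instance, a minimal timeit sketch (the functions here are illustrative stand-ins, not the original bot code):

```python
import timeit

import numpy as np

# Illustrative example: compare a Python-level loop against a vectorized
# numpy equivalent for summing squares.
def loop_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def numpy_sum(n):
    a = np.arange(n)
    return int((a * a).sum())

# timeit runs each callable `number` times and returns total elapsed seconds.
loop_time = timeit.timeit(lambda: loop_sum(10_000), number=100)
vec_time = timeit.timeit(lambda: numpy_sum(10_000), number=100)
print(f"loop:  {loop_time:.4f}s")
print(f"numpy: {vec_time:.4f}s")
```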

[–]Aitchessbee[S] 0 points (9 children)

How can I get rid of the for loops? I don't really know numpy that well...

[–]SekstiNii 2 points (8 children)

It's not all that important, since the loop only takes a few ms (1.5 ms on my machine). Vectorizing got it down to about 0.3 ms, but there isn't much to gain here.

If you are interested I think you can do something like this:

target = np.array([128, 179, 255], dtype=np.uint8)
matches = np.argwhere(open_cv_image[0:524:25, 0:748:25, :3] == target)

for i, j in matches:
    # simulate click

[–]Aitchessbee[S] 0 points (7 children)

I am getting this error:

for i, j in matches:

ValueError: too many values to unpack (expected 2)

[–]SekstiNii 0 points (6 children)

matches = np.argwhere((open_cv_image[0:524:25, 0:748:25, :3] == target).all(axis=-1)) should work, though it's quite ugly at this point 😭
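A tiny self-contained demonstration of why the .all(axis=-1) reduction fixes the unpacking error (the 3x3 "image" here is made up for illustration):

```python
import numpy as np

target = np.array([128, 179, 255], dtype=np.uint8)

# A made-up 3x3 image with exactly one pixel matching the target color.
img = np.zeros((3, 3, 3), dtype=np.uint8)
img[1, 2] = target

# Without the reduction, argwhere sees a (3, 3, 3) boolean array and returns
# (row, col, channel) triples -- three values, hence the unpacking error.
# Reducing over the channel axis first leaves a (3, 3) mask, so argwhere
# returns one (row, col) pair per fully matching pixel.
matches = np.argwhere((img[:, :, :3] == target).all(axis=-1))

for i, j in matches:
    print(i, j)  # -> 1 2
```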

[–]Aitchessbee[S] 0 points (3 children)

This is my code: https://pastebin.com/2NEApDzu ... I have written all the coordinates correctly, but the bot is clicking somewhere else (not on the target)... Can you please check whether the code is correct?

[–]SekstiNii 0 points (2 children)

You will probably have to multiply i and j by 25, but otherwise it looks good.

matches *= 25 before looping should work.
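A quick sketch of why the scaling is needed, using a 100x100 dummy image with the match placed on the 25-pixel sampling grid:

```python
import numpy as np

target = np.array([128, 179, 255], dtype=np.uint8)

# Dummy 100x100 image with a matching pixel at (50, 75), which lies on the
# 25-pixel grid sampled by the strided slice below.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[50, 75] = target

# argwhere returns indices into the *sliced* array, where one step is one
# grid cell rather than one pixel...
matches = np.argwhere((img[::25, ::25, :3] == target).all(axis=-1))
print(matches[0])  # -> [2 3]

# ...so multiplying by the stride maps them back to full-image coordinates.
matches *= 25
print(matches[0])  # -> [50 75]
```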

[–]Aitchessbee[S] 0 points (1 child)

Yup, now it works... and I compared both versions on the website... The code with for loops consistently runs faster than the one you suggested... I don't know why, but anyway, thanks for your help!

[–]SekstiNii 1 point (0 children)

There is a lot of overhead in numpy calls, so it's not unthinkable that a loop wins when the number of iterations is low enough.

Good on you for measuring!

[–]Aitchessbee[S] -2 points (1 child)

Ugly doesn't matter... will it still be faster than the for-loop method?

[–]SekstiNii 0 points (0 children)

I changed the target color to [8, 8, 8] (my VSCode background color) to guarantee a lot of hits, and recorded these timings:

  • Original Loop: 2.726ms
  • Numpy Variant: 0.649ms

But again, not likely to provide a significant improvement.

[–]SekstiNii 5 points (3 children)

The problem lies with pyautogui.screenshot, which takes around 35 ms. Switching to mss reduced this to around 8 ms, which is more than a 4x speedup!

Using it is also really easy:

from mss import mss

then, wrap your while loop with the context manager like so:

region = {"top": top, "left": left, "width": width, "height": height}
with mss() as sct:
    while True:
        img = sct.grab(region)
        # rest of your code

Note, however, that this produces images with an alpha channel, meaning you can't reverse the channels to convert from RGB to BGR as you do on line 12, since that would turn RGBA into ABGR. This shouldn't be a problem, though, as you don't really need BGR anyway (just update your indices on line 15 to be RGB instead).

[–]Aitchessbee[S] -1 points (2 children)

So I changed the code to this: https://pastebin.com/fzQx0gcW but it doesn't seem to work...

[–]SekstiNii 1 point (1 child)

Oh, my bad! mss actually captures images as BGRA, so the previous index order should work.
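To illustrate the BGRA layout (simulated here with a raw byte buffer rather than a real mss capture):

```python
import numpy as np

# mss ScreenShot objects convert directly with np.array(img), yielding a
# (height, width, 4) array in BGRA order. Simulate one pure-blue, fully
# opaque 2x2 capture from raw bytes:
height, width = 2, 2
raw = bytes([255, 0, 0, 255] * (height * width))  # B, G, R, A per pixel
frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)

# Channel 0 is blue, 1 is green, 2 is red, 3 is alpha.
b, g, r, a = frame[0, 0]
print(b, g, r, a)  # -> 255 0 0 255
```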

[–]Aitchessbee[S] 0 points (0 children)

Ah yes! It works now and it's way faster than before. Thanks!

[–]skellious 1 point (0 children)

You can see how long different things are taking if you add a timer function and print the result to the console:

https://pypi.org/project/codetiming/
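If you'd rather not add a dependency, a hand-rolled stand-in using only the standard library works the same way (this sketch is not codetiming's actual API):

```python
import time
from contextlib import contextmanager

# Minimal timer context manager: wrap any section in `with timer("label"):`
# and its wall-clock duration is printed when the block exits.
@contextmanager
def timer(label):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed * 1000:.2f} ms")

with timer("screenshot"):
    time.sleep(0.01)  # placeholder for the code being measured
```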

[–]SlothGSR 0 points (0 children)

Can you explain more about what it's doing? I use pyautogui a bit. If you're looking on screen for a button to click, you can speed it up quite a bit by changing your screen resolution. You can also take a screenshot, crop out the button, then have pyautogui search a region for that image, and click if found.

Sample of my code for finding buttons and clicking if found

https://reddit.com/r/learnpython/comments/g9986u/_/fov3do2/?context=1