I need to make many requests to one URL, but after ~20 requests I get a 429 Too Many Requests error. So my plan was to route the requests through proxies. I have tried 3 things:
But all of them (even the ScraperAPI trial) are unbelievably slow, around 5-10 seconds per request. An example looks like this:
import requests

url = "https://httpbin.org/ip"
# proxy URLs should include a scheme prefix
proxies = {"https": "http://164.155.149.1:80"}
r = requests.get(url, proxies=proxies, timeout=10)
print(r.text)
The proxy IP was from some free proxy website. Sure, a proxy is an extra node in between, but I was hoping to find proxies that take at most 1 second per request.
Is there any way to solve this issue?
Thanks in advance
Codedor, one way I can think of is:
Eg:
Distributing the jobs across the VMs is a little trickier, but doable. You can have a master node running a Python script that iterates through the VMs and spawns the request command on each of them. To execute a command on a remote machine from Python, you could use a library like paramiko (SSH), or shell out to ssh via subprocess.
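As a rough sketch of what that master script could look like (the VM hostnames and the round-robin job split are my own placeholder assumptions, and the final line only prints the ssh commands instead of spawning them):

```python
import shlex

# Hypothetical worker VM hostnames -- replace with your own machines.
vm_hosts = ["worker1.example.com", "worker2.example.com"]

# The URLs to fetch, distributed round-robin across the VMs.
urls = ["https://httpbin.org/ip"] * 4

def build_ssh_command(host, url):
    """Build the ssh invocation that runs a one-line requests call on the VM."""
    remote = f"python3 -c 'import requests; print(requests.get({url!r}, timeout=10).text)'"
    return ["ssh", host, remote]

# Assign job i to VM (i mod number-of-VMs).
commands = [build_ssh_command(vm_hosts[i % len(vm_hosts)], url)
            for i, url in enumerate(urls)]

for cmd in commands:
    # Dry run: print each command; swap in subprocess.Popen(cmd) to actually spawn them.
    print(shlex.join(cmd))
```

With subprocess.Popen the jobs run concurrently, so each VM makes its requests from its own IP and no single IP trips the rate limit.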