I'm looking for a fast way to download all of the images that I can see in the Network tab of the developer tools. They come through as data:image/png;base64 links. You can open each one in a new tab and save it manually from there, but that seems to be the only way. Saving the whole webpage or a .har file doesn't seem to capture them, and neither does any add-on I have tried. :/
Is there a fast way to save them all? Doing this manually would take a lifetime.
Best regards, Matt
The easiest way I have found to achieve this is: in the Network tab, filter by images, select one of the results, then right-click -> Copy -> Copy all as cURL (cmd). This gives you a full list of all the resources; you can then scrape the data for each image out of that list and convert it to a file with a script. Here is the script I made to do this.
Each resource is saved as a new line, as follows:
curl "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAgAAAAICAYAAADED76LAAAAbklEQVQoU42PsQ3CQAADzxPAKGECRJmO9qeAEWAbOkpC9ywQVoEFOPRCNCgCXNon2Q5AOV/X6ibQAXOhYvaHflHTQvTYwE9pVimnsRKWUwBySRlGJ8OXefsKiPc/Kn6NfN/k4dbYhczaOMmu3XwCriA4HJ2kao8AAAAASUVORK5CYII=" --compressed &
Script:
import base64
import os

fname = "starvedump.txt"                  # the "Copy all as cURL (cmd)" output
dataToBeFound = "data:image/png;base64,"
imgext = ".png"
imgpfx = "img/img_"

# Make sure the output directory exists before writing the images
os.makedirs("img", exist_ok=True)

with open(fname) as f:
    d = f.readlines()

# Keep only the lines that contain a base64 PNG data URI
d[:] = [x for x in d if dataToBeFound in x]
# Strip the surrounding curl syntax so only the base64 payload remains
d = [x.replace('curl "' + dataToBeFound, '') for x in d]
d = [x.replace("\" --compressed &\n", "") for x in d]

# Decode each payload and write it out as img/img_0.png, img/img_1.png, ...
for i, x in enumerate(d):
    with open(imgpfx + str(i) + imgext, "wb") as fh:
        fh.write(base64.b64decode(x))
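Run it from the directory that contains starvedump.txt; it creates the img/ folder and writes img/img_0.png, img/img_1.png, and so on.

If the page mixes PNGs with other formats (jpeg, webp, ...), a slightly more general variation of the same idea is to pull every image data URI out of the dump with a regular expression instead of matching the fixed PNG prefix. This is just a sketch and assumes the dump file is called starvedump.txt like above:

import base64
import os
import re

# Matches any "data:image/<type>;base64,<payload>" URI in the dump, not just PNGs
pattern = re.compile(r'data:image/([a-zA-Z0-9.+-]+);base64,([A-Za-z0-9+/=]+)')

os.makedirs("img", exist_ok=True)

with open("starvedump.txt") as f:
    text = f.read()

for i, (subtype, payload) in enumerate(pattern.findall(text)):
    # Use the MIME subtype as the file extension (e.g. png, jpeg, webp)
    with open("img/img_%d.%s" % (i, subtype), "wb") as fh:
        fh.write(base64.b64decode(payload))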