Tags: python, python-3.x, api, get, downloading-website-files

Capturing an API's automatic file download with Python


I am trying to use the API at https://freeapi.robtex.com/pdns/reverse/(ip_address_here). I am new to coding, so bear with me if I'm using the wrong packages entirely. When I enter the URL with an IP address appended to the end, the browser automatically downloads the JSON response as a file and displays no web page. I would like to save that downloaded file to a temp directory and keep it for further parsing later on in my tool.

I have tried requests.get, urlopen, and urllib, but I only get the response code (200), not the actual file; or it seems to be working but the website will not connect/respond to my script and it times out. I also added a User-Agent header copied from my browser while I was on their website. The main argument is handled with argparse so this can be used as a command-line tool. The function getData is where I'm trying to get the file to download.

import argparse
import requests

def getData(x):
    pdns_url = "https://freeapi.robtex.com/pdns/reverse/" + x
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
               'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 '
               'Safari/537.36'}
    #req=urllib.request.Request(pdns_url)
    #resp=urllib.request.urlopen(pdns_url)
    resp = requests.get(pdns_url, headers=headers)
    respData = resp.read()
    return respData

def Main():
    parser = argparse.ArgumentParser()
    parser.add_argument("url", help="The IP to lookup.", type=str)
    parser.add_argument("-o", "--output", help="Output results to a file.", 
       action="store_true")
    args=parser.parse_args()

    result = getData(args.url)
    if args.output:
        with open("Dns_Lookup", "a") as f:
            f.write(str(result))
    else:
        print(str(result))

if __name__ == '__main__':
    Main()

Solution

  • Try starting with the basics:

    import requests

    # the endpoint needs an IP appended to it; 8.8.8.8 is just an example
    r = requests.get('https://freeapi.robtex.com/pdns/reverse/8.8.8.8')
    print(r.content)
    open('temp.txt', 'wb').write(r.content)
    

    This works without adding the User-Agent header.
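
  • If you also want to keep the download in a temp directory for later
    parsing, something like the sketch below should work. It is not the
    original code: the example IP (8.8.8.8), the use of the tempfile module,
    and the line-by-line JSON parsing are assumptions I'm making about how
    you want to store and read the response.

    import json
    import os
    import tempfile

    import requests

    def getData(ip):
        pdns_url = "https://freeapi.robtex.com/pdns/reverse/" + ip
        resp = requests.get(pdns_url)
        resp.raise_for_status()  # fail loudly on a non-200 response

        # save the raw response body into a temp directory for later parsing
        temp_dir = tempfile.mkdtemp(prefix="dns_lookup_")
        out_path = os.path.join(temp_dir, ip + ".json")
        with open(out_path, "wb") as f:
            f.write(resp.content)
        return out_path

    def parseData(path):
        # parse each non-empty line of the saved file as a JSON object;
        # adjust this if the endpoint returns a single JSON document instead
        records = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    records.append(json.loads(line))
        return records

    result_file = getData("8.8.8.8")  # 8.8.8.8 is only an example IP
    print(parseData(result_file))

    Note that tempfile.mkdtemp creates a directory that is not removed
    automatically, so the saved file stays around for your later parsing
    step; use tempfile.TemporaryDirectory instead if you want automatic
    cleanup.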