
Can't get a valid response from POST request


I am trying to scrape a website that uses infinite scroll to load its elements. Inspecting the network tab, I found that it makes a POST request to the URL https://search2.raritysniper.com/multi_search?use_cache=true&x-typesense-api-key=L1NoMW9ITm1SYWNodFk4cWpmaHphQWZTS2tuaTVFWDNGdmxjT1llcEpLdz1uNWhMeyJmaWx0ZXJfYnkiOiJwdWJsaXNoZWQ6dHJ1ZSJ9 .

Now, when I make the same request with requests.post(url) using the above URL, the content of the response turns out to be b'{"message": "Bad JSON."}'.

I can't figure out what I am doing wrong (I am also new to web scraping), so I need help!


Solution

  • Your question is not completely clear, but I assume you are trying to get the following, which works for me:

    import requests
    
    url = 'https://search2.raritysniper.com/multi_search'
    params = {'use_cache': 'true',
              'x-typesense-api-key':'L1NoMW9ITm1SYWNodFk4cWpmaHphQWZTS2tuaTVFWDNGdmxjT1llcEpLdz1uNWhMeyJmaWx0ZXJfYnkiOiJwdWJsaXNoZWQ6dHJ1ZSJ9'
              }
    payload = {"searches":
        [
            {"query_by":"name",
             "sort_by":"launchDate:desc",
             "highlight_full_fields":"name",
             "collection":"collections",
             "q":"*",
             "facet_by":"blockchain,supply,sevenDayVolume,floorPrice,totalVolume,thirtyDayVolume,oneDayVolume",
             "max_facet_values":20,
             "page":1,
             "per_page":24
             }
        ]
    }
    res = requests.post(url, params=params, json=payload)
    

    Your problem might have come from the fact that you did not properly supply the payload. I tested supplying the payload with the data argument (i.e. data=payload instead of json=payload), and I got the same error you mentioned. More on that here.
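
    You can see the difference without hitting the server at all by preparing both requests and comparing what requests would actually send. This is a minimal sketch (with a simplified payload for illustration): json= serializes the dict to a JSON body and sets the Content-Type header to application/json, while data= with a dict produces a form-encoded body, which this API rejects as "Bad JSON.":

```python
import requests

url = 'https://search2.raritysniper.com/multi_search'
payload = {"q": "*", "page": 1}

# json=... serializes the dict to JSON and sets Content-Type: application/json
prepared_json = requests.Request('POST', url, json=payload).prepare()

# data=... with a dict form-encodes it instead
# (Content-Type: application/x-www-form-urlencoded), which the server
# cannot parse as JSON
prepared_form = requests.Request('POST', url, data=payload).prepare()

print(prepared_json.headers['Content-Type'])  # application/json
print(prepared_json.body)
print(prepared_form.headers['Content-Type'])  # application/x-www-form-urlencoded
print(prepared_form.body)
```

    Note that data=some_string (e.g. data=json.dumps(payload)) would send the raw JSON text, but you would still have to set the Content-Type header yourself, so json= is the simpler option here.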

    Also, I did not really look into the 'x-typesense-api-key' or the proper way to obtain it; I just reused the one I had in my browser, but you should probably look for a cleaner way to get it.

    Note that I used the params argument of requests to supply the required query-string parameters, which is a bit cleaner even though it is not required.
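
    Since the site uses infinite scroll, the snippet above only fetches the first 24 results. A sketch of how you might walk all pages by incrementing the "page" field of the payload is below. The response shape ("results"[0]["hits"]) is an assumption based on typical Typesense multi_search responses, and the key/URL are the ones from the question, so verify both before relying on this:

```python
import requests

URL = 'https://search2.raritysniper.com/multi_search'
PARAMS = {
    'use_cache': 'true',
    # Assumed: reusing the key captured from the browser, as in the answer
    'x-typesense-api-key': 'L1NoMW9ITm1SYWNodFk4cWpmaHphQWZTS2tuaTVFWDNGdmxjT1llcEpLdz1uNWhMeyJmaWx0ZXJfYnkiOiJwdWJsaXNoZWQ6dHJ1ZSJ9',
}

def build_payload(page, per_page=24):
    """Build the multi_search payload for a given results page."""
    return {"searches": [{
        "query_by": "name",
        "sort_by": "launchDate:desc",
        "highlight_full_fields": "name",
        "collection": "collections",
        "q": "*",
        "facet_by": "blockchain,supply,sevenDayVolume,floorPrice,totalVolume,thirtyDayVolume,oneDayVolume",
        "max_facet_values": 20,
        "page": page,
        "per_page": per_page,
    }]}

def scrape_all(max_pages=5):
    """Fetch pages until one comes back empty (or max_pages is reached)."""
    collected = []
    for page in range(1, max_pages + 1):
        res = requests.post(URL, params=PARAMS, json=build_payload(page))
        res.raise_for_status()
        # Assumed response shape: {"results": [{"hits": [...]}]}
        hits = res.json()["results"][0]["hits"]
        if not hits:
            break
        collected.extend(hits)
    return collected
```

    Be polite when looping like this (add a small time.sleep between pages) and stop as soon as a page returns no hits.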