So I'm trying to scrape this table using requests-html:
<table class="pet-listing__list rescue-details">
<tbody>
<tr>
<td>Rescue group:</td>
<td><a href="/groups/10282/Dog-Rescue-Newcastle">Dog Rescue Newcastle</a></td>
</tr>
<tr>
<td>PetRescue ID:</td>
<td>802283</td>
</tr>
<tr>
<td>Location:</td>
<td>Toronto, NSW</td>
</tr>
<tr>
<td class="first age">Age:</td>
<td class="first age">1 year 2 months</td>
</tr>
<tr>
<td class="adoption_fee">Adoption fee:</td>
<td class="adoption_fee">$550.00</td>
</tr>
<tr>
<td class="desexed">Desexed:</td>
<td class="desexed"><span class="boolean-image-true boolean-image-yes">Yes</span></td>
</tr>
<tr>
<td class="vaccinated">Vaccinated:</td>
<td class="vaccinated"><span class="boolean-image-true boolean-image-yes">Yes</span></td>
</tr>
<tr>
<td class="wormed">Wormed:</td>
<td class="wormed"><span class="boolean-image-true boolean-image-yes">Yes</span></td>
</tr>
<tr>
<td class="microchip_number">Microchip number:</td>
<td class="microchip_number">OnFile</td>
</tr>
<tr>
<td class="rehoming_organisation_id">Rehoming organisation:</td>
<td class="rehoming_organisation_id">R251000026</td>
</tr>
</tbody>
</table>
The docs don't seem to mention a way to find the next td, e.g. if I want to scrape the dog's rescue group or location. Is there a way to scrape those cells using only requests-html, or would the page additionally need to be parsed with something like bs4 or lxml?
Code so far (it raises an error because requests-html's find() doesn't accept a text argument the way bs4's find() does):
import os
import urllib.parse

from requests_html import HTMLSession


class PetBarnCrawler(DogCrawler):
    """Looks for dogs on Petbarn"""

    def __init__(self, url="https://www.petrescue.com.au/listings/search/dogs"):
        super(PetBarnCrawler, self).__init__(url)

    def _get_dogs(self, **kwargs):
        """Get listing of all dogs"""
        for html in self.current_page.html:
            # grab all the dogs on the page
            dog_previews = html.find("article.cards-listings-preview")
            for preview in dog_previews:
                new_session = HTMLSession()
                page_link = preview.find("a.cards-listings-preview__content")[0].attrs["href"]
                dog_page = new_session.get(page_link)
                # populate the dictionary with all the parameters of interest
                this_dog = {
                    "id": os.path.split(urllib.parse.urlparse(dog_page.url).path)[1],
                    "url": page_link,
                    "name": dog_page.html.find(".pet-listing__content__name"),
                    "breed": dog_page.html.find(".pet-listing__content__breed"),
                    "age": dog_page.html.find("td.age")[1],
                    "price": dog_page.html.find("td.adoption_fee")[1],
                    "desexed": dog_page.html.find("td.desexed")[1],
                    "vaccinated": dog_page.html.find("td.vaccinated")[1],
                    "wormed": dog_page.html.find("td.wormed")[1],
                    "feature": dog_page.html.find(".pet-listing__content__feature"),
                    "rescue_group": dog_page.html.find("td", text="Rescue group:").find_next("td"),
                    "rehoming_organisation_id": dog_page.html.find("td.rehoming_organisation_id")[1],
                    "location": dog_page.html.find("td", text="Location:").find_next("td"),
                    "description": dog_page.html.find(".personality"),
                    "medical_notes": dog_page.html.find("."),
                    "adoption_process": dog_page.html.find(".adoption_process"),
                }
                self.dogs.append(this_dog)
                new_session.close()
As it turns out, I didn't read the documentation carefully enough.
The xpath query method in requests-html is sufficient on its own; there is no need to pull in bs4 or lxml to traverse the document tree:
{
...
"location": dog_page.html.xpath("//tr[td='Location:']/td[2]")[0].text,
...
}
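In case it helps, here is a minimal, self-contained sketch of that approach (the variable names are mine, and the markup is just the two relevant rows from the table above); HTML(html=...) parses a raw string, so no live request is needed:

from requests_html import HTML

# Only the two rows of interest from the question's table.
sample = """
<table class="pet-listing__list rescue-details">
  <tr>
    <td>Rescue group:</td>
    <td><a href="/groups/10282/Dog-Rescue-Newcastle">Dog Rescue Newcastle</a></td>
  </tr>
  <tr>
    <td>Location:</td>
    <td>Toronto, NSW</td>
  </tr>
</table>
"""

page = HTML(html=sample)
print(page.xpath("//tr[td='Rescue group:']/td[2]")[0].text)  # Dog Rescue Newcastle
print(page.xpath("//tr[td='Location:']/td[2]")[0].text)      # Toronto, NSW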
cf. this post: XPath:: Get following Sibling
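The following-sibling axis described in that post also works here if you would rather not rely on the positional td[2] index, e.g. something like:

page.xpath("//td[.='Location:']/following-sibling::td")[0].text  # Toronto, NSW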