I am trying to replicate web scraping code (from an educational site) for fetching the names of Indian states from Wikipedia. I keep getting the error "AttributeError: ResultSet object has no attribute 'find_all'" and cannot proceed. I am adding my code here, hoping for some guidance and help:
##import library to query a website
from urllib.request import urlopen
#the url is stored in a variable called wiki
wiki="https://en.wikipedia.org/wiki/List_of_state_and_union_territory_capitals_in_India"
#open the html page and store it in a variable called page
page=urlopen(wiki)
print(wiki)
#import the Beautiful soup functions to parse the data returned from the website
from bs4 import BeautifulSoup
#parse the html page stored in the variable page and store it in Beautiful Soup format
soup=BeautifulSoup(page,['lxml','xml'])
print(soup)
#using 'prettify' function to structure the html page
print(soup.prettify())
#storing all the links in a variable
all_links=soup.find_all("a")
for link in all_links:
    print(link.get("href"))
#storing all the tables in a variable
all_tables=soup.find_all("table")
print(all_tables)
#storing required data in a variable
right_table=soup.find_all('table',{'class':'wikitable sortable plainrowheaders'})
print(right_table)
#Generating lists
A=[]
B=[]
C=[]
D=[]
E=[]
F=[]
G=[]
for row in right_table.find_all("tr"):
    cells = row.find_all('td')
    states = row.find_all('th') #To store second column data
    if len(cells)==6: #Only extract table body, not heading
        A.append(cells[0].find(text=True))
        B.append(states[0].find(text=True))
        C.append(cells[1].find(text=True))
        D.append(cells[2].find(text=True))
        E.append(cells[3].find(text=True))
        F.append(cells[4].find(text=True))
        G.append(cells[5].find(text=True))
I am using Python 3.6 on Windows 10.
Thanks in advance for the help!
What I can see is that when you call:
right_table=soup.find_all('table',{'class':'wikitable sortable plainrowheaders'})
the output of this command, i.e. right_table, is a list-like ResultSet, not a single tag. You need to call find_all on one of its members (even though it has only one element), not on the ResultSet itself.
So this line:
for row in right_table.find_all("tr"):
should be changed to:
for row in right_table[0].find_all("tr"):
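As a side note, since only one table on that page matches this class string, you could also use soup.find instead of soup.find_all; find returns a single Tag (or None), so find_all can be called on it directly. A minimal sketch of that variant, keeping your list names and assuming the page layout still has six td cells per data row:

right_table = soup.find('table', {'class': 'wikitable sortable plainrowheaders'})
if right_table is not None:  # find returns None when nothing matches
    for row in right_table.find_all("tr"):
        cells = row.find_all('td')
        states = row.find_all('th')
        if len(cells) == 6:
            A.append(cells[0].find(text=True))
            B.append(states[0].find(text=True))
            # ...remaining columns as in your original loop

Either way, the key point is that find_all is a method of an individual Tag (or of the soup), never of the ResultSet that find_all returns.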