I'm getting an IndentationError whenever I run my program in CMD. To me, the indentation throughout the program looks perfect, so I have absolutely no idea why I'm receiving the error.
CMD Error:
  File "scraper9.py", line 50
    browser.get(url2)
                    ^
IndentationError: unexpected unindent
I have completely removed all indents and re-indented line by line to arrive at the current iteration, but it still errors out.
import os
import sys
import csv
from bs4 import BeautifulSoup
import urllib2
import xlsxwriter
from selenium import webdriver
reload(sys)
sys.setdefaultencoding("utf8")
key_stats_on_main = ["Market Cap", "PE Ratio (TTM)"]
key_stats_on_stat = ["Enterprise Value", "Trailing P/E"]
stocks_arr = []
pfolio_file = open("tickers.csv", "r")
for line in pfolio_file:
    indv_stock_arr = line.strip().split(",")
    stocks_arr.append(indv_stock_arr)
print(stocks_arr)
browser = webdriver.PhantomJS()
stock_info_arr = []
for stock in stocks_arr:
    stock_info = []
    ticker = stock[0]
    stock_info.append(ticker)
    url = "https://finance.yahoo.com/quote/{0}?p={0}".format(ticker)
    url2 = "https://finance.yahoo.com/quote/{0}/key-statistics?p={0}".format(ticker)
    browser.get(url)
    innerHTML = browser.execute_script("return document.body.innerHTML")
    soup = BeautifulSoup(innerHTML, "html.parser")
    for stat in key_stats_on_main:
        page_stat1 = soup.find(text = stat)
        try:
            page_row1 = page_stat1.find_parent("tr")
            try:
                page_statnum1 = page_row1.find_all("span")[1].contents[1]
            except:
                page_statnum1 = page_row1.find_all("td")[1].contents[0]
            except:
                print("Invalid parent for this element")
                page_statnum1 = "N/A"
            stock_info.append(page_statnum1)
    browser.get(url2)
    innerHTML2 = browser.execute_script("return document.body.innerHTML2")
    soup2 = BeautifulSoup(innerHTML2, "html.parser")
    for stat in key_stats_on_stat:
        page_stat2 = soup2.find(text=stat)
        try:
            page_row2 = page_stat2.find_parent("tr")
            try:
                page_statnum2 = page_row2.find_all("span")[1].contents[0]
            except:
                page_statnum2 = page_row2.find_all("td")[1].content[0]
            except:
                print("Invalid pareent for this element")
                page_statnum2 = "N/A"
            stock_info.append(page_statnum2)
    stock_info_arr.append(stock_info)
print(stock_info_arr)
print(stock_info_arr)
key_stats_on_main.extend(key_stats_on_stat)
workbook = xlsxwriter.Workbook("Stocks01.xlsx")
worksheet = workbook.add_worksheet()
row = 0
col = 2
for stat in key_stats_on_main:
    worksheet.write(row, col, stat)
    col += 1
row = 1
col = 0
for our_stock in stock_info_arr:
    col = 0
    for info_bit in our_stock:
        worksheet.write(row, col, info_bit)
        col += 1
    row += 1
workbook.close()
print("Script completed")
I expect the code to execute without IndentationErrors, but it errors out. I'm so lost.
Your try: lacks an except: or finally:.
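Here is a minimal standalone sketch of that rule: compiling a try: with no handler fails before anything runs. Modern Python 3 raises a SyntaxError for it; Python 2 reported it as an IndentationError, which is a subclass of SyntaxError, so the check below catches both. The source string is invented for the demo.

```python
# Minimal reproduction: a try: block that is dedented away
# before any except:/finally: clause appears.
import textwrap

src = textwrap.dedent("""\
    try:
        x = 1
    y = 2
    """)

try:
    compile(src, "<demo>", "exec")
except SyntaxError as exc:  # IndentationError is a subclass of SyntaxError
    print(type(exc).__name__, "-", exc.msg)
```

The exact message varies by Python version, but the compile step always refuses the dangling try:.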
for stat in key_stats_on_main:
    page_stat1 = soup.find(text = stat)
    try:  # <--------------- this one here
        page_row1 = page_stat1.find_parent("tr")
        try:
            page_statnum1 = page_row1.find_all("span")[1].contents[1]
        except:
            page_statnum1 = page_row1.find_all("td")[1].contents[0]
        except:
            print("Invalid parent for this element")
            page_statnum1 = "N/A"
        stock_info.append(page_statnum1)
    # <---------------- needs something here
browser.get(url2)
You probably meant the second except: to be on the indentation level of the first try::
for stat in key_stats_on_main:
    page_stat1 = soup.find(text = stat)
    try:  # <--------------- this one here
        page_row1 = page_stat1.find_parent("tr")
        try:
            page_statnum1 = page_row1.find_all("span")[1].contents[1]
        except:
            page_statnum1 = page_row1.find_all("td")[1].contents[0]
    except:
        print("Invalid parent for this element")
        page_statnum1 = "N/A"
    stock_info.append(page_statnum1)
browser.get(url2)
Try that!
Btw, you should reduce the size of the code in your try: clause and catch only the exceptions you are handling. In your case, an AttributeError (if .contents fails) would be caught by the first except:. Better:
try:
    found = page_row1.find_all("span")
    index = 1
except XError:  # should be the one .find_all() can raise
    found = page_row1.find_all("td")
    index = 0
page_statnum1 = found[1].contents[index]
And something similar for the outer try/except.
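One wrinkle in your particular chain: find_all() itself does not raise when the tag is missing, it returns an empty list, so the exception you actually get from find_all("span")[1] is an IndexError from the indexing. A sketch of a narrow except for that case, using a hand-made row (the <tr> markup is invented to stand in for Yahoo's real page):

```python
# Narrow-except sketch -- the markup below is invented for illustration.
from bs4 import BeautifulSoup

row = BeautifulSoup(
    "<tr><td>Market Cap</td><td>1.2B</td></tr>", "html.parser"
).tr

try:
    # Some pages keep the value in the second <span> of the row ...
    value = row.find_all("span")[1].contents[0]
except IndexError:
    # ... here there is no <span> at all, so [1] raises IndexError
    # and we fall back to the second <td> instead.
    value = row.find_all("td")[1].contents[0]

print(value)  # -> 1.2B
```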
This way you do not cloak other exceptions you never meant to handle. If you do cloak them, you will have a hard time figuring out what is going wrong, so avoid it.
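A tiny illustration of that cloaking (the helper function and the misspelled variable are invented for the demo): a bare except: turns an ordinary typo into a silent "N/A" instead of the NameError traceback that would have pointed straight at the bug.

```python
# The bare except: below swallows a NameError from a simple typo,
# so the bug silently becomes "N/A" instead of a traceback.
def get_stat(row_cells):
    try:
        return row_cels[1]   # typo: should be row_cells -> NameError
    except:                  # bare except: cloaks the NameError
        return "N/A"

print(get_stat(["Market Cap", "1.2B"]))  # -> N/A, and the typo goes unnoticed
```

With except IndexError: instead, the NameError would propagate and the typo would be found immediately.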