Tags: python, python-3.x, web-scraping, beautifulsoup, export-to-csv

BeautifulSoup: Merge tables and export to .csv


I have been trying to download data from different URLs and then save it to a CSV file.

The idea is to extract annual/quarterly data from: https://www.marketwatch.com/investing/stock/MMM/financials/

Annual:

https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow


Quarter:

https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow/quarter


With the following code:

import requests
import pandas as pd

urls = ['https://www.marketwatch.com/investing/stock/AAPL/financials/cash-flow',
        'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']


def main(urls):
    with requests.Session() as req:
        goal = []
        for url in urls:
            r = req.get(url)
            df = pd.read_html(
                r.content, match="Cash Dividends Paid - Total")[0].iloc[[0], 0:3]
            goal.append(df)
        new = pd.concat(goal)
        print(new)


main(urls)

Output: the "Cash Dividends Paid - Total" row (first two annual columns) for each firm.

I can extract the desired information (in the example, Annual 2015 and 2016 for two firms), but only for one set (quarterly or annual).

I would like to merge the Annual and Quarter tables.

For that I came up with this code:

import requests
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
import csv

html = urlopen('https://www.marketwatch.com/investing/stock/MMM/financials/')
soup = BeautifulSoup(html, 'html.parser')

ids = ['cash-flow','cash-flow/quarter']


with open("news.csv", "w", newline="", encoding='utf-8') as f_news:
    csv_news = csv.writer(f_news)
    csv_news.writerow(["A"])

    for id in ids:
      a = soup.find("Cash Dividends Paid - Total", id=id)
      csv_news.writerow([a.text])

But I'm getting an error on the csv_news.writerow([a.text]) line.


Solution

  • BeautifulSoup elements do not have a property text, but a method get_text():

    csv_news.writerow([a.get_text()])

    https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text
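
Building on the pandas approach that already works above, one way to get both Annual and Quarter data into a single file is to loop over both URL variants per ticker, tag each extracted row with its ticker and period, concatenate everything, and write it out with to_csv. The code below is a minimal sketch under a few assumptions: the MarketWatch pages are still reachable with a plain requests session and still contain a "Cash Dividends Paid - Total" row, pandas has an HTML parser such as lxml installed, and the ticker list, the periods mapping, and the cash_flow_merged.csv filename are illustrative choices rather than anything from the original post.

import requests
import pandas as pd

# Illustrative assumptions: which tickers to scrape and where each view lives
# (the quarterly view sits at .../cash-flow/quarter, as in the question).
tickers = ['AAPL', 'MMM']
periods = {'Annual': 'cash-flow', 'Quarter': 'cash-flow/quarter'}


def scrape(tickers, periods):
    frames = []
    with requests.Session() as req:
        for ticker in tickers:
            for label, path in periods.items():
                url = ('https://www.marketwatch.com/investing/stock/'
                       f'{ticker}/financials/{path}')
                r = req.get(url)
                # Same extraction as in the question: the table containing the
                # "Cash Dividends Paid - Total" row, first row, first 3 columns.
                df = pd.read_html(
                    r.content, match="Cash Dividends Paid - Total")[0].iloc[[0], 0:3]
                df.insert(0, 'Ticker', ticker)
                df.insert(1, 'Period', label)
                frames.append(df)
    # Annual and quarterly tables have different column headers (years vs.
    # quarters), so concat aligns on the union of columns and fills gaps with NaN.
    return pd.concat(frames, ignore_index=True)


merged = scrape(tickers, periods)
merged.to_csv('cash_flow_merged.csv', index=False)
print(merged)

Because the annual and quarterly tables carry different column headers, pd.concat produces the union of those columns with NaN where a column does not apply; renaming the value columns to generic labels before concatenating is one way to keep the CSV more compact.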