Tags: python, python-3.x, web-scraping, beautifulsoup, export-to-csv

BeautifulSoup: Scraping CSV list of URLs


I have been trying to download data from different URLs and then save it to a CSV file.

The idea is to extract the highlighted data from https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow.

So far I built the following piece of code:

import pandas as pd
from bs4 import BeautifulSoup
import urllib.request as ur

url_is = 'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow'
read_data = ur.urlopen(url_is).read()
soup_is = BeautifulSoup(read_data, 'lxml')

# find the title cell of the target row, then collect the non-empty cells of that row
row = soup_is.select_one('tr.mainRow>td.rowTitle:contains("Cash Dividends Paid - Total")')
data = [cell.text for cell in row.parent.select('td') if cell.text != '']
df = pd.DataFrame(data)
print(df.T)

I get as output:

[screenshot of the resulting one-row DataFrame]

All good so far.
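
As a side note, the :contains() selector used above is a non-standard soupsieve pseudo-class that matches elements whose text contains the given string. Newer soupsieve releases deprecate it in favor of :-soup-contains(), so on a current Beautiful Soup/soupsieve install the equivalent line would be (a minor variant, not required for the code above to work):

row = soup_is.select_one('tr.mainRow>td.rowTitle:-soup-contains("Cash Dividends Paid - Total")')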

Now my idea is to extract specific classes from multiple URLs, keep the same headers from the website, and export the result to a .csv file.

The tags and classes stay the same across the pages.

Sample URLs:

https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow
https://www.marketwatch.com/investing/stock/aapl/financials/cash-flow

Code (I wanted to start with just two columns: 2015 and 2016).

As desired output I would like something like this: one row per stock, with the 2015 and 2016 values as columns.

I wrote the following code, but it is giving me issues; any help or advice is welcome:

import pandas as pd
from bs4 import BeautifulSoup
import urllib.request as ur
import numpy as np
import requests


links = ['https://www.marketwatch.com/investing/stock/aapl/financials/cash-flow',
         'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']

container = pd.DataFrame(columns=['Name', 'Name2'])
pos = 0
for l in links:
    read_data = ur.urlopen(l).read()
    soup_is = BeautifulSoup(read_data, 'lxml')
    row = soup_is.select_one('tr.mainRow>td.rowTitle:contains("Cash Dividends Paid - Total")')
    results = [cell.text for cell in row.parent.select('td') if cell.text != '']
    records = []

    for result in results:
        records = []
        # result is a plain string (cell.text), so these .find() calls do not
        # behave like BeautifulSoup's Tag.find() and the lookup fails here
        Name = result.find('span', attrs={'itemprop': '2015'}).text if result.find('span', attrs={'itemprop': '2015'}) is not None else ''

        Name2 = result.find('span', attrs={'itemprop': '2016'}).text if result.find('span', attrs={'itemprop': '2016'}) is not None else ''

        records.append(Name)
        records.append(Name2)

        container.loc[pos] = records
        pos += 1
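
The loop above iterates over plain strings, which is why the span lookups never succeed. Below is a minimal sketch of one way to restructure it, reusing the working selector from the first snippet; it assumes the first non-empty cell of the matched row is the row title and the following cells are the yearly figures in the order the table shows them (the column positions and the output filename are assumptions, not taken from the page markup):

import pandas as pd
from bs4 import BeautifulSoup
import urllib.request as ur

links = ['https://www.marketwatch.com/investing/stock/aapl/financials/cash-flow',
         'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']

rows = []
for l in links:
    soup = BeautifulSoup(ur.urlopen(l).read(), 'lxml')
    title_cell = soup.select_one('tr.mainRow>td.rowTitle:contains("Cash Dividends Paid - Total")')
    cells = [c.text.strip() for c in title_cell.parent.select('td') if c.text.strip()]
    # cells[0] is the row title; the remaining cells are assumed to be the
    # yearly values in the order shown on the page (e.g. 2015, 2016, ...)
    rows.append({'url': l, '2015': cells[1], '2016': cells[2]})

df = pd.DataFrame(rows)
df.to_csv('dividends.csv', index=False)   # export the collected rows to CSV
print(df)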

Solution

import requests
import pandas as pd

urls = ['https://www.marketwatch.com/investing/stock/aapl/financials/cash-flow',
        'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']


def main(urls):
    with requests.Session() as req:
        goal = []
        for url in urls:
            r = req.get(url)
            # read_html parses every HTML table whose text matches the pattern;
            # take the first match and keep its first row and first three columns
            df = pd.read_html(
                r.content, match="Cash Dividends Paid - Total")[0].iloc[[0], 0:3]
            goal.append(df)
        new = pd.concat(goal)
        print(new)


main(urls)

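Since the end goal is a CSV file, the combined frame can be written out with to_csv. The sketch below is a variation on the answer above; the Ticker column and the output filename are assumptions (the ticker is pulled from the URL path), not part of the original answer:

import requests
import pandas as pd

urls = ['https://www.marketwatch.com/investing/stock/aapl/financials/cash-flow',
        'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']


def main(urls):
    with requests.Session() as req:
        goal = []
        for url in urls:
            r = req.get(url)
            df = pd.read_html(
                r.content, match="Cash Dividends Paid - Total")[0].iloc[[0], 0:3]
            # label the row with the ticker taken from the URL
            # (assumption: the ticker is the path segment right after /stock/)
            df.insert(0, 'Ticker', url.split('/stock/')[1].split('/')[0].upper())
            goal.append(df)
        combined = pd.concat(goal, ignore_index=True)
        combined.to_csv('cash_dividends.csv', index=False)  # write the CSV file
        return combined


main(urls)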