python, selenium, beautifulsoup, docx

How to Web Scrape Multiple URLs (Input) with Python Using Selenium, BS4 & Docx into Multiple Output Docx Files?


I've been experimenting with a few different approaches to scraping multiple URLs with Selenium, BS4 and Docx. So far I can scrape one URL, extract exactly what I want, and export the output to a single docx file. The trouble starts when I try to handle more than one URL.

At the moment, the code below works for scraping the content of a single page.

I would like to create a loop that scrapes, to start, just these two web pages; once it can loop through those, I figure I can append the rest of my URLs to the list.

I would also like to export each URL's output to its own separate docx file.

Below is the code:

from selenium import webdriver
from bs4 import BeautifulSoup
import time
import docx

doc = docx.Document()

urls = ["https://www.udemy.com/course/python-the-complete-python-developer-course/",
        "https://www.udemy.com/course/the-creative-html5-css3-course-build-awesome-websites/"]

for item in urls:
    try:
        # raw string so the backslashes in the Windows path are not treated as escapes
        PATH = r"C:\Program Files (x86)\chromedriver.exe"
        driver = webdriver.Chrome(PATH)
        driver.get(item)
    except Exception:
        # if the link can't be scraped, stop the loop
        break

    time.sleep(5)

    # expand the curriculum panel so every section row is in the page source
    button = driver.find_element_by_xpath("//div/div[@class='curriculum--sub-header--23ncD']/button[@class='udlite-btn udlite-btn-medium udlite-btn-ghost udlite-heading-sm']")
    button.click()

    time.sleep(5)

    html = driver.page_source
    soup = BeautifulSoup(html, 'html.parser')

    # one panel per course section
    main = soup.find_all('div', {'class': 'section--panel--1tqxC panel--panel--3NYBX'})

    for mains in main:
        # section title -> level-1 heading
        header = mains.find_all("span", {'class': 'section--section-title--8blTh'})
        for title in header:
            doc.add_heading(title.text, 1)
        # each lecture row -> level-3 headings
        rows = mains.find_all('div', {'class': 'section--row--3PNBT'})
        for row in rows:
            for span in row.find_all("span"):
                doc.add_heading(span.text, 3)

    # saves the same cumulative document to every file
    # on every pass through the URL loop
    for i in range(len(urls)):
        doc.save("file%s.docx" % i)


Solution

  • Create a list that will store the links:

    links = []
    

  • Loop through them with a try/except statement, and write each scraped page to its own file:

    for item in links:
        try:
            driver.get(item)  # open the link
        except Exception:
            # if the link can't be scraped, stop
            break

        # ... scrape the page here and collect the text into scraped_info ...

        # build a filename from the URL so each link gets its own output file
        with open(f'{item.replace(".", "").replace("/", "")}.txt', 'w') as file:
            file.write(scraped_info)
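
Putting that pattern together with the docx goal from the question, a minimal sketch might look like the following. It assumes Selenium 3 (hence webdriver.Chrome(PATH) and find_element_by_xpath) and reuses the chromedriver path and Udemy class names from the question code; those class names are auto-generated and may have changed. The key changes are creating a fresh docx.Document() inside the loop and saving it exactly once per URL.

    from selenium import webdriver
    from bs4 import BeautifulSoup
    import time
    import docx

    PATH = r"C:\Program Files (x86)\chromedriver.exe"

    links = ["https://www.udemy.com/course/python-the-complete-python-developer-course/",
             "https://www.udemy.com/course/the-creative-html5-css3-course-build-awesome-websites/"]

    for i, item in enumerate(links):
        # a fresh document for every URL, so content never carries over between pages
        doc = docx.Document()

        try:
            driver = webdriver.Chrome(PATH)
            driver.get(item)
        except Exception:
            # if the link can't be opened, stop
            break

        time.sleep(5)

        # expand the curriculum panel before reading the page source
        button = driver.find_element_by_xpath(
            "//div/div[@class='curriculum--sub-header--23ncD']"
            "/button[@class='udlite-btn udlite-btn-medium udlite-btn-ghost udlite-heading-sm']")
        button.click()
        time.sleep(5)

        soup = BeautifulSoup(driver.page_source, 'html.parser')
        driver.quit()

        for panel in soup.find_all('div', {'class': 'section--panel--1tqxC panel--panel--3NYBX'}):
            # section title -> level-1 heading
            for title in panel.find_all("span", {'class': 'section--section-title--8blTh'}):
                doc.add_heading(title.text, 1)
            # lecture rows -> level-3 headings
            for row in panel.find_all('div', {'class': 'section--row--3PNBT'}):
                for span in row.find_all("span"):
                    doc.add_heading(span.text, 3)

        # save once per URL, with a distinct index in each filename
        doc.save("file%s.docx" % i)

Creating the Document inside the loop is what stops every file from accumulating the headings of all previously scraped pages, and enumerate gives each URL a stable index for its filename.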