python, web-scraping, beautifulsoup, pypdf

Data scraping issue when downloading PDFs from a journal website


I am encountering an issue when scraping PDFs from the MDPI Remote Sensing journal using BeautifulSoup and Python.

My code is meant to walk through each journal volume and the issues within it, downloading the article PDFs to my local machine. Each volume contains multiple issues, which in turn contain multiple articles, and the PDF link for each article sits in an element with class="UD_Listings_ArticlePDF".

My problem is that the code downloads at most 30 articles per issue, per volume, even though most issues have more than 30 articles (for example, Volume 8, Issue 2 has well over 30). I can't figure out why this is happening, because the class="UD_Listings_ArticlePDF" elements are visible in the page source and the code should be detecting them.

Can anyone help me figure out what is going on here? (see attached code)

import requests
from bs4 import BeautifulSoup
import os
import time

# Base URL for the journal (change if the base URL pattern changes)
base_url = "https://www.mdpi.com/2072-4292"

# Directory to save the PDFs
os.makedirs("mdpi_pdfs", exist_ok=True)

# Define the range of volumes and issues to scrape
start_volume = 1
end_volume = 16  # Change this number based on the latest volume available

# Time delay between requests in seconds
request_delay = 4  # Time delay between requests to avoid 429 errors

# Maximum number of retries after 429 errors
max_retries = 5

# Iterate over each volume
for volume_num in range(start_volume, end_volume + 1):
    print(f"\nProcessing Volume {volume_num}...")

    # Assume a reasonable number of issues per volume
    start_issue = 1
    end_issue = 30  # You may need to adjust this based on the number of issues per volume

    for issue_num in range(start_issue, end_issue + 1):
        issue_url = f"{base_url}/{volume_num}/{issue_num}"
        print(f"  Processing Issue URL: {issue_url}")

        retries = 0
        while retries < max_retries:

            try:
                # Get the content of the issue webpage
                response = requests.get(issue_url)

                # If the issue URL doesn't exist, stop retrying and skip this issue
                if response.status_code == 404:
                    print(f"  Issue {issue_num} in Volume {volume_num} does not exist. Skipping.")
                    time.sleep(request_delay * 5)
                    break

                # Handle 429 errors gracefully
                if response.status_code == 429:
                    print(f"  Received 429 error. Too many requests. Retrying in {request_delay * 5} seconds...")
                    retries += 1
                    time.sleep(request_delay * 5)  # Fixed back-off before retrying
                    continue

                response.raise_for_status()  # Check for other request errors

                # Parse the page content
                soup = BeautifulSoup(response.content, "html.parser")

                # Find all links that lead to PDFs
                pdf_links = soup.find_all("a", class_="UD_Listings_ArticlePDF")  # Adjust class if needed

                if not pdf_links:
                    print(f"  No PDF links found for Issue {issue_num} in Volume {volume_num}.")
                    break

                # Download each PDF for the current issue
                for index, link in enumerate(pdf_links, start=1):
                    try:
                        # Construct the full URL for the PDF
                        pdf_url = f"https://www.mdpi.com{link['href']}"

                        # Create a unique file name with volume and issue information
                        pdf_name = f"mdpi_volume_{volume_num}_issue_{issue_num}_article_{index}.pdf"
                        pdf_path = os.path.join("mdpi_pdfs", pdf_name)

                        print(f"    Downloading: {pdf_url}")

                        # Download the PDF
                        pdf_response = requests.get(pdf_url)
                        pdf_response.raise_for_status()  # Check for request errors

                        # Save the PDF file
                        with open(pdf_path, "wb") as file:
                            file.write(pdf_response.content)

                        print(f"    Successfully downloaded: {pdf_name}")

                        # Sleep after each successful download
                        time.sleep(request_delay)

                    except Exception as e:
                        print(f"    Failed to download {pdf_url}. Error: {e}")

                # Exit the retry loop since request was successful
                break

            except Exception as e:
                print(f"  Failed to process Issue {issue_num} in Volume {volume_num}. Error: {e}")
                retries += 1
                if retries < max_retries:
                    print(f"  Retrying in {request_delay * 2} seconds... (Retry {retries}/{max_retries})")
                    time.sleep(request_delay * 2)
                else:
                    print(f"  Maximum retries reached. Skipping Issue {issue_num} in Volume {volume_num}.")

print("\nDownload process completed for all specified volumes and issues!")

I have tried using expanded selectors to catch any oddly formatted class names, but I still only get back 30 PDF links per issue, when in fact there are many more.
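
For example, even a looser attribute selector along these lines (just an illustration of the kind of expanded selector I mean, reusing the soup object from the code above) still tops out at 30 links per issue:

# Match any <a> whose class attribute contains "ArticlePDF",
# regardless of the exact class name used on the page.
pdf_links = soup.select('a[class*="ArticlePDF"]')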


Solution

  • When you request an issue page from Python, the server only returns the first 30 articles, which is why you can only download those 30.

    For the remaining articles you need to make additional requests of the form https://www.mdpi.com/2072-4292/volume_number/issue_number/date/default/30/15, where 30 is the starting offset and 15 is the count. The subsequent URLs therefore need to be

    https://www.mdpi.com/2072-4292/volume_number/issue_number/date/default/45/15
    https://www.mdpi.com/2072-4292/volume_number/issue_number/date/default/60/15
    https://www.mdpi.com/2072-4292/volume_number/issue_number/date/default/75/15

    and so on, until a request returns no elements with the class "UD_Listings_ArticlePDF". A sketch of this pagination loop is shown below.
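
    Here is a minimal sketch of that pagination, assuming the /date/default/{start}/{count} URL pattern and the page size of 15 described above (collect_pdf_links is a made-up helper name, not part of any documented MDPI API):

    import requests
    from bs4 import BeautifulSoup
    import time

    def collect_pdf_links(volume_num, issue_num, request_delay=4):
        """Gather all PDF hrefs for one issue by paging through the listing."""
        base_url = "https://www.mdpi.com/2072-4292"
        pdf_links = []

        # The plain issue URL returns the first 30 articles.
        response = requests.get(f"{base_url}/{volume_num}/{issue_num}")
        response.raise_for_status()
        soup = BeautifulSoup(response.content, "html.parser")
        pdf_links += [a["href"] for a in soup.find_all("a", class_="UD_Listings_ArticlePDF")]

        # Remaining articles: start at offset 30 and fetch 15 at a time
        # until a page comes back with no matching links.
        start, count = 30, 15
        while True:
            time.sleep(request_delay)  # be polite, avoid 429 errors
            page_url = f"{base_url}/{volume_num}/{issue_num}/date/default/{start}/{count}"
            response = requests.get(page_url)
            if response.status_code == 404:
                break
            response.raise_for_status()
            soup = BeautifulSoup(response.content, "html.parser")
            links = soup.find_all("a", class_="UD_Listings_ArticlePDF")
            if not links:
                break
            pdf_links += [a["href"] for a in links]
            start += count

        return pdf_links

    The returned hrefs can then be fed into your existing download loop in place of pdf_links.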