Let's assume I have a few PDF files stored in a directory, and I want to read all of them at once and extract every sentence that contains a specific keyword (in this case 'provisions'), instead of manually opening each file and searching for it.
I have managed to read the files, but how can I make R go through each PDF, search for that keyword, and output the matching sentences? Here's the small piece of code I have written:
library(pdftools)

# List all PDF files in the directory and read each one into a
# character vector (one element per page)
files <- list.files("filepath", pattern = "pdf$", full.names = TRUE)
comb <- lapply(files, pdf_text)
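For illustration, one way to continue from `comb` might look like the sketch below: collapse each document's pages into a single string, split it into sentences, and keep the ones that mention the keyword. Note that splitting on sentence-ending punctuation is a rough heuristic I am assuming here, not a proper sentence tokenizer.

# A minimal sketch building on `comb` above; the sentence split on
# ". ", "? ", "! " is a rough heuristic, not a full sentence tokenizer
keyword <- "provisions"

matches <- lapply(comb, function(pages) {
  text <- paste(pages, collapse = " ")                           # merge pages into one string
  sentences <- unlist(strsplit(text, "(?<=[.?!])\\s+", perl = TRUE))
  grep(keyword, sentences, value = TRUE)                         # keep sentences containing the keyword
})
names(matches) <- basename(files)                                # label results by file name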
For reference, here are the links to the PDF files:
<https://www.supremecourt.gov/opinions/14pdf/13-1314_3ea4.pdf>
<https://www.supremecourt.gov/opinions/14pdf/14-7955_aplc.pdf>
<https://www.supremecourt.gov/opinions/14pdf/14-46_bqmc.pdf>
I have created a directory and saved the pdf files in it.
Update: I have found a solution, which can be achieved with the code below:
install.packages("textreadr")
install.packages("tidyverse")
install.packages("pdfsearch")

library(textreadr)
library(tidyverse)
library(pdfsearch)

# Path to the directory containing the PDF files
dirct <- "directory_path"

# Search every PDF in the directory for the keyword and return the
# lines in which it appears
result <- keyword_directory(dirct,
                            keyword = 'input_the_keyword_you_want_to_extract',
                            surround_lines = 0, full_names = TRUE)

head(result$line_text, n = 20)
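If it helps, keyword_directory() returns a data frame, so the matches can also be summarized per file. The column names below (pdf_name, page_num, line_num, line_text) are assumed from pdfsearch's documented output and may differ across package versions.

# Count matching lines per file (pdf_name is an assumed column name)
table(result$pdf_name)

# View the matches together with their page and line positions
result[, c("pdf_name", "page_num", "line_num", "line_text")]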