I have a dataset of 200+ PDFs that I converted into a corpus. I'm using the tm package for R for text pre-processing and mining. So far, I've successfully created the DTM (document-term matrix) and can find the x most frequently occurring terms. The goal of my research, however, is to check whether certain terms are used in the corpus. I'm not looking for the most frequent terms; I have my own list of terms and want to check whether they occur, and if so, how many times.
So far, I've tried this:
extract_terms <- content_transformer(function(x, pattern) regmatches(x, gregexpr(pattern, x, perl = TRUE, ignore.case = TRUE)))
keep <- "word_1|word_2"
tm_map(my_corpus, extract_terms, keep)[[1]]
and these:
str_detect(my_corpus, "word_1|word_2")
str_locate_all(my_corpus, "word_1|word_2")
str_extract(my_corpus, "funds")
The last one comes closest, giving the output: [1] "funds" NA NA
But none of these gives me what I need.
You can use the dictionary option when you create your DocumentTermMatrix. The example code below shows how it works. Once you have the document-term matrix (or a data.frame built from it), you can use aggregation functions if you don't need the word counts per document.
library(tm)
data("crude")  # example corpus of 20 news articles shipped with tm
crude <- as.VCorpus(crude)
crude <- tm_map(crude, content_transformer(tolower))  # lower-case so matching is case-insensitive
my_words <- c("oil", "corporation")
# restrict the DTM to your own term list via the dictionary control option
dtm <- DocumentTermMatrix(crude, control = list(dictionary = my_words))
# create a data.frame from the DocumentTermMatrix
df1 <- data.frame(docs = dtm$dimnames$Docs, as.matrix(dtm), row.names = NULL)
head(df1)
head(df1)
docs corporation oil
1 127 0 5
2 144 0 11
3 191 0 2
4 194 0 1
5 211 0 1
6 236 0 7
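If you don't need the counts per document, you can aggregate over the documents. A minimal sketch continuing from the dtm created above:

# total occurrences of each dictionary term across the whole corpus
term_totals <- colSums(as.matrix(dtm))
term_totals
# presence check: which of your terms occur at all?
term_totals > 0

colSums over the dense matrix returns a named integer vector with one total per dictionary term, which directly answers whether each term occurs and how often.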