Tags: r, sparse-matrix, quanteda

Split up ngrams in (sparse) document-feature matrix


This is a follow-up question to this one. There, I asked whether it's possible to split up ngram features in a document-feature matrix (the dfm class from the quanteda package) in such a way that, e.g., bigrams result in two separate unigrams.

For context: the ngrams in the dfm come from translating the features from German to English. Compound nouns ("Emissionsminderung") are quite common in German, whereas English usually renders them as separate words ("emission reduction").

library(quanteda)

eg.txt <- c('increase in_the great plenary', 
            'great plenary emission_reduction', 
            'increase in_the emission_reduction emission_increase')
eg.corp <- corpus(eg.txt)
eg.dfm <- dfm(eg.corp)

There was a nice answer to this example, which works absolutely fine for relatively small matrices such as the one above. However, as soon as the matrix gets bigger, I constantly run into the following memory error.

> # turn the dfm into a data frame
> DF <- as.data.frame(eg.dfm)
Error in asMethod(object) : 
  Cholmod error 'problem too large' at file ../Core/cholmod_dense.c, line 105

Hence, is there a more memory-efficient way to solve this ngram problem, or to deal with large (sparse) matrices/data frames in general? Thank you in advance!


Solution

  • The problem here is that you are turning the sparse (dfm) matrix into a dense object when you call as.data.frame(). Since the typical document-feature matrix is 90% sparse, this means you are creating something larger than you can handle. The solution: use dfm handling functions to maintain the sparsity.
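
    To see why this blows up, a back-of-the-envelope calculation helps. The document and feature counts below are purely hypothetical, just to illustrate the scale:

    ndocs  <- 1e5   # hypothetical number of documents
    nfeats <- 5e4   # hypothetical number of features
    # a dense numeric matrix stores every cell at 8 bytes, zeros included,
    # whereas the sparse dfm only stores the non-zero entries
    ndocs * nfeats * 8 / 1024^3   # approximate size of the dense matrix in GiB
    ## [1] 37.2529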

    Note that this is not only a better solution than the one proposed in the linked question, but it should also work efficiently for your much larger object.

    Here's a function that does that. It allows you to set the concatenator character(s) and works with ngrams of variable sizes (a short demonstration of both follows at the end of this answer). Most importantly, it uses dfm methods to make sure the dfm remains sparse.

    # function to split and duplicate counts in features containing 
    # the concatenator character
    dfm_splitgrams <- function(x, concatenator = "_") {
        # separate the unigrams
        x_unigrams <-  dfm_remove(x, concatenator, valuetype = "regex")
    
        # separate the ngrams
        x_ngrams <- dfm_select(x, concatenator, valuetype = "regex")
        # split into components
        split_ngrams <- stringi::stri_split_regex(featnames(x_ngrams), concatenator)
        # get a repeated index for the ngram feature names
        index_split_ngrams <- rep(featnames(x_ngrams), lengths(split_ngrams))
        # subset the ngram matrix using the (repeated) ngram feature names
        x_split_ngrams <- x_ngrams[, index_split_ngrams]
        # assign the ngram dfm the feature names of the split ngrams
        colnames(x_split_ngrams) <- unlist(split_ngrams, use.names = FALSE)
    
        # return the column concatenation of unigrams and split ngrams
        suppressWarnings(cbind(x_unigrams, x_split_ngrams))
    }
    

    So:

    dfm_splitgrams(eg.dfm)
    ## Document-feature matrix of: 3 documents, 9 features (40.7% sparse).
    ## 3 x 9 sparse Matrix of class "dfmSparse"
    ##        features
    ## docs    increase great plenary in the emission reduction emission increase
    ##   text1        1     1       1  1   1        0         0        0        0
    ##   text2        0     1       1  0   0        1         1        0        0
    ##   text3        1     0       0  1   1        1         1        1        1
    

    Here, splitting the ngrams produces new "unigrams" that can share a feature name with existing features (e.g. "emission" now appears twice). You can (re)combine them efficiently with dfm_compress():

    dfm_compress(dfm_splitgrams(eg.dfm))
    ## Document-feature matrix of: 3 documents, 7 features (33.3% sparse).
    ## 3 x 7 sparse Matrix of class "dfmSparse"
    ##        features
    ## docs    increase great plenary in the emission reduction
    ##   text1        1     1       1  1   1        0         0
    ##   text2        0     1       1  0   0        1         1
    ##   text3        2     0       0  1   1        2         1
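
    The function is not limited to bigrams or to the "_" concatenator. Here is a quick sketch with my own toy texts (the trigram and the hyphenated token are made up; this assumes the default tokenizer keeps underscore- and hyphen-joined tokens intact, and on recent quanteda versions you may need dfm(tokens(...)) instead of calling dfm() on a corpus directly):

    tri.txt <- c('greenhouse_gas_emission increase',
                 'emission increase in-the plenary')
    tri.dfm <- dfm(corpus(tri.txt))

    # default concatenator "_": the trigram is split into three unigrams
    dfm_compress(dfm_splitgrams(tri.dfm))
    # concatenator "-": "in-the" is split instead, while the "_" feature stays intact
    dfm_compress(dfm_splitgrams(tri.dfm, concatenator = "-"))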