Tags: python, memory, numpy, scikit-learn, text-classification

How can I complete the text classification task using less memory?


(1) My goal: I am trying to use an SVM to classify 10000 documents (400 words each) into 10 evenly distributed classes. The features explored in my work include word n-grams (n = 1~4) and character n-grams (n = 1~6).

(2) My approach: I represent each document as a vector of frequency values, one per feature, and then use TF-IDF to normalize the vectors. Parts of my code are below:

import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn import svm
import ngram  # my own module: characterNgrams()/wordNgrams() return {feature: frequency}

def commonVec(dicts,count1,count2):
    ''' put features with frequency between count1 and count2 into a common vector used for SVM training''' 
    global_vector = []
    master = {}
    for i, d in enumerate(dicts):
        for k in d:
            master.setdefault(k, []).append(i)
    for key in master:
        if (len(master[key])>=count1 and len(master[key])<=count2):  
            global_vector.append(key)
    global_vector1 = sorted(global_vector)
    return global_vector1 
def featureComb(mix,count1,count2,res1):
    '''combine word n-gram and character n-gram into a vector'''
    if mix[0]:
        common_vector1 = []
        for i in mix[0]:
            dicts1 = []
            for res in res1: #I stored all documents into database. res1 is the document result set and res is each document. 
                dicts1.append(ngram.characterNgrams(res[1], i)) # characterNgrams()will return a dictionary with feature name as the key, frequency as the value.
            common_vector1.extend(commonVec(dicts1, count1, count2))
    else:
        common_vector1 = []
    if mix[1]:
        common_vector2 = []
        for j in mix[1]:
            dicts2 = []
            for res in res1:
                dicts2.append(ngram.wordNgrams(res[1], j))        
            common_vector2.extend(commonVec(dicts2, count1, count2))
    else:
        common_vector2 = []
    return common_vector1+common_vector2

def svmCombineVector(mix,global_combine,label,X,y,res1):
    '''Construct X vector that can be used to train SVM'''
    lstm = []
    for res in res1:            
        y.append(label[res[0]]) # insert class label into y

        dici1 = {}
        dici2 = {}
        freq_term_vector = []
        for i in mix[0]:             
            dici1.update(ngram.characterNgrams(res[1], i))
        freq_term_vector.extend(dici1[gram] if gram in dici1 else 0 for gram in global_combine)    
        for j in mix[1]:
            dici2.update(ngram.wordNgrams(res[1], j))
        freq_term_vector.extend(dici2[gram] if gram in dici2 else 0 for gram in global_combine)
        lstm.append(freq_term_vector)
    freq_term_matrix = np.matrix(lstm)
    transformer = TfidfTransformer(norm="l2")
    tfidf = transformer.fit_transform(freq_term_matrix)
    X.extend(tfidf.toarray())

X = []
y = []
character = [1,2,3,4,5,6]
word = [1,2,3,4]
mix = [character,word]
global_vector_combine = featureComb(mix, 2, 5000, res1)
print len(global_vector_combine) # 542401
svmCombineVector(mix,global_vector_combine,label,X,y,res1)
clf1 = svm.LinearSVC()
clf1.fit(X, y)

(3) My problem: However, when I ran the code, a MemoryError occurred:

Traceback (most recent call last):
  File "svm.py", line 110, in <module>
    functions.svmCombineVector(mix,global_vector_combine,label,X,y,res1)
  File "/home/work/functions.py", line 201, in svmCombineVector
    X.extend(tfidf.toarray())
  File "/home/anaconda/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 901, in toarray
    return self.tocoo(copy=False).toarray(order=order, out=out)
  File "/home/anaconda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 269, in toarray
    B = self._process_toarray_args(order, out)
  File "/home/anaconda/lib/python2.7/site-packages/scipy/sparse/base.py", line 789, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

I'm really having a hard time with this and need help from Stack Overflow.

  1. Could anyone explain what is going wrong and give me some idea of how to solve it?
  2. Could anyone check my source code and suggest other ways to use memory more effectively?

Solution

  • The main problem you're facing is that you're using far too many features. It's actually quite extraordinary that you've managed to generate 542401 features from documents that contain just 400 words! I've seen SVM classifiers separate spam from non-spam with high accuracy using just 150 features -- word counts of selected words that say a lot about whether the document is spam. These use stemming and other normalization tricks to make the features more effective.

    You need to spend some time thinning out your features. Think about which features are most likely to contain information useful for this task. Experiment with different features. As long as you keep throwing everything but the kitchen sink in, you'll get memory errors. Right now you're trying to pass 10000 data points with 542401 dimensions each to your SVM. That's 542401 * 10000 * 4 = 21 gigabytes (conservatively) of data. My computer only has 4 gigabytes of RAM. You've got to pare this way down.1

    A first step towards doing so would be to think about how big your total vocabulary size is. Each document has only 400 words, but let's say those 400 words are drawn from a vocabulary of 5000 words. That means there are 5000 ** 4 = 6.25 * 10 ** 14 possible 4-grams. That's over half a quadrillion possible 4-grams. Of course not all of those 4-grams will appear in your documents, but this goes a long way towards explaining why you're running out of memory. Do you really need these 4-grams? Could you get away with 2-grams only? There are a measly 5000 ** 2 = 25 million possible 2-grams. That will fit much more easily in memory, even if all possible 2-grams appear (unlikely). (A quick check of this arithmetic is sketched at the end of this answer.)

    Also keep in mind that even if the SVM could handle quadrillions of datapoints, it would probably give bad results, because when you give any learning algorithm too many features, it will tend to overfit, picking up on irrelevant patterns and overgeneralizing from them. There are ways of dealing with this, but it's best not to deal with it at all if you can help it.

    I will also mention that these are not "newbie" problems. These are problems that machine learning specialists with PhDs have to deal with. They come up with lots of clever solutions, but we're not so clever that way, so we have to be clever a different way.

    Although I can't offer you specific suggestions for cleverness without knowing more, I would say that, first, stemming is a good idea in at least some cases. Stemming simply removes grammatical inflection, so that different forms of the same word ("swim" and "swimming") are treated as identical. This will probably reduce your vocabulary size significantly, at least if you're dealing with English text. A common choice is the Porter stemmer, which is included in NLTK, as well as in a number of other packages. Also, if you aren't already, you should probably strip punctuation and reduce all words to lower-case. From there, it really depends. Stylometry (identifying authors) sometimes requires only particles ("a", "an", "the"), conjunctions ("and", "but") and other very common words; spam, on the other hand, has its own oddball vocabularies of interest. At this level, it is very difficult to say in advance what will work; you'll almost certainly need to try different approaches to see which is most effective. As always, testing is crucial! (A minimal preprocessing-and-vectorizing sketch along these lines follows at the end of this answer.)

    1. Well, possibly you have a huge amount of RAM at your disposal. For example, I have access to a machine with 48G of RAM at my current workplace. But I doubt it could handle this either, because the SVM will have its own internal representation of the data, which means there will be at least one copy at some point; if a second copy is needed at any point -- kaboom.
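
    To make the back-of-the-envelope numbers above easy to reproduce, here is a quick arithmetic check. The 4-bytes-per-value figure is the conservative assumption used above; NumPy's default float64 would double it:

        # Dense design matrix: 10000 documents x 542401 features, 4 bytes per value.
        n_docs, n_features = 10000, 542401
        print("dense matrix: %.1f GB" % (n_docs * n_features * 4 / 1e9))  # ~21.7 GB

        # Possible word n-grams over a 5000-word vocabulary.
        vocab = 5000
        print("possible 4-grams: %.2e" % (vocab ** 4))  # 6.25e+14
        print("possible 2-grams: %d" % (vocab ** 2))    # 25,000,000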
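
    To make the feature-thinning and stemming advice concrete, here is a minimal sketch. It is not your pipeline: it assumes the documents are available as a list of raw strings (`docs`) with class labels in `labels` (however you pull them out of `res1`), and it swaps your `ngram` module for NLTK's Porter stemmer plus scikit-learn's built-in TfidfVectorizer, using word 1- and 2-grams only and an arbitrary feature cap that you would tune:

        import re

        from nltk.stem import PorterStemmer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        stemmer = PorterStemmer()

        def normalize(text):
            """Lower-case, strip punctuation, and stem every word."""
            text = re.sub(r"[^\w\s]", " ", text.lower())
            return " ".join(stemmer.stem(w) for w in text.split())

        # docs: list of raw document strings; labels: list of the 10 class labels.
        # Both are placeholders for whatever you load from your database (res1).
        docs_normalized = [normalize(d) for d in docs]

        # Word 1- and 2-grams only, ignore features seen in fewer than 2 documents,
        # and hard-cap the vocabulary size -- this is where the feature count is
        # controlled (50000 is an arbitrary cap to experiment with, not a recommendation).
        vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_features=50000)
        X = vectorizer.fit_transform(docs_normalized)  # stays a sparse matrix

        clf = LinearSVC()
        clf.fit(X, labels)

    Whether character n-grams are worth adding back on top of this is something to test once a small word-gram baseline fits comfortably in memory.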