python · dictionary · for-loop · lemmatization

Improving for loop - Trying to compare 2 lists of dicts


I'll try to make myself as clear as possible: I have 50k tweets I would like to do text mining on, and I'd like to improve my code. The data looks like the sample below (sample_data).

I'm interested in lemmatizing the words I have already cleaned and tokenized (these are the values of the twToken keys).

sample_data = [{'twAuthor': 'Jean Lassalle',
                'twMedium': 'iPhone',
                'nFav': None,
                'nRT': '33',
                'isRT': True,
                'twText': ' RT @ColPeguyVauvil : @jeanlassalle "allez aux bouts de vos rêves" ',
                'twParty': 'Résistons!',
                'cleanText': ' rt colpeguyvauvil jeanlassalle allez aux bouts de vos rêves ',
                'twToken': ['colpeguyvauvil', 'jeanlassalle', 'allez', 'bouts', 'rêves']},
               {'twAuthor': 'Jean-Luc Mélenchon',
                'twMedium': 'Twitter Web Client',
                'nFav': '806',
                'nRT': '375',
                'isRT': False,
                'twText': ' (2/2) Ils préfèrent créer une nouvelle majorité cohérente plutôt que les alliances à géométrie variable opportunistes de leur direction. ',
                'twParty': 'La France Insoumise',
                'cleanText': ' 2 2 ils préfèrent créer une nouvelle majorité cohérente plutôt que les alliances à géométrie variable opportunistes de leur direction ',
                'twToken': ['2', '2', 'préfèrent', 'créer', 'nouvelle', 'majorité', 'cohérente', 'plutôt', 'alliances', 'géométrie', 'variable', 'opportunistes', 'direction']},
               {'twAuthor': 'Nathalie Arthaud',
                'twMedium': 'Android',
                'nFav': '37',
                'nRT': '24',
                'isRT': False,
                'twText': ' #10mai Commemoration fin de l esclavage. Reste à supprimer l esclavage salarial defendu par #Macron et Hollande ',
                'twParty': 'Lutte Ouvrière',
                'cleanText': ' 10mai commemoration fin de l esclavage reste à supprimer l esclavage salarial defendu par macron et hollande ',
                'twToken': ['10mai', 'commemoration', 'fin', 'esclavage', 'reste', 'supprimer', 'esclavage', 'salarial', 'defendu', 'macron', 'hollande']
               }]

However, there are no reliable French lemmatizers in Python, so I used some resources to build my own French lemma dictionary. The dict looks like this:

sample_lemmas = [{"ortho":"rêves","lemme":"rêve","cgram":"NOM"},
                 {"ortho":"opportunistes","lemme":"opportuniste","cgram":"ADJ"},
                 {"ortho":"préfèrent","lemme":"préférer","cgram":"VER"},
                 {"ortho":"nouvelle","lemme":"nouveau","cgram":"ADJ"},
                 {"ortho":"allez","lemme":"aller","cgram":"VER"},
                 {"ortho":"défendu","lemme":"défendre","cgram":"VER"}]

Here, ortho is the written form of a word (e.g. processed), lemme is the lemmatized form of the word (e.g. process), and cgram is the grammatical category of the word (e.g. VER for a verb).

So what I wanted to do was create a twLemmas key for each tweet, holding the list of lemmas derived from the twToken list. I loop through each tweet in sample_data, then through each token in twToken, and check whether the token exists in my lemmas dictionary sample_lemmas. If it does, I retrieve the lemma from sample_lemmas and append it to the list that becomes the value of twLemmas; if it doesn't, I simply append the word itself.

My code is the following:

list_of_ortho = []                      #List of words used to compare if a token doesn't exist in my lemmas dictionary
for wordDict in sample_lemmas:          #This loop feeds this list with each word
    list_of_ortho.append(wordDict["ortho"])

for elemList in sample_data:            #Here I iterate over each tweet in my data
    list_of_lemmas = []                 #This is the temporary list which will be the value to each twLemmas key
    for token in elemList["twToken"]:   #Here, I iterate over each token/word of a tweet
        for wordDict in sample_lemmas:
            if token == wordDict["ortho"]:
                list_of_lemmas.append(wordDict["lemme"])
        if token not in list_of_ortho:  #And this is to add a word to my list if it doesn't exist in my lemmas dictionary
            list_of_lemmas.append(token)
    elemList["lemmas"] = list_of_lemmas

sample_data

The loop works fine; however, it takes about 4 hours to complete. I know I'm no programmer or Python expert, and I know it will take time no matter what, but that is why I wanted to ask: does anyone have a better idea of how I could improve my code?

Thank you to anyone who can take the time to understand my code and help me. I hope I was clear enough (English is not my first language, sorry).


Solution

  • Use a dictionary that maps orthos to lemmes:

    # Build the lookup table once: ortho (word form) -> lemme (lemma)
    ortho_to_lemme = {word_dict["ortho"]: word_dict["lemme"] for word_dict in sample_lemmas}
    for tweet in sample_data:
        # dict.get(token, token) returns the token unchanged when no lemma is known
        tweet["twLemmas"] = [
            ortho_to_lemme.get(token, token) for token in tweet["twToken"]
        ]