Tags: python-3.x, counter, frequency

Group the word count of a data frame according to the different labels stored in a column


I would like to know which are the most representative words for each of the two classes, across all the cells of a DataFrame:

| Q1 | Q2 | Q3 | Label |
|----|----|----|-------|
| Cada vez que gobiernan los socialistas provocan paro y crisis. | Zapatero hace su 39 visita a la dictadura venezolana ¿Qué motiva este viaje? | Según Sánchez en Cataluña la ley no basta, y según Iceta hay que amnistiar a los que dieron un golpe al Estado. ¿Cuánta dignidad cuesta el poder? | PP |
| Los #10acuerdosdepais responden a los #ODS #Agenda2030 | Hacia un nuevo contrato social global: capital, trabajo, planeta y Estado. | Premio muy merecido, @duarteoceans. Es uno de los biólogos marinos más prestigiosos, que nos ayudan a entender la interacción océano-cambio climático. | PSOE |
| ... | ... | ... | ... |

And I would like:

{
    'PP':{'Zapatero':2, 'truco': 3, ...},
    'PSOE':{'Gobierno':4,'truco':2}
}

I thought of doing:

wordfreq = []
for i, row in df.iloc[:,25:].iterrows():
    for column in df.iloc[:,25:].columns:
        wordlist = row[column].split() # split each cell into a list of words
        for w in wordlist:
            wordfreq.append(wordlist.count(w)) # count each word's occurrences within the cell
            # but I don't know how to add these counts to the dictionary of words for each label

My problem is that I don't know how to add these counts to a per-label dictionary (labelwordfreq) of the words for each label.
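
For what it's worth, here is a minimal sketch of how a loop like this could feed a per-label dictionary. It assumes, as above, that the question columns start at position 25, that the last column is named Label, and that NaN cells should simply be skipped; none of this is from the accepted answer below.

from collections import Counter, defaultdict

labelwordfreq = defaultdict(Counter)  # label -> Counter of word frequencies
for i, row in df.iterrows():
    for column in df.columns[25:-1]:  # question columns only, Label excluded
        cell = row[column]
        if isinstance(cell, str):  # skip NaN cells, which arrive as floats
            labelwordfreq[row["Label"]].update(cell.lower().split())

# labelwordfreq["PP"] would then be a Counter such as Counter({'zapatero': 2, ...})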

The terminal gives me:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-54-cc03daf63866> in <module>
      2 for i, row in df.iloc[:,25:].iterrows():
      3     for column in df.iloc[:,25:].columns:
----> 4         wordlist = row[column].split() # I split each cell of a column into words
      5         for w in wordlist:
      6             wordfreq.append(wordlist.count(w)) # I add up the words into one

AttributeError: 'float' object has no attribute 'split'

I might have a NaN here.

I think I might also have a problem with the size, as it will count all words (I am okay with having only the top 10).
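
If only the top 10 words per label are needed, Counter.most_common makes that restriction easy once per-label counts exist; here labelwordfreq is the hypothetical dictionary from the sketch above, not something from the answer below.

top10 = {label: dict(counter.most_common(10))  # ten most frequent words per label
         for label, counter in labelwordfreq.items()}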

Update: trying Grzegorz Skibinski's answer

I tried the following code:

cols=df.iloc[:,25:-1].columns.values
df["Q"]=df[cols[0]]
for i in cols[1:]:
    df["Q"]=df["Q"].str.cat(df[i], sep=" ")

    df["Q"]=df["Q"].str.lower()

df["Q"]=df["Q"].str.split("[^\w]").apply(Counter)

But got:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-87-67083b1bcc84> in <module>
      6     df["Q"]=df["Q"].str.lower()
      7 
----> 8 df["Q"]=df["Q"].str.split("[^\w]").apply(Counter)

C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
   3589             else:
   3590                 values = self.astype(object).values
-> 3591                 mapped = lib.map_infer(values, f, convert=convert_dtype)
   3592 
   3593         if len(mapped) and isinstance(mapped[0], Series):

pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()

C:\ProgramData\Anaconda3\lib\collections\__init__.py in __init__(*args, **kwds)
    564             raise TypeError('expected at most 1 arguments, got %d' % len(args))
    565         super(Counter, self).__init__()
--> 566         self.update(*args, **kwds)
    567 
    568     def __missing__(self, key):

C:\ProgramData\Anaconda3\lib\collections\__init__.py in update(*args, **kwds)
    651                     super(Counter, self).update(iterable) # fast path when counter is empty
    652             else:
--> 653                 _count_elements(self, iterable)
    654         if kwds:
    655             self.update(kwds)

TypeError: 'float' object is not iterable

df["Q"]=df["Q"].str.split("[^\w]").apply(Counter)

I noticed row 72 was NaN, so I removed it with:

df = df.drop(72)
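
An alternative to dropping the row, if you would rather keep it, is to replace the missing question cells with empty strings before concatenating; this is just a sketch reusing the cols variable from above:

df[cols] = df[cols].fillna("")  # NaN cells become "", so .str.cat/.str.split never see a float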

So now df['Q'] is:

0     {'s02q02': 1, 'self': 1, 'employed': 3, '': 11...
1     {'s02q02': 1, 'unemployed': 2, '': 270, 'perso...

How can I group them according to df['Label']?


Solution

  • You can do:

    from collections import Counter
    
    df["Q"]=df["Q1"].str.cat(df["Q2"], sep=" ").str.cat(df["Q3"], sep=" ").str.lower()
    
    df["Q"]=df["Q"].str.split("[^\w]").apply(Counter)
    

    This will basically do the following:

    (1) concatenate all the Q columns (which, if I understand you correctly, is what you want, i.e. count words regardless of which Q they appear in). I also lower-cased the final column (I'm assuming you want the counts to be case-insensitive).

    (2) split the concatenated value on every character that is not a word character

    (3) apply Counter to count the words in the lists obtained from the split in (2)
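
    To then gather the per-row Counters by df["Label"] (the last step asked about above), one option, sketched here rather than taken from the quoted answer, is to sum the Counters within each label, since Counter supports addition:

    from collections import Counter

    # Add up the per-row Counters within each label, then keep the ten most frequent words.
    per_label = {label: sum(group, Counter()) for label, group in df.groupby("Label")["Q"]}
    top10 = {label: dict(c.most_common(10)) for label, c in per_label.items()}

    # Note: splitting on "[^\w]" also produces empty-string tokens ('': 270 above);
    # they can be dropped with c.pop('', None) before calling most_common.

    top10 then has the nested-dictionary shape shown at the top of the question, with each label mapped to its ten most frequent words.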