Tags: python, string, pandas, tokenize

Split sentences into substrings containing varying number of words using pandas


My question is related to this past question of mine: Split text in cells and create additional rows for the tokens.

Let's suppose that I have the following in a DataFrame in pandas:

id  text
1   I am the first document and I am very happy.
2   Here is the second document and it likes playing tennis.
3   This is the third document and it looks very good today.

and I want to split the text of each id into chunks containing a random number of words (varying between two values, e.g. 1 and 5), so that I finally have something like the following:

id  text
1   I am the
1   first document
1   and I am very
1   happy
2   Here is
2   the second document and it
2   likes playing
2   tennis
3   This is the third
3   document and it
3   looks very
3   good today

Keep in mind that my dataframe may also have other columns besides these two, which should simply be copied to the new dataframe in the same way as id above.

What is the most efficient way to do this?


Solution

  • Define a function to extract chunks in a random fashion using itertools.islice:

    from itertools import islice
    import random
    
    lo, hi = 3, 5  # chunk-size bounds; change these to whatever you need
    def extract_chunks(it):
        """Consume an iterator of words into space-joined chunks of random length."""
        chunks = []
        while True:
            # Take up to a random number of words; the final chunk may be shorter.
            chunk = list(islice(it, random.randint(lo, hi)))
            if not chunk:
                break
            chunks.append(' '.join(chunk))

        return chunks
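As a quick sanity check, the function can be exercised on a single sentence. Seeding random is my addition and only makes the run repeatable; the chunk sizes are still random draws:

```python
import random
from itertools import islice

lo, hi = 3, 5  # chunk-size bounds, as above

def extract_chunks(it):
    # Pull a random number of words (between lo and hi) off the iterator
    # until it is exhausted; the final chunk may be shorter than lo.
    chunks = []
    while True:
        chunk = list(islice(it, random.randint(lo, hi)))
        if not chunk:
            break
        chunks.append(' '.join(chunk))
    return chunks

random.seed(0)  # repeatability only; not part of the original answer
text = "I am the first document and I am very happy."
chunks = extract_chunks(iter(text.split()))

# Every word lands in exactly one chunk, in the original order...
assert ' '.join(chunks) == text
# ...and no chunk exceeds hi words.
assert all(1 <= len(c.split()) <= hi for c in chunks)
```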
    

    Call the function through a list comprehension to keep overhead to a minimum, then stack to get your output:

    pd.DataFrame([
        extract_chunks(iter(text.split())) for text in df['text']], index=df['id']
    ).stack()
    
    id   
    1   0                    I am the
        1        first document and I
        2              am very happy.
    2   0                 Here is the
        1         second document and
        2    it likes playing tennis.
    3   0           This is the third
        1       document and it looks
        2            very good today.
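If you then want a plain two-column frame matching the shape asked for in the question, you can drop the inner (chunk-position) index level and reset the index. A sketch, with the random seed added only for repeatability:

```python
import random
from itertools import islice

import pandas as pd

lo, hi = 3, 5

def extract_chunks(it):
    # Same chunking helper as in the answer above.
    chunks = []
    while True:
        chunk = list(islice(it, random.randint(lo, hi)))
        if not chunk:
            break
        chunks.append(' '.join(chunk))
    return chunks

df = pd.DataFrame({
    'id': [1, 2, 3],
    'text': [
        "I am the first document and I am very happy.",
        "Here is the second document and it likes playing tennis.",
        "This is the third document and it looks very good today.",
    ],
})

random.seed(0)
stacked = pd.DataFrame(
    [extract_chunks(iter(text.split())) for text in df['text']],
    index=df['id'],
).stack()

# Drop the chunk-position level and restore 'id' as a regular column.
out = stacked.droplevel(1).rename('text').reset_index()
print(out.columns.tolist())  # ['id', 'text']
```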
    

    You can extend the extract_chunks function to perform more sophisticated tokenisation; right now it uses a simple split on whitespace, which you can modify.


    Note that if you have other columns you don't want to touch, you can do something like a melting operation here.

    u = pd.DataFrame([
        extract_chunks(iter(text.split())) for text in df['text']])
    
    (pd.concat([df.drop(columns='text'), u], axis=1)
       .melt(df.columns.difference(['text'])))
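The melted frame keeps melt's default variable/value column names and contains NaN rows wherever a document produced fewer chunks than the widest one. A sketch of tidying it up (the extra column is a hypothetical stand-in for your other columns):

```python
import random
from itertools import islice

import pandas as pd

lo, hi = 3, 5

def extract_chunks(it):
    # Same chunking helper as in the answer above.
    chunks = []
    while True:
        chunk = list(islice(it, random.randint(lo, hi)))
        if not chunk:
            break
        chunks.append(' '.join(chunk))
    return chunks

df = pd.DataFrame({
    'id': [1, 2, 3],
    'extra': ['a', 'b', 'c'],  # hypothetical additional column to carry along
    'text': [
        "I am the first document and I am very happy.",
        "Here is the second document and it likes playing tennis.",
        "This is the third document and it looks very good today.",
    ],
})

random.seed(0)
u = pd.DataFrame([extract_chunks(iter(text.split())) for text in df['text']])

melted = (pd.concat([df.drop(columns='text'), u], axis=1)
            .melt(df.columns.difference(['text']).tolist()))

# Drop the NaN padding rows, drop the chunk-position column, restore the name.
out = (melted.dropna(subset=['value'])
             .drop(columns='variable')
             .rename(columns={'value': 'text'}))
print(sorted(out.columns))  # ['extra', 'id', 'text']
```

Note that melt stacks the chunk columns one position at a time, so filtering out a single id still yields that document's chunks in their original order.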