python · nlp · nltk

Tokenizing emojis contiguous to words


I am trying to tokenize strings that have the two following patterns:

  • contiguous emojis, for instance "Hey, 😍🔥"
  • emojis contiguous to words, for instance "surprise💥 !!"

To do this, I tried the word_tokenize() function from nltk (doc). However, it does not split tokens apart when emojis are contiguous to other characters.

For instance,

from nltk.tokenize import word_tokenize
word_tokenize("Hey, 😍🔥")

output: ['Hey', ',', '😍🔥']

I'd like to get: ['Hey', ',', '😍', '🔥']

and

word_tokenize("surprise💥 !!")

output: ['surprise💥', '!', '!']

I'd like to get: ['surprise', '💥', '!', '!']

Therefore, I was thinking a specific regex pattern might solve the issue, but I don't know which pattern to use.
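For illustration, this is the kind of pattern I had in mind. The emoji ranges below are a rough, non-exhaustive guess (the full Unicode emoji set spans more blocks than these):

```python
import re

# Assumed emoji ranges: common emoji blocks plus Misc Symbols/Dingbats.
# This is a sketch, not a complete emoji character class.
EMOJI = r"[\U0001F300-\U0001FAFF\u2600-\u27BF]"

# Match an emoji, or a run of word characters, or any other
# single non-space character (punctuation), in that order.
pattern = re.compile(rf"{EMOJI}|\w+|[^\w\s]")

print(pattern.findall("Hey, 😍🔥"))       # ['Hey', ',', '😍', '🔥']
print(pattern.findall("surprise💥 !!"))   # ['surprise', '💥', '!', '!']
```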


Solution

  • Try using TweetTokenizer

    >>> from nltk.tokenize.casual import TweetTokenizer
    >>> t = TweetTokenizer()
    >>> t.tokenize("Hey, 😍🔥")
    ['Hey', ',', '😍', '🔥']
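This should cover the word-adjacent case too: since emojis are not word characters, TweetTokenizer's catch-all rule should emit each one as its own token (a quick check, assuming a recent nltk; TweetTokenizer is purely regex-based and needs no downloaded data):

```python
from nltk.tokenize.casual import TweetTokenizer

t = TweetTokenizer()
print(t.tokenize("surprise💥 !!"))  # ['surprise', '💥', '!', '!']
```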