
Processing a large number of tweets for exploratory data analysis, such as the number of unique tweets and a histogram of tweet counts per user


I have 14M tweets in a single tweets.txt file (given to me), in which the entire JSON of each tweet is one line of the txt file. I want to get some basic statistics, such as the number of unique tweets, the number of unique users, a histogram of the retweet count of each tweet, and a histogram of tweets per user. Later I may be interested in more intricate analysis.

I have the following code, but it is extremely slow: I left it running for an entire day and it had only processed 200,000 tweets. Can the current code be fixed so that it runs faster? Is creating a pandas DataFrame of 14M tweets even a good idea, or feasible, for exploratory data analysis? My current machine has 32GB of RAM and 12 CPUs. If this is not feasible on this machine, I also have access to a shared cluster at my university.

import pandas as pd

import json
from pprint import pprint
tweets = open('tweets.txt')

columns = ['coordinates', 'created_at', 'favorite_count', 'favorited', 'tweet_id', 'lang', 'quote_count', 'reply_count', 'retweet_count',
           'retweeted', 'text', 'timestamp_ms', 'user_id', 'user_description', 'user_followers_count', 'user_favorite_count',
           'user_following', 'user_friends_count', 'user_location', 'user_screen_name', 'user_statuscount', 'user_profile_image', 'user_name', 'user_verified']

#columns =['coordinates', 'created_at']


df = pd.DataFrame()

count = 0
for line in tweets:
    count += 1
    print(count)
    #print(line)
    #print(type(line))
    tweet_obj = json.loads(line)
    #pprint(tweet_obj)
    #print(tweet_obj['id'])
    #print(tweet_obj['user']['id'])
    df = df.append({'coordinates': tweet_obj['coordinates'],
                    'created_at': tweet_obj['created_at'],
                    'favorite_count': tweet_obj['favorite_count'],
                    'favorited': tweet_obj['favorited'],
                    'tweet_id': tweet_obj['id'],
                    'lang': tweet_obj['lang'],
                    'quote_count': tweet_obj['quote_count'],
                    'reply_count': tweet_obj['reply_count'],
                    'retweet_count': tweet_obj['retweet_count'],
                    'retweeted': tweet_obj['retweeted'],
                    'text': tweet_obj['text'],
                    'timestamp_ms': tweet_obj['timestamp_ms'],
                    'user_id': tweet_obj['user']['id'],
                    'user_description': tweet_obj['user']['description'],
                    'user_followers_count': tweet_obj['user']['followers_count'],
                    'user_favorite_count': tweet_obj['user']['favourites_count'],
                    'user_following': tweet_obj['user']['following'],
                    'user_friends_count': tweet_obj['user']['friends_count'],
                    'user_location': tweet_obj['user']['location'],
                    'user_screen_name': tweet_obj['user']['screen_name'],
                    'user_statuscount': tweet_obj['user']['statuses_count'],
                    'user_profile_image': tweet_obj['user']['profile_image_url'],
                    'user_name': tweet_obj['user']['name'],
                    'user_verified': tweet_obj['user']['verified']

                    }, ignore_index=True)

df.to_csv('tweets.csv')

Solution

  • One significant speed increase comes from appending each dictionary to a plain Python list instead of calling df.append, and then creating the DataFrame once, outside the loop. df.append copies the entire DataFrame on every call, so the cost of the loop grows quadratically with the number of rows already appended. Something like:

    count = 0
    l_tweets = []
    for line in tweets:
        count += 1
        tweet_obj = json.loads(line)
        #append to a list
        l_tweets.append({'coordinates': tweet_obj['coordinates'],
                         # ... copy same as yours
                         'user_verified': tweet_obj['user']['verified']
                        })
    df = pd.DataFrame(l_tweets, columns=columns)
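
    Note that when the rows are dictionaries, pd.DataFrame matches them to columns by key, so every dict key must appear in columns spelled exactly the same (e.g. 'user_screen_name'); any column with no matching key is silently filled with NaN.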
    

    Whether 14M tweets can be handled by your RAM, I don't really know; it depends on how many of the fields you keep and how long the text fields are. On the cluster, usually yes, but how to process the data there depends on how the cluster is configured.
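
    One rough way to check is to build the DataFrame from a sample of the file and extrapolate its memory footprint. Here is a minimal sketch, assuming the tweets.txt layout from the question; the handful of fields kept is just an illustration, so substitute the columns you actually plan to use:

    from itertools import islice
    import json
    import pandas as pd

    SAMPLE = 100_000  # number of lines to sample

    rows = []
    with open('tweets.txt') as f:
        for line in islice(f, SAMPLE):
            t = json.loads(line)
            rows.append({'tweet_id': t['id'],
                         'user_id': t['user']['id'],
                         'retweet_count': t['retweet_count'],
                         'text': t['text']})

    df_sample = pd.DataFrame(rows)

    # average bytes per row, including string contents (deep=True)
    bytes_per_row = df_sample.memory_usage(deep=True).sum() / len(df_sample)
    print(f'estimated size at 14M rows: {bytes_per_row * 14_000_000 / 1e9:.1f} GB')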

    Alternatively, if you ensure the order of the elements matches your columns list, then appending a list instead of a dictionary would work too:

    count = 0
    l_tweets = []
    for line in tweets:
        count += 1
        tweet_obj = json.loads(line)
        #append to a list
        l_tweets.append([tweet_obj['coordinates'], tweet_obj['created_at'], 
                         # ... copy just the values here in the right order
                         tweet_obj['user']['name'], tweet_obj['user']['verified']
                        ])
    df = pd.DataFrame(l_tweets, columns=columns)
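
    Once the DataFrame exists, the statistics asked about in the question reduce to one-liners. A short sketch, assuming the column names used above:

    # number of unique tweets and unique users
    n_unique_tweets = df['tweet_id'].nunique()
    n_unique_users = df['user_id'].nunique()

    # distribution of retweet counts over tweets
    retweet_hist = df['retweet_count'].value_counts().sort_index()

    # tweets per user, and the distribution of that count
    tweets_per_user = df['user_id'].value_counts()
    tweets_per_user_hist = tweets_per_user.value_counts().sort_index()

    Each of these is a single vectorized pass over one column, so they stay fast even at 14M rows.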