python, pandas, dataframe, reddit

Row values turn into NaNs after using pandas append function


I am trying to extract all Reddit comments mentioning X from subreddit X, from date X onwards, and add their date, comment (body), and score/upvotes to my dataframe.

So far I (with help of the lovely internet) managed to come up with this code:

import requests
from datetime import datetime
import traceback
import time
import json
import sys
import numpy as np
import pandas as pd

username = ""  # put the username you want to download in the quotes
subreddit = "GME"  # put the subreddit you want to download in the quotes
# leave either one blank to download an entire user's or subreddit's history
# or fill in both to download a specific users history from a specific subreddit

filter_string = None
if username == "" and subreddit == "":
    print("Fill in either username or subreddit")
    sys.exit(0)
elif username == "" and subreddit != "":
    filter_string = f"subreddit={subreddit}"
elif username != "" and subreddit == "":
    filter_string = f"author={username}"
else:
    filter_string = f"author={username}&subreddit={subreddit}"

url = "https://api.pushshift.io/reddit/search/{}/?q=gamestop&size=500&subreddit=gme&sort=desc&{}&before="

start_time = datetime.utcnow()

# Dataframe: comments
df_comments = pd.DataFrame()
df_comments["date"] = ""
df_comments["comment"] = ""
df_comments["score"] = ""

# Dataframe: posts
df_posts = pd.DataFrame()

def redditAPI(object_type):
    print(f"\nLooping through {object_type}s and appending to dataframe...")

    count = 0
    previous_epoch = int(start_time.timestamp())
    while True:
        # Ensures that loop breaks at March 12 2021 for testing purposes
        if previous_epoch <= 1615503600:
            break
            
        new_url = url.format(object_type, filter_string)+str(previous_epoch)
        json_text = requests.get(new_url)
        time.sleep(1)  # pushshift has a rate limit, if we send requests too fast it will start returning error messages
        try:
            json_data = json.loads(json_text.text)
        except json.decoder.JSONDecodeError:
            time.sleep(1)
            continue

        if 'data' not in json_data:
            break
        objects = json_data['data']
        df2 = pd.DataFrame.from_dict(objects)
        if len(objects) == 0:
            break
        for object in objects:
            previous_epoch = object['created_utc'] - 1
            count += 1
            if object_type == "comment":
                df_comments["date"] = df_comments["date"].append(df2["created_utc"], ignore_index=True)
                df_comments["comment"] = df_comments["comment"].append(df2["body"], ignore_index=True)
                df_comments["score"] = df_comments["score"].append(df2["score"], ignore_index=True)
            elif object_type == "submission":
                df_posts["date"] = df2["created_utc"]
                df_posts["post"] = df2["selftext"] # include condition to skip empty selftext
                df_posts["score"] = df2["score"]        
    # Convert UNIX to datetime
    #df_comments["date"] = pd.to_datetime(df_comments["date"],unit='s')
    #df_posts["date"] = pd.to_datetime(df_posts["date"],unit='s')
    print("\nDone. Saved to dataframe.")


redditAPI("comment")
#redditAPI("submission")

Please ignore the "submission" object code for now.

When I inspect the first 5 rows of the df_comments dataframe:

(screenshot: every value in the first rows of df_comments is NaN)

Since the API has a limit of 100 results per request, I use a loop that runs until it reaches a certain UNIX timestamp. In every loop iteration, the code should append the new batch of data to the corresponding column.

Any idea how these values turn into NaN, and how to fix it?


Solution

  • The problem is in:

                if object_type == "comment":
                    df_comments["date"] = df_comments["date"].append(df2["created_utc"], ignore_index=True)
                    df_comments["comment"] = df_comments["comment"].append(df2["body"], ignore_index=True)
                    df_comments["score"] = df_comments["score"].append(df2["score"], ignore_index=True)
    

    In short, when you assign a Series to a DataFrame column, the Series is conformed to the DataFrame's index: values are matched by index label, not by position. The result of append() has more elements than the index of df_comments, so the original dataframe column won't change. For more detail, see my analysis in a minimal working example (MWE) of your question.
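    The alignment behavior is easy to reproduce in a few lines (a minimal sketch; the values and index labels are made up purely for illustration):

    ```python
    import pandas as pd

    # A column whose index labels are 0, 1, 2.
    df = pd.DataFrame({"date": [1, 2, 3]})

    # A Series carrying different index labels, much like the result
    # of append() after its index has grown past the DataFrame's.
    s = pd.Series([10, 20, 30], index=[3, 4, 5])

    # Column assignment aligns on index labels, not position:
    # none of s's labels exist in df, so every cell becomes NaN.
    df["date"] = s
    print(df["date"].isna().all())  # True
    ```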

    To avoid this, you could append whole DataFrames instead:

                if object_type == "comment":
                    df2.rename(columns={'created_utc': 'date', 'body': 'comment'}, inplace=True)
                    df_comments = df_comments.append(df2[['date', 'comment', 'score']])
    

    Note that because df_comments is assigned a new value within the function's body, Python treats it as a local variable. So you need to add global df_comments at the top of the redditAPI() function.
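    As a side note, DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current pandas you would collect each batch in a plain list and concatenate once after the loop. A minimal sketch of that pattern (the batches list, the handle_batch name, and the sample values are illustrative, not part of the original code):

    ```python
    import pandas as pd

    batches = []  # one entry per API response page

    def handle_batch(df2):
        """Stash the relevant columns of one response batch."""
        df2 = df2.rename(columns={"created_utc": "date", "body": "comment"})
        batches.append(df2[["date", "comment", "score"]])

    # A stand-in for the frame built from one Pushshift response:
    handle_batch(pd.DataFrame({"created_utc": [1615500000],
                               "body": ["example comment"],
                               "score": [5]}))

    # One concat after the loop; no global rebinding inside the
    # function, and far faster than growing a DataFrame row by row.
    df_comments = pd.concat(batches, ignore_index=True)
    print(df_comments.shape)  # (1, 3)
    ```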