Tags: python, sockets, apache-spark, pyspark, tweepy

Tweepy Streaming Socket can't send preprocessed text


I have two programs that connect via sockets. One is a tweepy StreamListener, where I also preprocess the data with the library "tweet-preprocessor". The other program connects to that socket and analyzes the data via Spark Structured Streaming. The problem is that Spark doesn't receive any batches when I preprocess the data before sending it.

This is the StreamListener:

import tweepy
import socket
import json
import preprocessor as p

CONSUMER_KEY = ""
CONSUMER_SECRET = ""
ACCESS_TOKEN = ""
ACCESS_TOKEN_SECRET = ""
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

p.set_options(p.OPT.URL, p.OPT.EMOJI, p.OPT.SMILEY)

class MyStreamListener(tweepy.StreamListener):
    def __init__(self, csocket):
        super().__init__()
        self.client_socket = csocket

    def on_data(self, raw_data):
        try:
            data = json.loads(raw_data)
            clean_text = p.clean(data["text"])
            print(clean_text)
            self.client_socket.send(clean_text.encode("utf-8"))
            return True
        except BaseException as e:
            print("Error: " + str(e))
        return True

    def on_error(self, status_code):
        print(status_code)
        return True


skt = socket.socket()
host = "localhost"
port = 5555
skt.bind((host, port))
skt.listen()
client, address = skt.accept()

myStreamListener = MyStreamListener(csocket=client)
myStream = tweepy.Stream(auth=auth, listener=myStreamListener)
myStream.filter(track=["Trump"], languages=["en"])

And the simple Spark code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, size

spark = SparkSession.builder.appName("TwitterSpark").getOrCreate()

lines = spark.readStream.format("socket").option("host", "localhost").option("port", 5555).load()

#tweetlength = lines.select(
#        size(split(lines.value, " ")).alias("tweetlength")
#)

query = lines.writeStream.outputMode("update").format("console").start()

query.awaitTermination()

Solution

  • Most likely clean_text does not end with a newline character (\n). Unlike print(clean_text), which appends a newline automatically, socket.send() transmits the bytes from clean_text.encode("utf-8") as-is, so you need to add the \n explicitly:

    self.client_socket.send((clean_text + "\n").encode("utf-8"))
    

    Without a \n to separate records in the socket data, Spark sees the input as one ever-growing line, unless the tweet text itself happens to contain newlines.
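To see why the delimiter matters independently of tweepy and Spark, here is a minimal, self-contained sketch (not part of the original answer) using `socket.socketpair()`: the sender appends "\n" to each record, exactly as in the fix above, and a line-oriented reader (Spark's socket source behaves similarly) then recovers one record per line.

```python
import socket

# Create a connected pair of sockets to stand in for the listener (sender)
# and the Spark socket source (receiver).
sender, receiver = socket.socketpair()

tweets = ["first tweet", "second tweet", "third tweet"]
for t in tweets:
    # Same fix as in the answer: append "\n" so each record ends its own line.
    sender.send((t + "\n").encode("utf-8"))
sender.close()

# A line-oriented reader can now split the byte stream back into records.
lines = [line.rstrip("\n") for line in receiver.makefile("r", encoding="utf-8")]
receiver.close()
print(lines)  # → ['first tweet', 'second tweet', 'third tweet']
```

Without the `+ "\n"`, the reader would see a single concatenated string "first tweetsecond tweetthird tweet" with no way to find the record boundaries. As a side note, `socket.sendall()` is generally preferable to `send()` here, since `send()` may transmit only part of the buffer.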