I want to perform sentiment analysis on a stream of tweets that I get from a Kafka cluster, which, in turn, receives them from the Twitter API v2.
When I try to apply the pre-trained sentiment analysis pipeline, I get an error message saying Exception: target must be either a spark DataFrame, a list of strings or a string, and I'd like to know if there is a way to work around this.
I've checked the documentation and I couldn't find anything on streaming data.
This is the code I'm using:
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, col, from_json, from_unixtime, unix_timestamp
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType, TimestampType, MapType, ArrayType
from sparknlp.pretrained import PretrainedPipeline
spark = SparkSession.builder.appName('twitter_app') \
    .master("local[*]") \
    .config('spark.jars.packages',
            'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,com.johnsnowlabs.nlp:spark-nlp-spark32_2.12:3.4.2') \
    .config('spark.streaming.stopGracefullyOnShutdown', 'true') \
    .config("spark.driver.memory", "8G") \
    .config("spark.driver.maxResultSize", "0") \
    .config("spark.kryoserializer.buffer.max", "2000M") \
    .getOrCreate()
schema = StructType() \
    .add("data", StructType()
         .add("created_at", TimestampType())
         .add("id", StringType())
         .add("text", StringType())) \
    .add("matching_rules", ArrayType(StructType()
                                     .add('id', StringType())
                                     .add('tag', StringType())))
kafka_df = spark.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094") \
.option("subscribe", "Zelensky,Putin,Biden,NATO,NoFlyZone") \
.option("startingOffsets", "latest") \
.load() \
.select((from_json(col("value").cast("string"), schema)).alias('text'),
col('topic'), col('key').cast('string'))
nlp_pipeline = PretrainedPipeline("analyze_sentimentdl_use_twitter", lang='en')
df = kafka_df.select('key',
col('text.data.created_at').alias('created_at'),
col('text.data.text').alias('text'),
'topic') \
.withColumn('sentiment', nlp_pipeline.annotate(col('text.data.text')))
And then I get the error I mentioned before:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Input In [11], in <cell line: 1>()
1 df = kafka_df.select('key',
2 col('text.data.created_at').alias('created_at'),
3 col('text.data.text').alias('text'),
4 'topic') \
----> 5 .withColumn('sentiment', nlp_pipeline.annotate(col('text.data.text')))
File ~/.local/share/virtualenvs/spark_home_lab-iuwyZNhT/lib/python3.9/site-packages/sparknlp/pretrained.py:183, in PretrainedPipeline.annotate(self, target, column)
181 return pipeline.annotate(target)
182 else:
--> 183 raise Exception("target must be either a spark DataFrame, a list of strings or a string")
Exception: target must be either a spark DataFrame, a list of strings or a string
Maybe it's just not possible to use Spark NLP on streaming data?
You could try nlp_pipeline.transform() instead of annotate(), in the following way:
# project the parsed fields out of the Kafka payload first
text_df = kafka_df.select('key',
                          col('text.data.created_at').alias('created_at'),
                          col('text.data.text').alias('text'),
                          'topic')

# transform() works on any DataFrame, including a streaming one
df = (nlp_pipeline
      .transform(text_df)
      .select('key', 'created_at', 'text', 'topic', 'sentiment.result')
      )
df will be the structured stream you are looking for.
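Since df is a streaming DataFrame, nothing actually runs until you start a sink. A minimal sketch, assuming you just want to inspect the results locally (the console sink and its options are my addition, not something your setup requires):

# start the streaming query with a console sink for local debugging
query = df.writeStream \
    .format("console") \
    .outputMode("append") \
    .option("truncate", "false") \
    .start()

query.awaitTermination()  # block until the stream is stopped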
Because Spark NLP is built on Spark ML, you can treat the structured stream kafka_df like any other DataFrame. nlp_pipeline wraps a fitted pyspark.ml.PipelineModel, and the supported way to use it for prediction on a DataFrame is to call .transform() on it. Your annotate(col('text.data.text')) call fails because annotate() accepts a whole DataFrame, a list of strings, or a single string, never a Column expression.
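For comparison, annotate() is meant for eager, in-memory scoring. A quick sketch (the sample tweet and the printed label are illustrative, not real output):

# annotate() on a plain string returns a dict mapping output columns to results
result = nlp_pipeline.annotate("What a great day for diplomacy!")
print(result['sentiment'])  # e.g. ['positive']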
Here is an example of how the Spark NLP creators built the pipeline you used: https://nlp.johnsnowlabs.com/2021/01/18/sentimentdl_use_twitter_en.html
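If you ever want to assemble that pipeline by hand, the model page boils down to roughly this sketch (stage and model names are taken from the page; fit() trains nothing here because every stage is already pretrained):

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import UniversalSentenceEncoder, SentimentDLModel

# turn the raw 'text' column into Spark NLP documents
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# embed each document with the Universal Sentence Encoder
use = UniversalSentenceEncoder.pretrained('tfhub_use', lang='en') \
    .setInputCols(["document"]) \
    .setOutputCol("sentence_embeddings")

# classify the embeddings with the Twitter sentiment model
sentiment_dl = SentimentDLModel.pretrained("sentimentdl_use_twitter", lang='en') \
    .setInputCols(["sentence_embeddings"]) \
    .setOutputCol("sentiment")

pipeline = Pipeline(stages=[document_assembler, use, sentiment_dl])

# fit on an empty frame (a no-op, since all stages are pretrained),
# then transform the streaming DataFrame as above
empty_df = spark.createDataFrame([[""]]).toDF("text")
df = pipeline.fit(empty_df).transform(text_df)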