I have the following code in R to get the recent tweets about the local mayoral candidates and create a wordcloud:
library(twitteR)
library(ROAuth)
require(RCurl)
library(stringr)
library(tm)
library(ggmap)
library(plyr)
library(dplyr)
library(SnowballC)
library(wordcloud)
(...)
setup_twitter_oauth(...)
N = 10000 # Number of tweets
S = 200 # 200 km radius from Natal (covers the whole Natal area)
candidate = 'Carlos+Eduardo'
# Lists so I can add more cities in future versions of the code
lats = c(-5.7792569)
lons = c(-35.200916)
# Gets the tweets from every city
result = do.call(
rbind,
lapply(
1:length(lats),
function(i) searchTwitter(
candidate,
lang="pt-br",
n=N,
resultType="recent",
geocode=paste(lats[i], lons[i], paste0(S,"km"), sep=",")
)
)
)
# Get the latitude and longitude of each tweet,
# the tweet itself, how many times it was retweeted and favorited,
# the date and time it was tweeted, etc., and build a data frame.
result_lat = sapply(result, function(x) as.numeric(x$getLatitude()))
result_lat = sapply(result_lat, function(z) ifelse(length(z) != 0, z, NA))
result_lon = sapply(result, function(x) as.numeric(x$getLongitude()))
result_lon = sapply(result_lon, function(z) ifelse(length(z) != 0, z, NA))
result_date = lapply(result, function(x) x$getCreated())
result_date = sapply(result_date,
function(x) strftime(x, format="%d/%m/%Y %H:%M:%S", tz="UTC")
)
result_text = sapply(result, function(x) x$getText())
result_text = unlist(result_text)
is_retweet = sapply(result, function(x) x$getIsRetweet())
retweeted = sapply(result, function(x) x$getRetweeted())
retweet_count = sapply(result, function(x) x$getRetweetCount())
favorite_count = sapply(result, function(x) x$getFavoriteCount())
favorited = sapply(result, function(x) x$getFavorited())
tweets = data.frame(
cbind(
tweet = result_text,
date = result_date,
lat = result_lat,
lon = result_lon,
is_retweet=is_retweet,
retweeted = retweeted,
retweet_count = retweet_count,
favorite_count = favorite_count,
favorited = favorited
)
)
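(Side note: wrapping cbind() in data.frame() like this is probably why tweets$tweet comes out as a factor; cbind() on mixed vectors builds a character matrix first, so every column loses its original type. A minimal sketch of the difference, using dummy data rather than the real tweet fields:)

```r
x = c("a", "b")
n = c(1, 2)

# cbind() coerces everything to a character matrix first,
# so the numeric column's type is lost
m = data.frame(cbind(txt = x, num = n))

# Passing the vectors to data.frame() directly keeps each column's type
d = data.frame(txt = x, num = n, stringsAsFactors = FALSE)

is.numeric(m$num)  # FALSE: coerced to character (or factor in older R)
is.numeric(d$num)  # TRUE
```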
# Word Cloud
# Text stemming requires the package 'SnowballC'.
# https://cran.r-project.org/web/packages/SnowballC/index.html
#Create corpus
corpus = Corpus(VectorSource(tweets$tweet))
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, stopwords('portuguese'))
corpus = tm_map(corpus, stemDocument)
wordcloud(corpus, max.words = 50, random.order = FALSE)
But I'm getting these errors:
Error in simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms), :
'i, j, v' different lengths
In addition: Warning messages:
1: In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
10000 tweets were requested but the API can only return 518
#I understand this one: I cannot get more tweets than exist
2: In mclapply(unname(content(x)), termFreq, control) : all scheduled cores encountered errors in user code
3: In simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms), : NAs introduced by coercion
It's my first time building a wordcloud and I followed tutorials like this one.
Is there a way to fix it? Another thing: the class of tweets$tweet
is "factor"; should I convert it, and if so, how do I do that?
Edit: I followed this tutorial, which defines a function to "clean" the text and builds a TermDocumentMatrix instead of calling stemDocument before building the wordcloud. It's working properly now.
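(For readers hitting the same errors, here is a sketch of that approach: clean the text first, build a TermDocumentMatrix, then plot word frequencies. The cleaning rules and sample tweets below are illustrative, not the tutorial's exact code:)

```r
library(tm)
library(wordcloud)

# Illustrative sample; in the real code this would be as.character(tweets$tweet)
raw = c("Carlos Eduardo fala sobre mobilidade http://t.co/abc",
        "Hoje tem debate dos candidatos em Natal!")

clean_text = function(x) {
  x = iconv(x, to = "UTF-8", sub = "")       # drop invalid bytes
  x = tolower(x)
  x = gsub("http\\S+", " ", x)               # remove URLs
  x = gsub("[^[:alnum:][:space:]]", " ", x)  # remove punctuation/symbols
  x
}

corpus = Corpus(VectorSource(clean_text(raw)))
corpus = tm_map(corpus, removeWords, stopwords("portuguese"))

# Build the term-document matrix and rank terms by frequency
tdm = TermDocumentMatrix(corpus)
freq = sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

wordcloud(names(freq), freq, max.words = 50, random.order = FALSE)
```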