Tags: apache-spark, pyspark, apache-spark-sql, sparkr

Subsetting SparkR DataFrame based on column values matching another DataFrame's column values


I have two SparkR DataFrames, newHiresDF and salesTeamDF. I want to get the subset of newHiresDF whose newHiresDF$name values also appear in salesTeamDF$name, but I can't figure out a way to do this. Below is the code for my attempts.

#Create DataFrames
newHires <- data.frame(name = c("Thomas", "George", "Bill", "John"),
    surname = c("Smith", "Williams", "Brown", "Taylor"))
salesTeam <- data.frame(name = c("Thomas", "Bill", "George"),
    surname = c("Martin", "Clark", "Williams"))
newHiresDF <- createDataFrame(newHires)
salesTeamDF <- createDataFrame(salesTeam)
display(newHiresDF) #display() is Databricks-specific; showDF(newHiresDF) works in plain SparkR

#Try to subset newHiresDF based on name values in salesTeamDF
#All of the below result in errors
NHsubset1 <- filter(newHiresDF, newHiresDF$name %in% salesTeamDF$name)
NHsubset2 <- filter(newHiresDF, intersect(select(newHiresDF, 'name'), 
    select(salesTeamDF, 'name')))
NHsubset3 <- newHiresDF[newHiresDF$name %in% salesTeamDF$name,] #This is how it would be done in R

#What I'd like NHsubset to look like:
    name  surname
1 Thomas    Smith
2 George Williams
3   Bill    Brown

PySpark code will also work if you prefer that.


Solution

  • Figured out a solution that seems simple in hindsight: just use merge, which by default joins the two DataFrames on their shared column names (here, name). Selecting only name from salesTeamDF keeps its surname column out of the result.

    NHsubset <- merge(newHiresDF, select(salesTeamDF, 'name'))

    (If the duplicated join column comes back suffixed as name_x/name_y, select or rename the columns you need afterwards.)