python, apache-spark, join, pyspark, apache-spark-sql

How to join on multiple columns in PySpark?


I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL).

The following works. I first register the two DataFrames as temp tables:

numeric.registerTempTable("numeric")
Ref.registerTempTable("Ref")

Then I join them on a single column:

test = numeric.join(Ref, numeric.ID == Ref.ID, joinType='inner')

I would now like to join them based on multiple columns.

I get SyntaxError: invalid syntax with this:

test = numeric.join(Ref,
    numeric.ID == Ref.ID AND numeric.TYPE == Ref.TYPE AND
    numeric.STATUS == Ref.STATUS, joinType='inner')

Solution

  • You should use the & and | operators and be careful about operator precedence: == has lower precedence than bitwise & and |, so each comparison has to be wrapped in parentheses:

    df1 = sqlContext.createDataFrame(
        [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
        ("x1", "x2", "x3"))
    
    df2 = sqlContext.createDataFrame(
        [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x3"))
    
    # Parentheses are required: & binds tighter than ==
    df = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
    df.show()
    
    ## +---+---+---+---+---+---+
    ## | x1| x2| x3| x1| x2| x3|
    ## +---+---+---+---+---+---+
    ## |  2|  b|3.0|  2|  b|0.0|
    ## +---+---+---+---+---+---+
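
    Applied to the DataFrames from the question, the same pattern looks like this (a sketch: it assumes your numeric and Ref DataFrames have the ID, TYPE and STATUS columns shown in your attempt; the registerTempTable calls are not needed for a DataFrame-API join):

    test = numeric.join(
        Ref,
        (numeric.ID == Ref.ID) &
        (numeric.TYPE == Ref.TYPE) &
        (numeric.STATUS == Ref.STATUS),
        joinType='inner')  # Spark 1.3 keyword; later releases use how='inner'

    In Spark 1.3 the keyword argument is joinType, as in your snippet; later releases rename the parameters to on and how. Current PySpark versions also accept a list of column names for an equi-join, which keeps only one copy of each join column in the result (this form is not available in 1.3):

    df = df1.join(df2, ["x1", "x2"], "inner")  # x1 and x2 appear once each in the result
    df.show()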