Tags: apache-spark, apache-spark-sql, hiveql, apache-spark-dataset

Array Intersection in Spark SQL


I have a table with an array-type column named writer, which holds values like array[value1, value2], array[value2, value3], etc.

I am doing a self join to get rows whose arrays share common values. I tried:

sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECTION(R1.writer, R2.writer)[0] is not null ")

And

sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECT(R1.writer, R2.writer)[0] is not null ")

But both failed with the same exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Undefined function: 'ARRAY_INTERSECT'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 80

It seems Spark SQL supports neither ARRAY_INTERSECTION nor ARRAY_INTERSECT. How can I achieve this in Spark SQL?


Solution

  • You'll need a UDF:

    import org.apache.spark.sql.functions.udf
    
    spark.udf.register("array_intersect", 
      (xs: Seq[String], ys: Seq[String]) => xs.intersect(ys))
    

    and then check if intersection is empty:

    scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('3', '4'))) = 0").show
    +-----------------------------------------+
    |(size(UDF(array(1, 2), array(3, 4))) = 0)|
    +-----------------------------------------+
    |                                     true|
    +-----------------------------------------+
    
    
    scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('1', '4'))) = 0").show
    +-----------------------------------------+
    |(size(UDF(array(1, 2), array(1, 4))) = 0)|
    +-----------------------------------------+
    |                                    false|
    +-----------------------------------------+
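    With the UDF registered, the original self join can be rewritten against it. A sketch, assuming the table is registered as `table` with the `id` and `writer` columns from the question:

    ```scala
    // Keep only pairs of distinct rows whose writer arrays overlap.
    // `size(...) > 0` tests that the intersection is non-empty.
    spark.sql("""
      SELECT R2.writer
      FROM table R1 JOIN table R2 ON R1.id != R2.id
      WHERE size(array_intersect(R1.writer, R2.writer)) > 0
    """).show
    ```

    Note that since Spark 2.4 a built-in `array_intersect` function is available, so on newer versions you can drop the UDF registration and the same query works out of the box.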