Tags: json, scala, apache-spark, json4s

How to convert Row to json in Spark 2 Scala


Is there a simple way to convert a given Row object to JSON?

Found this about converting a whole DataFrame to JSON output: Spark Row to JSON

But I just want to convert one Row to JSON. Here is pseudocode for what I am trying to do.

More precisely, I am reading JSON as input into a DataFrame. I am producing a new output that is mainly based on columns, but with one JSON field for all the info that does not fit into the columns.

My question is: what is the easiest way to write this function, convertRowToJson()?

def convertRowToJson(row: Row): String = ???

def transformVenueTry(row: Row): Try[Venue] = {
  Try {
    val name = row.getString(row.fieldIndex("name"))
    val metadataRow = row.getStruct(row.fieldIndex("meta"))
    val score: Double = calcScore(row)
    // pseudocode: Row has no ++ operator; the intent is to append a
    // "score" field to the metadata struct
    val combinedRow: Row = metadataRow ++ ("score" -> score)
    val jsonString: String = convertRowToJson(combinedRow)
    Venue(name = name, json = jsonString)
  }
}

Psidom's solution:

import scala.util.parsing.json.JSONObject

def convertRowToJSON(row: Row): String = {
    val m = row.getValuesMap(row.schema.fieldNames)
    JSONObject(m).toString()
}

only works if the Row has a single level, not with nested Rows. This is the schema:

StructType(
  StructField(indicator,StringType,true),
  StructField(range,
    StructType(
      StructField(currency_code,StringType,true),
      StructField(maxrate,LongType,true),
      StructField(minrate,LongType,true)),true))
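
For a nested schema like this, one option is to walk the Row's schema recursively. Below is a minimal sketch of my own (untested; the helper rowToJValue and its type handling are assumptions, not a confirmed solution) that uses json4s to descend into nested Rows instead of flattening them:

import org.apache.spark.sql.Row
import org.json4s.JsonAST._
import org.json4s.jackson.JsonMethods.{compact, render}

// Walk the Row's schema and recurse into nested Rows instead of
// calling toString on them, as getValuesMap effectively does.
def rowToJValue(row: Row): JValue = {
  val fields = row.schema.fields.toList.zipWithIndex.map { case (field, i) =>
    val value: JValue = row.get(i) match {
      case null       => JNull
      case r: Row     => rowToJValue(r)          // nested struct
      case s: String  => JString(s)
      case n: Long    => JInt(n)
      case n: Int     => JInt(BigInt(n))
      case d: Double  => JDouble(d)
      case b: Boolean => JBool(b)
      case other      => JString(other.toString) // fallback
    }
    field.name -> value
  }
  JObject(fields: _*)
}

def convertRowToJson(row: Row): String = compact(render(rowToJValue(row)))

The fallback case just stringifies anything unhandled (arrays, maps, decimals), so a real version would need cases for those types as well.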

Also tried Artem's suggestion, but that did not compile:

def row2DataFrame(row: Row, sqlContext: SQLContext): DataFrame = {
  val sparkContext = sqlContext.sparkContext
  import sparkContext._
  import sqlContext.implicits._
  import sqlContext._
  val rowRDD: RDD[Row] = sqlContext.sparkContext.makeRDD(row :: Nil)
  val dataFrame = rowRDD.toDF() // does not compile: toDF() needs an implicit Encoder, which Row lacks
  dataFrame
}
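
For reference, the usual way to make this compile (my assumption, not Artem's original code) is to skip toDF() and pass the schema explicitly:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SQLContext}

// An RDD[Row] cannot use toDF(); supply the schema via createDataFrame instead.
def row2DataFrame(row: Row, sqlContext: SQLContext): DataFrame = {
  val rowRDD: RDD[Row] = sqlContext.sparkContext.makeRDD(row :: Nil)
  sqlContext.createDataFrame(rowRDD, row.schema)
}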

Solution

  • I need to read JSON input and produce JSON output. Most fields are handled individually, but a few JSON sub-objects just need to be preserved.

    When Spark reads a DataFrame, it turns each record into a Row. The Row is a JSON-like structure that can be transformed and written out as JSON.

    But I need to extract some JSON substructures into a string to use as a new field.

    This can be done like this:

    import org.apache.spark.sql.functions.to_json

    val dataFrameWithJsonField = dataFrame.withColumn("address_json", to_json($"location.address"))
    

    location.address is the path to the JSON sub-object within the incoming JSON-based DataFrame; address_json is the column name holding that object converted to a string version of the JSON.

    to_json is available as of Spark 2.1.

    If generating the output JSON using json4s, address_json should first be parsed into an AST representation; otherwise the output JSON will have the address_json part escaped as a quoted string.
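
    For example, a minimal sketch (addressJsonString is a hypothetical variable holding the column value):

    import org.json4s._
    import org.json4s.jackson.JsonMethods.parse

    // Re-parse the stringified column so it embeds in the output as a
    // JSON object rather than a doubly escaped string.
    val addressAst: JValue = parse(addressJsonString)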