
How to select a Scala DataFrame column with special characters in its name?


I am reading a JSON file where a key contains special characters, e.g.:

[{
        "ABB/aws:1.0/CustomerId:2.0": [{
            "id": 20,
            "namehash": "de8cfcde-95c5-47ac-a544-13db50557eaa"
        }]
}]

I am creating a Scala DataFrame and then trying to select the column "ABB/aws:1.0/CustomerId:2.0" using spark.sql. That is when it complains about the special characters.

The DataFrame has this key as its only column (it matches the output shown in the solution below).
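For reference, here is one way the DataFrame can be created in spark-shell. This is a sketch: the path data.json is a placeholder for wherever the file actually lives, and the multiLine option is needed because each JSON record spans several lines.

    scala> val df = spark.read.option("multiLine", "true").json("data.json")
    scala> df.printSchema()
    root
     |-- ABB/aws:1.0/CustomerId:2.0: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- id: long (nullable = true)
     |    |    |-- namehash: string (nullable = true)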


Solution

  • Wrap the column name in backticks to select a column whose name contains special characters. Check the code below (a Spark SQL variant follows the output).

    scala> df.select("`ABB/aws:1.0/CustomerId:2.0`").show(false)
    +--------------------------------------------+
    |ABB/aws:1.0/CustomerId:2.0                  |
    +--------------------------------------------+
    |[{20, de8cfcde-95c5-47ac-a544-13db50557eaa}]|
    +--------------------------------------------+
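    The same backtick quoting carries over to spark.sql, which the question was using. A minimal sketch, assuming the DataFrame is registered under a temporary view named jsonView (the view name is arbitrary):

    scala> df.createOrReplaceTempView("jsonView")
    scala> spark.sql("SELECT `ABB/aws:1.0/CustomerId:2.0` FROM jsonView").show(false)
    +--------------------------------------------+
    |ABB/aws:1.0/CustomerId:2.0                  |
    +--------------------------------------------+
    |[{20, de8cfcde-95c5-47ac-a544-13db50557eaa}]|
    +--------------------------------------------+

    The col() function accepts the same backtick-quoted name, which matters here because the dots in "1.0" and "2.0" would otherwise be parsed as nested-field access:

    scala> import org.apache.spark.sql.functions.col
    scala> df.select(col("`ABB/aws:1.0/CustomerId:2.0`")).show(false)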