
How to change dataframe column names in PySpark?


I come from a pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful with one command:

df.columns = new_column_name_list

However, the same approach doesn't work for PySpark dataframes created using sqlContext. The only way I could figure out to do this easily is the following:

# First read: let Spark infer the schema from the file
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")

# Rename each field of the inferred schema in place
oldSchema = df.schema
for i, k in enumerate(oldSchema.fields):
    k.name = new_column_name_list[i]

# Second read: reload the file with the renamed schema
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)

This essentially defines the variable twice: it infers the schema first, then renames the columns, and then loads the dataframe again with the updated schema.

Is there a better, more efficient way to do this, as there is in pandas?

My Spark version is 1.5.0.


Solution

  • There are many ways to do that:

    • Option 1. Using selectExpr.

       data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], 
                                         ["Name", "askdaosdka"])
       data.show()
       data.printSchema()
      
       # Output
       #+-------+----------+
       #|   Name|askdaosdka|
       #+-------+----------+
       #|Alberto|         2|
       #| Dakota|         2|
       #+-------+----------+
      
       #root
       # |-- Name: string (nullable = true)
       # |-- askdaosdka: long (nullable = true)
      
       df = data.selectExpr("Name as name", "askdaosdka as age")
       df.show()
       df.printSchema()
      
       # Output
       #+-------+---+
       #|   name|age|
       #+-------+---+
       #|Alberto|  2|
       #| Dakota|  2|
       #+-------+---+
      
       #root
       # |-- name: string (nullable = true)
       # |-- age: long (nullable = true)
      
    • Option 2. Using withColumnRenamed; note that this method also lets you "overwrite" the same column. The code below uses range, which works in both Python 2 and Python 3. A single-column variant follows the example.

       from functools import reduce
      
       oldColumns = data.schema.names
       newColumns = ["name", "age"]
      
       # Fold over the column indices, renaming one column per step
       df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]),
                   range(len(oldColumns)), data)
       df.printSchema()
       df.show()
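      
      For a one-off rename, withColumnRenamed can also be called directly, without the reduce; a minimal sketch against the same DataFrame (df2 is just an illustrative name):
      
       # Returns a new DataFrame with only that column renamed;
       # the original DataFrame is left unchanged
       df2 = data.withColumnRenamed("askdaosdka", "age")
       df2.printSchema()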
      
    • Option 3. Using alias; in Scala you can also use as. A list-based variant follows the example.

       from pyspark.sql.functions import col
      
       data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
       data.show()
      
       # Output
       #+-------+---+
       #|   name|age|
       #+-------+---+
       #|Alberto|  2|
       #| Dakota|  2|
       #+-------+---+
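      
      If, as in the question, you start from a whole list of new names, alias composes with a list comprehension to rename every column in a single select. A minimal sketch, assuming data still has its original columns and new_column_name_list holds exactly one new name per existing column:
      
       from pyspark.sql.functions import col
      
       # Hypothetical replacement names, one per existing column
       new_column_name_list = ["name", "age"]
      
       # Pair each current column with its new name and alias them all at once
       df = data.select([col(c).alias(n)
                         for c, n in zip(data.columns, new_column_name_list)])
       df.printSchema()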
      
    • Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.

       sqlContext.registerDataFrameAsTable(data, "myTable")
       df2 = sqlContext.sql("SELECT Name AS name, askdaosdka AS age FROM myTable")
      
       df2.show()
      
       # Output
       #+-------+---+
       #|   name|age|
       #+-------+---+
       #|Alberto|  2|
       #| Dakota|  2|
       #+-------+---+