
RDD to DataFrame conversion


I am new to PySpark. My code looks something like the example below, and I am not sure why df.collect() shows None for all the string values.

>>> rdd = sc.parallelize([{'name': 'test', 'age': {"id": 326, "first_name": "Will", "last_name": "Cur"}},
...                       {'name': 'test2', 'age': {"id": 751, "first_name": "Will", "last_name": "Mc"}}])
>>> rdd.collect()
[{'name': 'test', 'age': {'id': 326, 'first_name': 'Will', 'last_name': 'Cur'}}, {'name': 'test2', 'age': {'id': 751, 'first_name': 'Will', 'last_name': 'Mc'}}]
>>> df = spark.createDataFrame(rdd)
>>> df.collect()
[Row(age={'last_name': None, 'first_name': None, 'id': 326}, name='test'), Row(age={'last_name': None, 'first_name': None, 'id': 751}, name='test2')]

Solution

  • For complex data structures, Spark can have trouble inferring the schema from the RDD. Here it most likely infers the nested dict as a map whose value type comes from the first entry it samples (the integer id), which is why id survives while the string values cannot be cast and come back as None. Providing an explicit schema makes the conversion unambiguous (a programmatic StructType alternative is sketched after the example):

    df = spark.createDataFrame(
        rdd, 
        'name string, age struct<id:int, first_name:string, last_name:string>'
    )
    
    df.collect()
    # [Row(name='test', age=Row(id=326, first_name='Will', last_name='Cur')), 
    #  Row(name='test2', age=Row(id=751, first_name='Will', last_name='Mc'))]
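  • If you prefer building the schema programmatically rather than as a DDL string, the equivalent StructType looks like the sketch below (it assumes the same rdd and spark session as above; field nullability is left at its default):

    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    # Same schema as the DDL string, built field by field
    schema = StructType([
        StructField('name', StringType()),
        StructField('age', StructType([
            StructField('id', IntegerType()),
            StructField('first_name', StringType()),
            StructField('last_name', StringType()),
        ])),
    ])

    df = spark.createDataFrame(rdd, schema)
    df.printSchema()  # age now shows as a struct, not a map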