python · apache-spark · dictionary · pyspark · rdd

How to convert a PySpark DataFrame to a dictionary: first column as the main key, the remaining columns as nested key-value pairs?


I have created a DataFrame as follows in PySpark:

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

data_1 = [
    ("rule1", "", "1", "2", "3", "4"),
    ("rule2", "1", "3", "5", "6", "4"),
    ("rule3", "", "0", "1", "2", "5"),
    ("rule4", "0", "1", "3", "6", "2"),
]

schema = StructType(
    [
        StructField("_c0", StringType(), True),
        StructField("para1", StringType(), True),
        StructField("para2", StringType(), True),
        StructField("para3", StringType(), True),
        StructField("para4", StringType(), True),
        StructField("para5", StringType(), True),
    ]
)
 
df = spark.createDataFrame(data=data_1, schema=schema)

This gives:

+-----+-----+-----+-----+-----+-----+
|_c0  |para1|para2|para3|para4|para5|
+-----+-----+-----+-----+-----+-----+
|rule1|     |1    |2    |3    |4    |
|rule2|1    |3    |5    |6    |4    |
|rule3|     |0    |1    |2    |5    |
|rule4|0    |1    |3    |6    |2    |
+-----+-----+-----+-----+-----+-----+

I want to convert it into a dictionary like this:

dict = {'rule1': {'para2': '1', 'para3': '2','para4': '3','para5': '4'},
        'rule2': {'para1': '1', 'para2': '3','para3': '5','para4': '6','para5': '4'}, ...}

Columns with empty ("") values should not appear in the nested dictionary; for example, "para1" is absent from the "rule1" entry, while all the other columns are present.

As a first attempt I tried this, but it does not give the desired result:

dict1 = df.rdd.map(lambda row: row.asDict()).collect()
final_dict = {d['_c0']: d[col] for d in dict1 for col in df.columns}

# Returns {'rule1': '4', 'rule2': '4', 'rule3': '5', 'rule4': '2'}
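
As far as I can tell, the problem is that the key is always `d['_c0']`, so for each row every column's value overwrites the previous one and only the last column survives. A tiny plain-Python reproduction (plain dicts standing in for the collected Rows, with made-up sample data):

```python
# Plain dicts standing in for Row.asDict() results (illustrative data).
rows = [
    {"_c0": "rule1", "para1": "", "para2": "1"},
    {"_c0": "rule2", "para1": "1", "para2": "3"},
]
columns = ["_c0", "para1", "para2"]

# The key d["_c0"] repeats for every column of a row, so each
# assignment overwrites the last -- only the final column's value stays.
flat = {d["_c0"]: d[col] for d in rows for col in columns}
# flat == {"rule1": "1", "rule2": "3"}
```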

Solution

  • You can use a nested dictionary comprehension over the collected rows:

    dict_rules = {r['_c0']: {k: v 
                             for k, v in r.asDict().items() 
                             if k != '_c0' and v != ''}
                  for r in df.collect()}
    
    # {'rule1': {'para2': '1', 'para3': '2', 'para4': '3', 'para5': '4'},
    #  'rule2': {'para1': '1', 'para2': '3', 'para3': '5', 'para4': '6', 'para5': '4'},
    #  'rule3': {'para2': '0', 'para3': '1', 'para4': '2', 'para5': '5'},
    #  'rule4': {'para1': '0', 'para2': '1', 'para3': '3', 'para4': '6', 'para5': '2'}}
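
    If the DataFrame is large, one option is to do the per-row filtering on the executors and collect only the finished pairs with `collectAsMap()`. This is a sketch, not tested against your data; `row_to_pair` is a helper name I made up, and the commented line assumes `df` from the question plus an active SparkSession:

    ```python
    def row_to_pair(d, key_col="_c0"):
        """Turn one row (as a dict) into a (rule, {para: value}) pair,
        dropping the key column and any empty-string values."""
        return d[key_col], {k: v for k, v in d.items() if k != key_col and v != ""}

    # On the DataFrame (requires a SparkSession):
    # dict_rules = df.rdd.map(lambda r: row_to_pair(r.asDict())).collectAsMap()
    ```

    The helper is ordinary Python, so you can sanity-check it on a plain dict before mapping it over the RDD.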