
PySpark | map JSON RDD and apply broadcast


In PySpark, how do you transform an input RDD of JSON strings into the output specified below, while applying a broadcast variable to the list of values?

Input

[{'id': 1, 'title': "Foo", 'items': ['a','b','c']}, {'id': 2, 'title': "Bar", 'items': ['a','b','d']}]

Broadcast variable

[('a', 5), ('b', 12), ('c', 42), ('d', 29)]

Desired Output

[(1, 'Foo', [5, 12, 42]), (2, 'Bar', [5, 12, 29])]

Solution

  • Edit: Originally I was under the impression that functions passed to map are automatically broadcast, but after reading some docs I am no longer sure of that.

    In any case, you can define your broadcast variable:

    bv = [('a', 5), ('b', 12), ('c', 42), ('d', 29)]
    
    # turn into a dictionary
    bv = dict(bv)
    broadcastVar = sc.broadcast(bv)
    print(broadcastVar.value)
    #{'a': 5, 'b': 12, 'c': 42, 'd': 29}
    

    Now it is available on all machines as a read-only variable, and you can access the dictionary through broadcastVar.value.

    For example:

    import json
    
    rdd = sc.parallelize(
        [
            '{"id": 1, "title": "Foo", "items": ["a","b","c"]}',
            '{"id": 2, "title": "Bar", "items": ["a","b","d"]}'
        ]
    )
    
    def myMapper(row):
        # define the order of the values for your output
        key_order = ["id", "title", "items"]
    
        # load the json string into a dict
        d = json.loads(row)
    
        # replace the items using the broadcast variable dict
        d["items"] = [broadcastVar.value.get(item) for item in d["items"]]
    
        # return the values in order
        return tuple(d[k] for k in key_order)
    
    print(rdd.map(myMapper).collect())
    #[(1, 'Foo', [5, 12, 42]), (2, 'Bar', [5, 12, 29])]
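
    One caveat with the mapper above: dict.get returns None for any item that is missing from the broadcast dictionary. If your data can contain unmapped items, you can pass a default to get. In the sketch below the helper name myMapperWithDefault and the fallback value 0 are just assumptions, so substitute whatever sentinel fits your data:

    # hypothetical variant of myMapper's lookup: fall back to an assumed
    # default of 0 when an item is missing from the broadcast dict
    def myMapperWithDefault(row):
        d = json.loads(row)
        d["items"] = [broadcastVar.value.get(item, 0) for item in d["items"]]
        return (d["id"], d["title"], d["items"])

    Also note that a broadcast variable is meant to be read-only on the executors; once you no longer need it, broadcastVar.unpersist() releases the cached copies on the executors.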