Tags: pyspark, aggregate, pyspark-pandas

Pandas on Spark apply() seems to be reshaping columns


Can anybody explain the following behavior?

import pyspark.pandas as ps

loan_information = ps.read_sql_query([blah])

loan_information.shape
# (748834, 84)

loan_information.apply(lambda col: col.shape)
# Each column comes back as 75 batches: the first 74 have 10000 rows, the last has 8834
# This still sums to 748834, but hardly seems like desirable behavior

My guess is that batches of size 10000 are being fed to the executors, but again, this seems like pretty undesirable behavior.
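For what it's worth, the behavior reproduces without a database. Below is a minimal, hypothetical sketch with synthetic data standing in for the SQL source; the exact batch sizes you see will depend on your Spark configuration:

import pandas as pd
import pyspark.pandas as ps

# Synthetic stand-in for the SQL source: 25,000 rows across two columns.
psdf = ps.from_pandas(pd.DataFrame({"a": range(25_000), "b": range(25_000)}))

psdf.shape
# (25000, 2)

# Each column is processed in several internal batches, so the lambda
# runs once per batch and returns one shape per batch rather than one
# shape for the whole column.
psdf.apply(lambda col: col.shape)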


Solution

  • The documentation is quite clear:

    when axis is 0 or ‘index’, the func is unable to access to the whole input series. pandas-on-Spark internally splits the input series into multiple batches and calls func with each batch multiple times. Therefore, operations such as global aggregations are impossible. See the example below.

    .apply is for non-aggregation functions; if you want to do aggregate-type operations, use something like .aggregate instead (see the sketch below).
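
As an illustration (adapted from the example in the pandas-on-Spark documentation for DataFrame.apply), compare the per-batch result of .apply with a true global aggregate; the batch counts shown are indicative, not exact:

import pyspark.pandas as ps

psdf = ps.DataFrame({"A": range(20_000)})

# With axis=0, apply calls the function once per internal batch, so a
# "global" count comes back as several partial counts, one per batch.
def length(s) -> ps.Series[int]:
    return len(s)

psdf.apply(length)
# several partial counts (for example two rows of 10000), not a single 20000

# For genuine aggregation, use the aggregation API (or a direct
# DataFrame method), which operates on the whole column:
psdf.agg({"A": "count"})
len(psdf)
# 20000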