Tags: apache-spark, apache-spark-sql, jvm, directmemory

spark shuffle memory error: failed to allocate direct memory


When performing several joins (4x) on Spark DataFrames, I get the following error:

org.apache.spark.shuffle.FetchFailedException: failed to allocate 16777216 byte(s) of direct memory (used: 4294967296, max: 4294967296)

Even when setting:

--conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \

the error persists.
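For reference, the full submit command looks roughly like this sketch (the master, memory size, and application jar are placeholders; note the = between the property name and the JVM flag):

spark-submit \
  --master yarn \
  --executor-memory 8G \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  my-app.jar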


Solution

  • Seems like there are too many in-flight blocks. Try smaller values of spark.reducer.maxBlocksInFlightPerAddress; a submit sketch follows the quoted text below. For reference, take a look at this JIRA

    Quoting text:

    For configurations with external shuffle enabled, we have observed that if a very large no. of blocks are being fetched from a remote host, it puts the NM under extra pressure and can crash it. This change introduces a configuration spark.reducer.maxBlocksInFlightPerAddress, to limit the no. of map outputs being fetched from a given remote address. The changes applied here are applicable for both the scenarios - when external shuffle is enabled as well as disabled.
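
    A minimal sketch of passing the setting at submit time; the value 50 is only an assumed starting point (the default is effectively unlimited), so tune it for your workload, and the application jar is a placeholder:

    spark-submit \
      --conf spark.reducer.maxBlocksInFlightPerAddress=50 \
      my-app.jar

    Lowering this caps how many map output blocks are fetched concurrently from any single remote address, which in turn bounds the direct-memory buffers held at once during the shuffle fetch.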