Tags: python, apache-spark, pyspark, memory-leaks

PySpark df.toPandas() throws error "org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)"


Using PySpark, I am attempting to convert a Spark DataFrame to a pandas DataFrame with the following:

# Enable Arrow-based columnar data transfers
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

data.toPandas()

This raises the error "org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)", and I'm not sure why. The error occurs even on a subset of the data with only 10 rows, and removing the spark.conf.set("spark.sql.execution.arrow.enabled", "true") line makes no difference to the error I receive.
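
For reference, here is a minimal, self-contained sketch of what I am running (the DataFrame contents below are illustrative; my real data comes from a larger pipeline):

# Minimal reproduction sketch -- the sample rows are illustrative only
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("toPandasRepro").getOrCreate()

# Enable Arrow-based columnar data transfers
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

data = spark.createDataFrame(
    [(1, "a"), (2, "b"), (3, "c")],
    ["id", "value"],
)

# Fails with the TaskCompletionListenerException shown below
data.toPandas()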

Full error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
In  [20]:
Line 8:     data.toPandas()

File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\pandas\conversion.py, in toPandas:
Line 108:   batches = self.toDF(*tmp_column_names)._collect_as_arrow()

File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\pandas\conversion.py, in _collect_as_arrow:
Line 244:   jsocket_auth_server.getResult()

File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py, in __call__:
Line 1305:  answer, self.gateway_client, self.target_id, self.name)

File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\utils.py, in deco:
Line 128:   return f(*a, **kw)

File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py, in get_return_value:
Line 328:   format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o204.getResult.
: org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302)
    at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:88)
    at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:84)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 26.0 failed 1 times, most recent failure: Lost task 0.0 in stage 26.0 (TID 26, <user>, executor driver): org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)
Allocator(toBatchIterator) 0/376832/376832/9223372036854775807 (res/actual/peak/limit)


Previous exception in task: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
    io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:490)
    io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
    io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
    io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
    org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
    org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
    org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:226)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.$anonfun$next$1(ArrowConverters.scala:118)
    scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:121)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:97)
    scala.collection.Iterator.foreach(Iterator.scala:941)
    scala.collection.Iterator.foreach$(Iterator.scala:941)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.foreach(ArrowConverters.scala:97)
    scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.to(ArrowConverters.scala:97)
    scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.toBuffer(ArrowConverters.scala:97)
    scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.toArray(ArrowConverters.scala:97)
    org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$6(Dataset.scala:3562)
    org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2193)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    org.apache.spark.scheduler.Task.run(Task.scala:127)
    org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:137)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2194)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$5(Dataset.scala:3560)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2(Dataset.scala:3564)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2$adapted(Dataset.scala:3541)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1(Dataset.scala:3541)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1$adapted(Dataset.scala:3540)
    at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$2(SocketAuthServer.scala:130)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1(SocketAuthServer.scala:132)
    at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1$adapted(SocketAuthServer.scala:127)
    at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:104)
    at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:98)
    at org.apache.spark.security.SocketAuthServer$$anon$1.$anonfun$run$1(SocketAuthServer.scala:60)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.security.SocketAuthServer$$anon$1.run(SocketAuthServer.scala:60)
Caused by: org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)
Allocator(toBatchIterator) 0/376832/376832/9223372036854775807 (res/actual/peak/limit)


Previous exception in task: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
    io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:490)
    io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
    io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
    io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
    org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
    org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
    org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:226)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.$anonfun$next$1(ArrowConverters.scala:118)
    scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:121)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:97)
    scala.collection.Iterator.foreach(Iterator.scala:941)
    scala.collection.Iterator.foreach$(Iterator.scala:941)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.foreach(ArrowConverters.scala:97)
    scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.to(ArrowConverters.scala:97)
    scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.toBuffer(ArrowConverters.scala:97)
    scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.toArray(ArrowConverters.scala:97)
    org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$6(Dataset.scala:3562)
    org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2193)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    org.apache.spark.scheduler.Task.run(Task.scala:127)
    org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:137)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)

---------------------------------------------------------------------------

Any hints or suggestions on how to resolve this?

Thank you


Solution

  • It turned out that the older version of Spark I was running was the problem. Upgrading Spark resolved the issue for me. You can use the SPARK_HOME environment variable to try a different version:

    import os

    # 1. Download spark-3.1.1-bin-hadoop2.7.tgz from https://archive.apache.org/dist/spark/spark-3.1.1/
    #    (A different version may also work; this one worked for me, and a newer release
    #    with the log4j fix may be available by now.)
    # 2. Open Git Bash, then:
    #    >> cd <spark-3.1.1-bin-hadoop2.7.tgz location>
    #    >> tar xzvf spark-3.1.1-bin-hadoop2.7.tgz
    # 3. Set the system environment variable (used by spark_esri):
    #    SPARK_HOME: <path/to/spark-3.1.1-bin-hadoop2.7>
    os.environ["SPARK_HOME"] = r"C:\spark-3.1.1-bin-hadoop2.7"
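
  • After extracting the new build and setting SPARK_HOME, a quick sanity check is to start a fresh session against it and retry the conversion. The sketch below is only illustrative: it assumes the findspark helper package is installed and uses a small made-up DataFrame.

    import os
    import findspark  # assumption: installed via `pip install findspark`

    # Point at the freshly extracted Spark build before creating a session
    os.environ["SPARK_HOME"] = r"C:\spark-3.1.1-bin-hadoop2.7"
    findspark.init()  # adds the SPARK_HOME PySpark to sys.path

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("arrowCheck").getOrCreate()
    print(spark.version)  # should now report the upgraded version, e.g. 3.1.1

    # Retry the failing conversion on a tiny illustrative DataFrame.
    # (In Spark 3.x the preferred key is spark.sql.execution.arrow.pyspark.enabled;
    # the old key still works but logs a deprecation warning.)
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    sample = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
    print(sample.toPandas())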