google-cloud-dataproc · google-cloud-bigtable

Dataproc serverless writing to Bigtable: org.apache.spark.SparkException: Task failed while writing rows


How do I find out the root cause? (I'm reading from Cassandra and writing to Bigtable.)

I've tried:

  • looking through Cassandra logs
  • eliminating columns in case it was a data issue
  • reducing spark.cassandra.input.fetch.size_in_rows from 100 to 10
  • setting spark.speculation to both true and false
  • etc.

The job does load hundreds of thousands of rows before it throws the error, and Bigtable has terabytes of free space. (A rough reconstruction of the write path follows the stack trace below.)

23/03/30 18:13:42 WARN TaskSetManager: Lost task 5.0 in stage 1.0 (TID 6) (10.128.0.46 executor 1): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.internal.io.SparkHadoopWriter$.executeTask(SparkHadoopWriter.scala:163)
        at org.apache.spark.internal.io.SparkHadoopWriter$.$anonfun$write$1(SparkHadoopWriter.scala:88)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IllegalArgumentException: 1 time, servers with issues: bigtable.googleapis.com
        at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.getExceptions(BigtableBufferedMutator.java:188)
        at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:142)
        at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.mutate(BigtableBufferedMutator.java:133)
        at org.apache.hadoop.hbase.mapred.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:101)
        at org.apache.hadoop.hbase.mapred.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:52)
        at org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil.write(SparkHadoopWriter.scala:246)
        at org.apache.spark.internal.io.SparkHadoopWriter$.$anonfun$executeTask$1(SparkHadoopWriter.scala:138)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1525)
        at org.apache.spark.internal.io.SparkHadoopWriter$.executeTask(SparkHadoopWriter.scala:135)
        ... 9 more
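
For context, here is a rough reconstruction of the pipeline as a Scala sketch, assuming spark-cassandra-connector for the read and bigtable-hbase with saveAsHadoopDataset for the write (which matches the SparkHadoopWriter / mapred.TableOutputFormat frames in the trace). All project, instance, keyspace, table, and column names below are hypothetical:

    import com.datastax.spark.connector._
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapred.TableOutputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.hadoop.mapred.JobConf
    import org.apache.spark.sql.SparkSession

    object CassandraToBigtable {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("cassandra-to-bigtable")
          .config("spark.cassandra.connection.host", "cassandra.internal") // hypothetical
          .config("spark.cassandra.input.fetch.size_in_rows", "10")
          .getOrCreate()
        val sc = spark.sparkContext

        // bigtable-hbase routes HBase client calls to bigtable.googleapis.com;
        // the connection class name depends on the bigtable-hbase major version.
        val jobConf = new JobConf(sc.hadoopConfiguration)
        jobConf.set("hbase.client.connection.impl",
          "com.google.cloud.bigtable.hbase2_x.BigtableConnection")
        jobConf.set("google.bigtable.project.id", "my-project")   // hypothetical
        jobConf.set("google.bigtable.instance.id", "my-instance") // hypothetical
        jobConf.set(TableOutputFormat.OUTPUT_TABLE, "my_table")   // hypothetical
        jobConf.setOutputFormat(classOf[TableOutputFormat])

        // Hypothetical keyspace, table, and column names.
        val mutations = sc.cassandraTable("my_keyspace", "my_table").map { row =>
          val key = Bytes.toBytes(row.getString("id"))
          // new Put(...) rejects a zero-length row key with IllegalArgumentException
          val put = new Put(key)
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"),
            Bytes.toBytes(row.getString("value")))
          (new ImmutableBytesWritable(key), put)
        }

        // The SparkHadoopWriter / mapred.TableOutputFormat path from the trace.
        mutations.saveAsHadoopDataset(jobConf)
        spark.stop()
      }
    }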

Solution

  • It turns out that a few rows from Cassandra were corrupt: a handful of rows had nulls in their keys. I discovered this by accident after dumping the table to CSV files and loading them into another database.

    After removing the corrupt rows, everything loaded fine. (A defensive check like the sketch below would have caught it sooner.)
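
For reference, this is consistent with the exception in the trace: HBase's Put constructor rejects a zero-length row key with an IllegalArgumentException, and a null Cassandra key can easily end up as exactly that. A small guard like the sketch below (the "id" key column is hypothetical) flags corrupt rows up front instead of failing mid-write:

    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.util.Bytes

    // Build a Put only when the key is present and non-empty: new Put(key)
    // throws IllegalArgumentException for a zero-length row key, which
    // bigtable-hbase then surfaces as a failed action.
    def safePut(key: String): Option[Put] =
      Option(key).filter(_.nonEmpty).map(k => new Put(Bytes.toBytes(k)))

    // e.g. rows.flatMap(r => safePut(r.getString("id")))  // "id" is hypothetical

Counting or logging the rows that fail this check pinpoints the corrupt source rows without resorting to a full CSV dump.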