We are using Spark 1.6 and, while running jobs in spark-shell, we observed that tasks are reading shuffle data but never writing it back, so the tasks do not complete, as shown in the table below:
| Address | Task Time | Total Tasks | Failed Tasks | Succeeded Tasks | Shuffle Read (Size / Records) | Shuffle Write (Size / Records) |
|---------|-----------|-------------|--------------|-----------------|-------------------------------|--------------------------------|
| 1       | 0         | 0           | 0            | 0               | 188 KB / 707                  | 0.0 B / 670                    |
The Spark program is using 5 executors, each with 5 GB of memory and 3 cores. Please suggest what could be causing this.
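For reference, this is roughly how those resources would be requested; the exact launch flags and config keys are an assumption on my part, since the actual spark-shell command is not shown above:

```scala
// A minimal sketch of requesting 5 executors x 5 GB x 3 cores.
// Equivalent launch flags (assumed, not taken from the question):
//   spark-shell --num-executors 5 --executor-memory 5g --executor-cores 3
// The same values can also be set via SparkConf before the context is created:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("shuffle-debug")
  .set("spark.executor.instances", "5") // number of executors (YARN)
  .set("spark.executor.memory", "5g")   // memory per executor
  .set("spark.executor.cores", "3")     // cores per executor

val sc = new SparkContext(conf)
```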
I solved this issue by increasing the number of partitions (and therefore tasks) in the cluster settings.
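As a sketch of what "increasing the number of tasks" can look like in practice in Spark 1.6 (the partition counts and the sample dataset below are illustrative placeholders, not the exact values from my cluster settings):

```scala
// Run inside spark-shell (sc and sqlContext are provided by the shell in 1.6).

// 1. Repartition an RDD so the shuffle stage runs more (smaller) tasks:
val data = sc.parallelize(1 to 1000000)   // placeholder dataset
val repartitioned = data.repartition(200) // 200 is an illustrative count
repartitioned.map(x => (x % 10, x)).reduceByKey(_ + _).count()

// 2. For DataFrame/SQL shuffles, raise the default shuffle partition count:
sqlContext.setConf("spark.sql.shuffle.partitions", "200")

// 3. For plain RDD shuffles, the default parallelism can be raised at launch:
//    spark-shell --conf spark.default.parallelism=200
```

More, smaller tasks keep each task's shuffle output small enough to finish, which is why raising the partition count resolved the stuck shuffle-write behaviour here.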