Tags: apache-spark, hadoop, pyspark, hadoop-yarn, oozie

Oozie Spark Action (Containing Pyspark Script) Stuck in RUNNING


It's my first time trying to run a Spark action containing a PySpark script in Oozie. Please note that I'm using CDH 5.13 on my local machine (a VM with 12 GB of RAM), and Hue to build the workflow.

The workflow.xml is as follows:

<workflow-app name="sparkMLpy" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-c06a"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="spark-c06a">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn</master>
            <mode>client</mode>
            <name>MySpark</name>
            <jar>sparkml.py</jar>
            <file>/user/cloudera/sparkml.py#sparkml.py</file>
        </spark>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
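(For context: the <file> element ships /user/cloudera/sparkml.py through the distributed cache and, because of the #sparkml.py fragment, symlinks it as sparkml.py in the container's working directory, which is why <jar> can refer to the bare file name.)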

I've also tried adding some options:

--conf spark.dynamicAllocation.enabled=true 
--conf spark.shuffle.service.enabled=true 
--conf spark.dynamicAllocation.minExecutors=1
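For reference, in a spark action these flags belong in a <spark-opts> element inside the <spark> block. A minimal sketch against the action above (placement per the spark-action 0.2 schema; elements not shown are unchanged):

<spark xmlns="uri:oozie:spark-action:0.2">
    <!-- job-tracker, name-node, master, mode, name as in the workflow above -->
    <jar>sparkml.py</jar>
    <spark-opts>--conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1</spark-opts>
    <file>/user/cloudera/sparkml.py#sparkml.py</file>
</spark>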

Here is the PySpark script (it does pretty much nothing):

from pyspark import SparkContext
from pyspark.sql import SQLContext   # imported but unused
from pyspark.sql.types import *      # imported but unused

sc = SparkContext()
# Load the raw CSV from HDFS and drop the header line
log_txt = sc.textFile("/user/cloudera/CCHS.txt")
header = log_txt.first()
log_txt = log_txt.filter(lambda line: line != header)
# Split each remaining line on commas
temp_var = log_txt.map(lambda k: k.split(","))
# Write the result back to HDFS
c_path_out = "/user/cloudera/output/Frth"
temp_var.saveAsTextFile(c_path_out)
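As a sanity check, the same script can be run outside Oozie with spark-submit (a sketch, assuming spark-submit is on the PATH of the quickstart VM):

# Local mode first, to rule out YARN resource problems
spark-submit --master "local[2]" sparkml.py

# Then the same master/mode the Oozie action uses
spark-submit --master yarn --deploy-mode client sparkml.py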

Here is a view of the workflow in Hue:

[Screenshot: view of the workflow in Hue]

Here is the job.properties:

oozie.use.system.libpath=True
send_email=False
dryrun=False
nameNode=hdfs://quickstart.cloudera:8020
jobTracker=quickstart.cloudera:8032
security_enabled=False
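For reference, submitting the same workflow from the command line would look like the sketch below; the oozie.wf.application.path value is an assumption (Hue normally fills it in), and the Oozie URL uses the default port 11000:

# Assumed HDFS path of the deployed workflow directory (adjust to your workspace)
# oozie.wf.application.path=hdfs://quickstart.cloudera:8020/user/hue/oozie/workspaces/sparkMLpy

oozie job -oozie http://quickstart.cloudera:11000/oozie -config job.properties -run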

When I run the workflow, it reports no error, but it keeps running with no result (it's not even suspended). Here is a view of the logs:

[Screenshot: view of the logs]

I've also tried adding the options below:

--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/local/bin/python2.7 
--conf spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON=/usr/local/bin/python2.7
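(These spark.yarn.appMasterEnv.* properties set environment variables in the YARN application master; they only help if /usr/local/bin/python2.7 actually exists on the node, which is an assumption about the VM.)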

It was still always stuck in RUNNING. When I checked the logs, I found these warnings:

Heart beat
2019-01-04 02:05:32,398 [Timer-0] WARN  org.apache.spark.scheduler.cluster.YarnScheduler  - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-01-04 02:05:47,397 [Timer-0] WARN  org.apache.spark.scheduler.cluster.YarnScheduler  - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
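This warning means YARN registered the application but never granted it executor containers. On a single 12 GB VM it's worth checking how much memory the NodeManager is allowed to hand out; a sketch of the relevant yarn-site.xml properties (the values are illustrative assumptions, not CDH defaults):

<!-- Total memory the NodeManager can allocate to containers -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
</property>
<!-- Largest single container the scheduler will grant -->
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
</property>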

Can you please help?


Solution

  • I had to run the same workflow in local mode (not yarn), and it works!

            <master>local</master>
            <mode>client</mode>
            <name>MySpark</name>
            <jar>sparkml.py</jar>
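A likely explanation (an assumption, but consistent with the "Initial job has not accepted any resources" warning): on a single-node quickstart VM, the Oozie launcher container ties up YARN resources that the Spark application itself then needs, so its executors are never scheduled; with master set to local, the script runs inside the launcher's JVM and requests no extra containers. For completeness, the full modified action, with everything except <master> unchanged:

<action name="spark-c06a">
    <spark xmlns="uri:oozie:spark-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <master>local</master>
        <mode>client</mode>
        <name>MySpark</name>
        <jar>sparkml.py</jar>
        <file>/user/cloudera/sparkml.py#sparkml.py</file>
    </spark>
    <ok to="End"/>
    <error to="Kill"/>
</action>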