Tags: hadoop, amazon-web-services, apache-spark, emr

Terminating a Spark step in AWS


I want to set up a series of Spark steps on an EMR Spark cluster and terminate the current step if it's taking too long. However, when I SSH into the master node and run hadoop job -list, the master node seems to believe that there are no jobs running. I don't want to terminate the cluster, because doing so would force me to buy a whole new hour of whatever cluster I'm running. Can anyone please help me terminate a Spark step in EMR without terminating the entire cluster?


Solution

  • That's easy. Spark steps on EMR run as YARN applications (which is why hadoop job -list shows nothing), so kill the application directly:

    yarn application -kill [application id]
    

    You can list your running applications with:

    yarn application -list
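
    If you want to automate this, a minimal sketch along these lines (run on the master node) picks out the currently running application ID and kills it. It assumes the step's Spark job is the only RUNNING YARN application, so adjust the filter if you run several at once:

    #!/usr/bin/env bash
    # Grab the first RUNNING YARN application ID and kill it.
    # Assumption: the Spark step is the only RUNNING application.
    APP_ID=$(yarn application -list -appStates RUNNING 2>/dev/null \
      | awk '/^application_/ {print $1; exit}')

    if [ -n "$APP_ID" ]; then
      echo "Killing $APP_ID"
      yarn application -kill "$APP_ID"
    else
      echo "No running YARN application found"
    fi

    Killing the application causes the step's spark-submit process to exit with a failure, so the step is typically marked as failed; whether later steps still run depends on that step's ActionOnFailure setting. The cluster itself keeps running either way.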