
Spring Cloud Data Flow on YARN - unable to access jarfile


I'm trying to run a simple Spring Batch task on Spring Cloud Data Flow for YARN. Unfortunately, when I run it I get this error message in the ResourceManager UI:

Application application_1473838120587_5156 failed 1 times due to AM    Container for appattempt_1473838120587_5156_000001 exited with exitCode:  1
For more detailed output, check application tracking page:http://ip-10-249-9-50.gc.stepstone.com:8088/cluster/app/application_1473838120587_5156Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1473838120587_5156_01_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

The Appmaster.stderr log gives a bit more information:

Log Type: Appmaster.stderr
Log Upload Time: Mon Nov 07 12:59:57 +0000 2016
Log Length: 106
Error: Unable to access jarfile spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.BUILD-SNAPSHOT.jar

On the Spring Cloud Data Flow side, these are the commands I run in the dataflow shell:

app register --type task --name simple_batch_job --uri https://github.com/spring-cloud/spring-cloud-dataflow-samples/raw/master/tasks/simple-batch-job/batch-job-1.0.0.BUILD-SNAPSHOT.jar
task create foo --definition "simple_batch_job"
task launch foo

It's really hard to tell why this error occurs. I'm sure the connection from the dataflow server to YARN works fine, because some files (servers.yml, jars with jobs and utilities) were copied to the standard HDFS location (/dataflow), yet they appear to be inaccessible in some way.

My servers.yml config:

logging:
  level:
    org.apache.hadoop: DEBUG
    org.springframework.yarn: DEBUG
maven:
  remoteRepositories:
    springRepo:
      url: https://repo.spring.io/libs-snapshot
spring:
  main:
    show_banner: false
  hadoop:
    fsUri: hdfs://HOST:8020
    resourceManagerHost: HOST
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: HOST:8030
  datasource:
    url: jdbc:h2:tcp://localhost:19092/mem:dataflow
    username: sa
    password:
    driverClassName: org.h2.Driver

I'd be glad to hear any information or spring-yarn tips & tricks to make this work.

PS: As the Hadoop environment I use Amazon EMR 5.0.

EDIT: Recursive listing of the HDFS path:

drwxrwxrwx   - user hadoop          0 2016-11-07 15:02 /dataflow/apps
drwxrwxrwx   - user hadoop          0 2016-11-07 15:02 /dataflow/apps/stream
drwxrwxrwx   - user hadoop          0 2016-11-07 15:04 /dataflow/apps/stream/app
-rwxrwxrwx   3 user hadoop        121 2016-11-07 15:05 /dataflow/apps/stream/app/application.properties
-rwxrwxrwx   3 user hadoop       1177 2016-11-07 15:04 /dataflow/apps/stream/app/servers.yml
-rwxrwxrwx   3 user hadoop   60202852 2016-11-07 15:04 /dataflow/apps/stream/app/spring-cloud-deployer-yarn-appdeployerappmaster-1.0.0.RELEASE.jar
drwxrwxrwx   - user hadoop          0 2016-11-04 14:22 /dataflow/apps/task
drwxrwxrwx   - user hadoop          0 2016-11-04 14:24 /dataflow/apps/task/app
-rwxrwxrwx   3 user hadoop        121 2016-11-04 14:25 /dataflow/apps/task/app/application.properties
-rwxrwxrwx   3 user hadoop       2101 2016-11-04 14:24 /dataflow/apps/task/app/servers.yml
-rwxrwxrwx   3 user hadoop   60198804 2016-11-04 14:24 /dataflow/apps/task/app/spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.RELEASE.jar
drwxrwxrwx   - user hadoop          0 2016-11-04 14:25 /dataflow/artifacts
drwxrwxrwx   - user hadoop          0 2016-11-07 15:06 /dataflow/artifacts/cache
-rwxrwxrwx   3 user hadoop   12323493 2016-11-04 14:25 /dataflow/artifacts/cache/https-c84ea9dc0103a4754aeb9a28bbc7a4f33c835854-batch-job-1.0.0.BUILD-SNAPSHOT.jar
-rwxrwxrwx   3 user hadoop   22139318 2016-11-07 15:07 /dataflow/artifacts/cache/log-sink-rabbit-1.0.0.BUILD-SNAPSHOT.jar
-rwxrwxrwx   3 user hadoop   12590921 2016-11-07 12:59 /dataflow/artifacts/cache/timestamp-task-1.0.0.BUILD-SNAPSHOT.jar

Solution

  • There's clearly a mix of wrong versions: HDFS has spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.RELEASE.jar, while the error complains about spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.BUILD-SNAPSHOT.jar.

    I'm not sure how you ended up with snapshots, unless you built the distribution manually.

    I'd recommend picking 1.0.2 from http://cloud.spring.io/spring-cloud-dataflow-server-yarn. See "Download and Extract Distribution" in the reference docs. Also delete the old /dataflow directory from HDFS.
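    Clearing the stale deployment cache might look like the sketch below (illustrative only, not from the answer: the hdfs commands assume the user running them has write access to /dataflow, and -skipTrash makes the delete permanent):

    ```shell
    # Stop the dataflow server first, then remove the stale /dataflow
    # directory so the server re-copies matching appmaster jars on the
    # next launch. This is destructive: cached artifacts are lost.
    hdfs dfs -rm -r -skipTrash /dataflow

    # Verify the directory is gone before restarting the server.
    hdfs dfs -ls /
    ```

    After restarting the matching server version, the apps and artifacts directories are recreated under /dataflow on the next deployment, so the RELEASE appmaster jar and the launch command should agree again.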