apache-spark, spark-jobserver

CONTEXT_ID added in SJS 0.9.0 is being set as null in the table


I'm trying to catch up with the new SJS 0.9.0 in my application. Once the context is created and I try to submit a job, this happens:

19/04/10 21:45:06 ERROR JobDAOActor: About to restart actor due to exception:
org.postgresql.util.PSQLException: ERROR: null value in column "CONTEXT_ID" violates not-null constraint
  Detail: Failing row contains (c144684e-3dad-459d-acff-2ac353709092, SJS_512_MB_shared_prioritized@1, 1, spark.jobserver.DPAASExecutor, 2019-04-10 21:45:06.602, null, null, null, null, null, null).
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1998)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:570)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:420)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:366)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction$$anonfun$nativeUpsert$1.apply(JdbcActionComponent.scala:560)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction$$anonfun$nativeUpsert$1.apply(JdbcActionComponent.scala:557)
    at slick.jdbc.JdbcBackend$SessionDef$class.withPreparedStatement(JdbcBackend.scala:347)
    at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:407)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl.preparedInsert(JdbcActionComponent.scala:498)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction.nativeUpsert(JdbcActionComponent.scala:557)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction.f$1(JdbcActionComponent.scala:540)
    at slick.driver.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction.run(JdbcActionComponent.scala:545)
    at slick.driver.JdbcActionComponent$SimpleJdbcDriverAction.run(JdbcActionComponent.scala:32)
    at slick.driver.JdbcActionComponent$SimpleJdbcDriverAction.run(JdbcActionComponent.scala:29)
    at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.liftedTree1$1(DatabaseComponent.scala:237)
    at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.run(DatabaseComponent.scala:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

This is the feature that was added in SJS 0.9.0: https://github.com/spark-jobserver/spark-jobserver/pull/1058

Can you please explain why this happens? Should I include some extra property like CONTEXT_ID when submitting the job, given that it was only added in SJS 0.9.0? I have gone through the release notes at https://github.com/spark-jobserver/spark-jobserver/releases/tag/v0.9.0. Or does the job server take care of the CONTEXT_ID itself?
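
For reference, this is roughly how I submit the job against the SJS REST API. It is only a sketch: the host, port and appName are placeholders, while the classPath and context names are taken from the error above. Note that nothing on the client side passes a CONTEXT_ID:

    import java.net.{HttpURLConnection, URL}
    import scala.io.Source

    object SubmitJob {
      def main(args: Array[String]): Unit = {
        // POST /jobs with the usual query parameters; the job's Typesafe config
        // goes in the request body. No CONTEXT_ID is sent from the client --
        // the server is expected to fill that column in itself.
        val url = new URL(
          "http://localhost:8090/jobs" +
            "?appName=my-app" +
            "&classPath=spark.jobserver.DPAASExecutor" +
            "&context=SJS_512_MB_shared_prioritized")
        val conn = url.openConnection().asInstanceOf[HttpURLConnection]
        conn.setRequestMethod("POST")
        conn.setDoOutput(true)
        conn.getOutputStream.write("input.string = a b c".getBytes("UTF-8"))
        println(Source.fromInputStream(conn.getInputStream).mkString)
        conn.disconnect()
      }
    }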


Solution

  • It was actually because we were using the older SJS 0.8.0 jar in our code, while only the job_server directory had the SJS 0.9.0 jar. The old jar still went through manager_start.sh, which is required only in 0.8.0 and not in 0.9.0. After replacing it with the 0.9.0 jar everywhere, it worked fine! (A sketch showing why the jar version matters follows below.)
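
Job classes compile against the job server's API jar, which is why mixing the 0.8.0 and 0.9.0 jars breaks things. As an illustration, a minimal job against the classic spark.jobserver.SparkJob trait might look like this (a sketch; the object name and config key are made up):

    import com.typesafe.config.Config
    import org.apache.spark.SparkContext
    import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

    // Minimal word-count job: SparkJob, SparkJobValid and SparkJobValidation
    // all come from the job server jar on the compile classpath, so that jar
    // has to line up with the version of the server that runs the job.
    object WordCountExample extends SparkJob {
      override def validate(sc: SparkContext, config: Config): SparkJobValidation =
        SparkJobValid

      override def runJob(sc: SparkContext, config: Config): Any =
        sc.parallelize(config.getString("input.string").split(" ").toSeq)
          .map(word => (word, 1))
          .reduceByKey(_ + _)
          .collectAsMap()
    }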