Lead node fails with /tmp/spark-jobserver/filedao/data/jars.data (Permission denied)


SnappyData v0.5

I am logged into Ubuntu as a non-root user, 'foo'. The SnappyData install directory is owned by the 'foo' user and 'foo' group.

I am starting ALL nodes (locator, lead, server) with this script:

SNAPPY_HOME/sbin/snappy-start-all.sh

Locator starts. Server starts. Lead dies with this error.

16/07/21 23:12:26.883 UTC serverConnector INFO JobFileDAO: rootDir is /tmp/spark-jobserver/filedao/data
16/07/21 23:12:26.888 UTC serverConnector ERROR JobServer$: Unable to start Spark JobServer: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at spark.jobserver.JobServer$.start(JobServer.scala:69)
    at io.snappydata.impl.LeadImpl.startAddOnServices(LeadImpl.scala:283)
    at io.snappydata.impl.LeadImpl$.invokeLeadStartAddonService(LeadImpl.scala:360)
    at io.snappydata.ToolsCallbackImpl$.invokeLeadStartAddonService(ToolsCallbackImpl.scala:28)
    at org.apache.spark.sql.SnappyContext$.invokeServices(SnappyContext.scala:1362)
    at org.apache.spark.sql.SnappyContext$.initGlobalSnappyContext(SnappyContext.scala:1340)
    at org.apache.spark.sql.SnappyContext.<init>(SnappyContext.scala:104)
    at org.apache.spark.sql.SnappyContext.<init>(SnappyContext.scala:95)
    at org.apache.spark.sql.SnappyContext$.newSnappyContext(SnappyContext.scala:1221)
    at org.apache.spark.sql.SnappyContext$.apply(SnappyContext.scala:1249)
    at org.apache.spark.scheduler.SnappyTaskSchedulerImpl.postStartHook(SnappyTaskSchedulerImpl.scala:25)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:601)
    at io.snappydata.impl.LeadImpl.start(LeadImpl.scala:129)
    at io.snappydata.impl.ServerImpl.start(ServerImpl.scala:32)
    at io.snappydata.tools.LeaderLauncher.startServerVM(LeaderLauncher.scala:91)
    at com.pivotal.gemfirexd.tools.internal.GfxdServerLauncher.connect(GfxdServerLauncher.java:174)
    at com.gemstone.gemfire.internal.cache.CacheServerLauncher$AsyncServerLauncher.run(CacheServerLauncher.java:1003)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /tmp/spark-jobserver/filedao/data/jars.data (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at spark.jobserver.io.JobFileDAO.init(JobFileDAO.scala:90)
    at spark.jobserver.io.JobFileDAO.<init>(JobFileDAO.scala:30)
    ... 22 more
16/07/21 23:12:26.891 UTC Distributed system shutdown hook INFO snappystore: VM is exiting - shutting down distributed system

Do I need to be a different user to start the lead node? Use 'sudo'? Configure a property to point Spark at a directory 'foo' has permission to write to? Create this directory myself ahead of time?


Solution

  • It seems that the current owner of /tmp/spark-jobserver is some other user. Check the permissions on that directory, then delete it (or change its ownership) so the lead node can recreate it on the next start.

    If multiple users will be running leads on the same machine, you can configure the job-server directories to be elsewhere, as mentioned here. The relevant properties can be found in the application.conf source. This is probably more trouble than it's worth, so for now it is easier to ensure that a single user starts the lead nodes on a given machine.

    We will fix the default to be inside the work/ directory in the next release (SNAP-69).
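    As a concrete sketch of the single-user fix above: inspect the ownership of the scratch directory named in the error, remove it, and restart as 'foo'. The path and script names come from the question; whether you need sudo depends on who owns the directory.

    ```shell
    # See who owns the job-server scratch directory from the error message.
    ls -ld /tmp/spark-jobserver

    # If it belongs to another user, remove it (sudo may be needed) so the
    # lead node can recreate it with the right ownership on the next start.
    sudo rm -rf /tmp/spark-jobserver

    # Restart the cluster as 'foo'.
    "$SNAPPY_HOME"/sbin/snappy-stop-all.sh
    "$SNAPPY_HOME"/sbin/snappy-start-all.sh
    ```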
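    For the multi-user route, a hedged sketch of the override: spark-jobserver's application.conf keeps the file DAO root under `spark.jobserver.filedao.rootdir`. The key layout below follows that file; the example path is illustrative, and the exact key and how SnappyData's lead picks it up may differ by version, so check the application.conf shipped with your build.

    ```hocon
    spark {
      jobserver {
        filedao {
          # Default is /tmp/spark-jobserver/filedao/data; point it at a
          # directory the lead-node user owns (path here is illustrative).
          rootdir = /home/foo/snappydata/jobserver/filedao/data
        }
      }
    }
    ```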