Before redeploying the application WAR, I checked the xd.lck file in one of the environment paths:
Private property of Exodus: 20578@localhost
jetbrains.exodus.io.LockingManager.lock(LockingManager.kt:89)
I'm testing on both Nginx Unit and Payara Server to rule out the possibility that this is an isolated case with Unit.
And htop shows process 20578:
20578 root 20 0 2868M 748M 7152 S 0.7 75.8 14:05.75 /usr/lib/jvm/zulu-8-amd64/bin/java -cp /
After the redeployment finished successfully, accessing the web application throws:
java.lang.Thread.run(Thread.java:748)
at jetbrains.exodus.log.Log.tryLock(Log.kt:799)
at jetbrains.exodus.log.Log.<init>(Log.kt:120)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:142)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:121)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:10
Checking the same xd.lck file again shows the same content, meaning the lock is not immediately released, contrary to what is described here.
My assumption for this specific case with Payara Server (which is based on GlassFish) is that the server does not kill the previous process even after redeployment has completed, perhaps to allow "zero-downtime" redeployment; I'm not sure, so Payara experts can correct me here.
Checking with htop, process 20578 is still running even after the redeployment.
Since most application servers behave this way, what would be the best solution and/or workaround with Xodus so we don't have to manually delete the lock file of each environment (if it can be deleted at all) every time we redeploy?
The solution is for the Java application to look up the process holding the lock file and send it a kill -15 (SIGTERM) signal, for example, so the old JVM can handle the signal gracefully and close its environments:
// Close all PersistentEntityStores so their xd.lck files are released
entityStoreMap.forEach((dir, entityStore) -> {
    entityStore.getEnvironment().close();
    entityStore.close();
});
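For completeness, here is a minimal sketch of how that could be wired up, assuming a hypothetical entityStoreMap registry like the one above and the lock file content shown earlier ("Private property of Exodus: <pid>@<host>"); the class and method names are illustrative, not part of the Xodus API. A JVM shutdown hook runs the same closing logic when the old process receives SIGTERM, and the new deployment can read the owning PID from xd.lck and send that signal:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import jetbrains.exodus.entitystore.PersistentEntityStore;

public class XodusLockHelper {

    // Hypothetical registry of open stores, mirroring the entityStoreMap used above
    static final Map<String, PersistentEntityStore> entityStoreMap = new ConcurrentHashMap<>();

    // Register once at startup: when the JVM receives SIGTERM (kill -15),
    // the hook closes every store so the xd.lck files are released before exit
    static void registerShutdownHook() {
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                entityStoreMap.forEach((dir, entityStore) -> {
                    entityStore.getEnvironment().close();
                    entityStore.close();
                })));
    }

    // Read the owning PID from an existing xd.lck and ask that process to shut
    // down gracefully; parsing assumes the content shown above,
    // "Private property of Exodus: <pid>@<host>"
    static void terminateLockOwner(Path environmentDir) throws IOException, InterruptedException {
        Path lockFile = environmentDir.resolve("xd.lck");
        if (!Files.exists(lockFile)) {
            return; // no lock file, nothing to do
        }
        for (String line : Files.readAllLines(lockFile)) {
            int colon = line.indexOf(':');
            int at = line.indexOf('@');
            if (line.startsWith("Private property of Exodus") && colon >= 0 && at > colon) {
                String pid = line.substring(colon + 1, at).trim();
                // kill -15 sends SIGTERM, which triggers the shutdown hook in the old JVM
                new ProcessBuilder("kill", "-15", pid).inheritIO().start().waitFor();
                return;
            }
        }
    }
}

Note that kill -15 asks the whole JVM to exit, so on a shared application server like Payara this would also take down anything else deployed in the same process; the hook only guarantees that the environments are closed before the old JVM terminates.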