I've got a Dropwizard service that I've set up so that any Liquibase migrations run automatically at app startup.
When the service first starts up, I call Liquibase::listLocks() for informational purposes. As a side effect, this call also creates the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables if they don't exist.
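For context, the call looks roughly like this (a minimal sketch; the changelog name and how the Connection is obtained from the Dropwizard-managed data source are hypothetical):

import java.sql.Connection;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.lockservice.DatabaseChangeLogLock;
import liquibase.resource.ClassLoaderResourceAccessor;

public final class LockLister {

    // Logs current Liquibase locks. Internally, listLocks() first runs
    // checkLiquibaseTables(), which creates DATABASECHANGELOG and
    // DATABASECHANGELOGLOCK when they are missing -- that INSERT is where
    // the ORA-12838 below surfaced.
    static void logLocks(Connection connection, String changeLogFile) throws Exception {
        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(connection));
        Liquibase liquibase = new Liquibase(changeLogFile,
                new ClassLoaderResourceAccessor(), database);
        for (DatabaseChangeLogLock lock : liquibase.listLocks()) {
            System.out.printf("Lock %d held by %s since %s%n",
                    lock.getId(), lock.getLockedBy(), lock.getLockGranted());
        }
    }
}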
On one startup against a new Oracle ATP database, it threw the following error while creating the lock table during that listLocks() call:
ERROR 2021-07-08 17:59:22,546 [main] com.blah.utils.LiquibaseMigrator: Error listing locks!
liquibase.exception.DatabaseException: ORA-12838: cannot read/modify an object after modifying it in parallel
[Failed SQL: (12838) INSERT INTO DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0)]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:430)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:87)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:159)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:139)
at liquibase.lockservice.StandardLockService.init(StandardLockService.java:128)
at liquibase.Liquibase.checkLiquibaseTables(Liquibase.java:1176)
at liquibase.Liquibase.listLocks(Liquibase.java:1193)
at com.blah.utils.LiquibaseMigrator.dbLocksExist(LiquibaseMigrator.java:xxx)
at com.blah.utils.LiquibaseMigratorExecutor.dbLocksExist(LiquibaseMigratorExecutor.java:xxx)
at com.blah.MyApp.migrateDatabase(MyApp.java:xxx)
at com.blah.MyApp.run(MyApp.java:xxx)
at com.blah.MyApp.run(MyApp.java:xxx)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:59)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:98)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:94)
at com.blah.MyApp.main(MyApp.java:xxx)
Caused by: java.sql.SQLException: ORA-12838: cannot read/modify an object after modifying it in parallel
The service would throw that exception, the pod would restart, and the cycle repeated. I manually inserted the row it was trying to create into the lock table, and the next startup succeeded.
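For reference, the manual fix was just the row from the failed SQL above, followed by a commit:

INSERT INTO DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0);
COMMIT;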
There was only one pod running at the time, so I don't think it's related to this "Race Condition on startup" bug I found, but I'm at a loss as to why or how it could have gotten itself into this "cannot read/modify an object after modifying it in parallel" state.
ORA-12838 is raised when a transaction reads or modifies a table after modifying it with parallel (or direct-path) DML, without a commit in between. Make sure you are connecting to a LOW service, which does not use parallel DML.
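In Dropwizard terms, that means pointing the pooled data source at the _low TNS alias from the ATP wallet. A minimal sketch, assuming the stock dropwizard-db DataSourceFactory (the alias, credentials, and wallet path are hypothetical; normally this all lives in config.yml):

import io.dropwizard.db.DataSourceFactory;

public final class AtpDataSourceConfig {

    // Normally these values come from the "database:" block of config.yml;
    // they are set programmatically here only for illustration.
    static DataSourceFactory lowServiceFactory() {
        DataSourceFactory db = new DataSourceFactory();
        db.setDriverClass("oracle.jdbc.OracleDriver");
        db.setUser("myuser");
        db.setPassword("secret");
        // The _low alias from the wallet's tnsnames.ora runs statements
        // serially; higher-parallelism aliases such as _high can trip
        // ORA-12838 during Liquibase's create-table-then-insert sequence.
        db.setUrl("jdbc:oracle:thin:@mydb_low?TNS_ADMIN=/path/to/wallet");
        return db;
    }
}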