I'm developing a web app on Tomcat 8 with Maven. I'm using c3p0 to handle connections on the main thread and on two other concurrent threads. My connection manager class asks a DataSource singleton class I've implemented for connections through a synchronized method, like so:
public synchronized Connection getConnection() {
    try {
        return cpds.getConnection();
    } catch (SQLException ex) {
        logger.error("Error while issuing a pooled connection", ex);
    }
    return null;
}
But when I try to use these connections, they either get interrupted:
09:47:17.164 [QuartzScheduler_Worker-4] ERROR com.myapp.providers.DataSource - Error while issuing a pooled connection
java.sql.SQLException: An SQLException was provoked by the following failure: java.lang.InterruptedException
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:62) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:531) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128) ~[c3p0-0.9.1.2.jar:0.9.1.2]
or they close mid-transaction, breaking any statements and result sets that are in use at that time.
I'm configuring the DataSource object like so:
cpds = new ComboPooledDataSource();
cpds.setDriverClass("oracle.jdbc.driver.OracleDriver");
cpds.setJdbcUrl("jdbc:oracle:thin:@xx.xxx.xxx.xxx:1521:XE");
cpds.setUser("username");
cpds.setPassword("password");
// database connection properties
cpds.setInitialPoolSize(10);
cpds.setAcquireIncrement(3);
cpds.setMaxPoolSize(100);
cpds.setMinPoolSize(15);
cpds.setMaxStatements(75);
// connection pool preferences
cpds.setIdleConnectionTestPeriod(60);
cpds.setMaxIdleTime(30000);
cpds.setAutoCommitOnClose(false);
cpds.setPreferredTestQuery("SELECT 1 FROM DUAL");
cpds.setTestConnectionOnCheckin(false);
cpds.setTestConnectionOnCheckout(false);
cpds.setAcquireRetryAttempts(30);
cpds.setAcquireRetryDelay(1000);
cpds.setBreakAfterAcquireFailure(false);
I've also written a small test method that runs in a loop and queries the database n times, but that works fine.
c3p0-0.9.1.2 is very, very old; please consider upgrading to 0.9.5.1, the current production version.
The problem is both clear and not so clear. The clear part is that something is calling interrupt() on client Threads that are waiting to acquire Connections. The not-so-clear part is who is doing that and why.

A guess is that Tomcat itself is doing that because the client Threads are hung too long. If the Threads are hanging at getConnection(), that could be due to a Connection leak and pool exhaustion. We see above how you acquire Connections. Are you vigilant about ensuring that they are reliably close()ed in finally blocks?
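For example, a pattern like the sketch below guarantees the Connection (and any Statement or ResultSet it produced) goes back to the pool even when the query throws. It uses try-with-resources, which gives you the finally-block semantics automatically; the DataSource parameter and the query are just illustrative placeholders, not code from the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class QueryExample {
    // Sketch only: each resource opened in the try header is closed in reverse
    // order when the block exits, normally or via an exception, so the pooled
    // Connection is always handed back to c3p0.
    static void runQuery(DataSource ds) {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1 FROM DUAL");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process each row here
            }
        } catch (SQLException ex) {
            // the close() calls have already run by the time we get here
            ex.printStackTrace();
        }
    }
}

(If you call the singleton getConnection() shown in the question, which returns null on failure, you'd need a null check before the try; letting it throw instead keeps this pattern cleaner.)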
A thing you might try is to set a checkoutTimeout, e.g.
cpds.setCheckoutTimeout( 5000 ); // 5 secs
This won't actually solve the problem if Connection checkouts are hanging. But rather than a problem provoked by mysterious interrupts, you'll see c3p0 TimeoutExceptions instead. That will verify that the issue is long hangs on checkout, though, which would most likely be due to pool exhaustion, either from a Connection leak (missing calls to close()), or simply from a maxPoolSize value too low for your load.
If there does seem to be a Connection leak, please see unreturnedConnectionTimeout and debugUnreturnedConnectionStackTraces for help tracking it down. See also "Configuring to Debug and Workaround Broken Client Applications".
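For reference, those two settings can be applied to the same ComboPooledDataSource configured in the question; the 30-second timeout below is just an illustrative debugging value, not something to leave on in production.

// Debug-only settings: a checked-out Connection not close()d within the
// timeout is destroyed, and the stack trace captured at checkout is logged,
// pointing at the code that leaked it.
cpds.setUnreturnedConnectionTimeout(30);            // seconds; illustrative value
cpds.setDebugUnreturnedConnectionStackTraces(true); // expensive; enable only while debugging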