We're using HSQLDB for a number of JUnit integration tests. For one particular suite I need to load a large amount of data to verify some algorithms we run on the database. Each test in the suite inserts a large batch, tests the algorithm, and then deletes all records. Unfortunately, HSQLDB eventually throws an OutOfMemoryError, even though all records are deleted each time and the maximum number of records in the database at any given moment never grows.
Here's a minimal JUnit test that reproduces this. As you can see, it just inserts and then deletes a batch of rows, repeatedly. What is HSQLDB keeping in memory after the deletes that causes the error? What can I change so I can run the insert-delete cycle indefinitely (or at least often enough to execute all tests)?
package mypackage;

import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Verifying HSQLDB OutOfMemoryError
 *
 * @author Ricardo van den Broek
 */
public class HsqlDbTest {

    @Test
    public void test() throws Exception {
        executeSql("CREATE TABLE MY_TABLE (MY_COLUMN CLOB)");
        String str = "TESTTESTTESTTESTTESTTESTTESTTEST";
        String x = String.format("INSERT INTO MY_TABLE VALUES('%S')", str);

        for (int i = 0; i < 1000; i++) {
            System.out.println("Starting batch: " + i);
            for (int j = 0; j < 10000; j++) {
                executeSql(x);
            }
            System.out.println("Inserted: " + getCount());
            executeSql("DELETE FROM MY_TABLE");
            System.out.println("After delete: " + getCount());
        }
    }

    private void executeSql(String sql) throws SQLException {
        Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:a");
        PreparedStatement ps = c.prepareStatement(sql);
        ps.executeUpdate();
        ps.close();
        c.close();
    }

    private long getCount() throws Exception {
        Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:a");
        PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM MY_TABLE");
        ResultSet resultSet = ps.executeQuery();
        if (resultSet.next()) {
            long count = resultSet.getLong(1);
            ps.close();
            c.close();
            return count;
        }
        throw new Exception("Should not happen");
    }
}
Output:
Starting batch: 0
Inserted: 10000
After delete: 0
Starting batch: 1
# ...
# Omitting some repetition
# ...
Inserted: 10000
After delete: 0
Starting batch: 10
java.sql.SQLException: java.lang.OutOfMemoryError: Java heap space
Note: I'm aware that I could increase the memory limit on the unit tests or shut the database down between tests, but until someone can give me a reasonable explanation of why this happens, I feel entitled to expect that emptying an in-memory HSQLDB database actually frees the memory it was using.
This phenomenon is present in versions older than 2.5.1.
This is not a bug. In a mem: (memory-only) database with no files, when the column type is CLOB (as opposed to VARCHAR), the String objects are stored in a separate in-memory LOB store, which is not cleared as rows are deleted. You need to perform a CHECKPOINT from time to time to clear the LOB store: add executeSql("CHECKPOINT") at the end of each run.
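With that change, the end of each batch in the test above would issue a statement sequence like this (a sketch; CHECKPOINT is a standard HSQLDB statement, executed here through the question's own executeSql helper):

```sql
-- end of each batch:
DELETE FROM MY_TABLE;
-- releases the in-memory LOB store holding the deleted CLOB values
CHECKPOINT;
```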
Alternatively, you can declare the column as LONGVARCHAR, or as VARCHAR(n) with a large enough size, to avoid needing the CHECKPOINT statement.
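For example, the table definition in the test could be changed to either of the following (a sketch; the VARCHAR size shown is an arbitrary choice that is merely large enough for the test strings):

```sql
-- stored with the row, so the data is freed when the row is deleted
CREATE TABLE MY_TABLE (MY_COLUMN LONGVARCHAR);

-- or a sized VARCHAR:
CREATE TABLE MY_TABLE (MY_COLUMN VARCHAR(1000));
```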
This issue does not arise in file: databases, which use a file for LOB storage even when the table is a MEMORY table. In these databases, a CHECKPOINT releases the space taken up by deleted blocks inside the .lobs file; that space is then reused for new LOBs.
In the latest version, 2.5.1, automatic checkpoints are also performed when lobs are deleted from mem: databases. Here, the hsqldb.log_size setting applies to the total amount (in MB) of deleted lob data that triggers a checkpoint.
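As a sketch, this threshold can be lowered via HSQLDB's SET FILES LOG SIZE statement (the SQL counterpart of the hsqldb.log_size property; the value 5 below is an arbitrary example, not a recommended setting):

```sql
-- trigger an automatic checkpoint after ~5 MB of deleted lob data
SET FILES LOG SIZE 5;
```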