
Buffer overflow issue when the vault has more than 200 rows


We have developed a CorDapp that creates obligations as rows in a vault table. If the table has fewer than 200 rows, the application works fine. However, if there are more than 200 rows, execution fails with the following error:

Please specify a PageSpecification as there are more results [201] than the default page size [200]

After increasing the page size from the default 200 to 400, I get a Java heap out-of-memory error. I tried increasing the node's heap size from the default 512m to 1024m, but the issue persists and I now get the following error from the H2 database: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "MVStore background writer nio:C:/Users/Administrator/Desktop/Settlement/corda-settlement/build/nodes/Bob/persistence.mv.db"

I also tried looking for a solution in the H2 documentation, but could not find one.


Solution

  • Vault queries fail fast by default when no PageSpecification has been supplied and the result set exceeds the default page size. See https://docs.corda.net/releases/release-V2.0/api-vault-query.html?highlight=pagination#pagination

    Increasing the heap size in proportion to the query size should scale adequately (i.e. you should not see any out-of-memory errors). It would be useful for us to know how large your transactions are (e.g. the size of the transaction, including the contract state, which is stored as a blob in the transactions table of your H2 database) so that we can run some internal benchmarks and try to reproduce the same situation.
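    Rather than raising the page size (and the heap) to match the row count, the usual pattern is to walk the vault page by page so only one page of states is on the heap at a time. Below is a minimal, self-contained sketch of that loop in Java. The hypothetical `fetchPage` stands in for `vaultService.queryBy(criteria, new PageSpecification(pageNumber, pageSize))`, and the in-memory list stands in for the vault table; the real Corda criteria and state types are omitted:

    ```java
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class PagedVaultQuery {
        // Simulated vault of 450 rows; in a real CorDapp these rows live in the
        // node's H2 vault tables and are fetched via vaultService.queryBy(...).
        static final List<String> VAULT = IntStream.rangeClosed(1, 450)
                .mapToObj(i -> "state-" + i)
                .collect(Collectors.toList());

        // Hypothetical stand-in for queryBy(criteria, new PageSpecification(pageNumber, pageSize)).
        // Note that Corda page numbers start at 1 (DEFAULT_PAGE_NUM).
        static List<String> fetchPage(int pageNumber, int pageSize) {
            int from = (pageNumber - 1) * pageSize;
            if (from >= VAULT.size()) return List.of();
            return VAULT.subList(from, Math.min(from + pageSize, VAULT.size()));
        }

        // Walk the vault one page at a time so only pageSize rows sit on the
        // heap at once, instead of one query with an ever-larger page size.
        static int processAllPaged(int pageSize) {
            int processed = 0;
            int pageNumber = 1;
            long total = VAULT.size(); // Corda reports this as totalStatesAvailable
            while ((long) (pageNumber - 1) * pageSize < total) {
                List<String> page = fetchPage(pageNumber, pageSize);
                processed += page.size(); // process the page, then let it be GC'd
                pageNumber++;
            }
            return processed;
        }

        public static void main(String[] args) {
            System.out.println(processAllPaged(200)); // prints 450
        }
    }
    ```

    With this loop, 450 rows are consumed in three pages of at most 200 states each, so memory use is bounded by the page size rather than by the total row count.
    
    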