So I'm on a tiny server with not much RAM to spare, and when I try to run Datomic it gets angry at me:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b5a00000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1662.log
I came across this thread: https://groups.google.com/forum/#!topic/datomic/5_ZPZBFmCJg which says I need to change more than just object-cache-max in my transactor .properties file. Unfortunately it doesn't spell out what else I need to change. Help would be appreciated.
You might read the docs on capacity planning for more context on configuring a Datomic transactor. As the group thread mentions, the -Xms1g and -Xmx1g settings ask for a full gigabyte of heap, and with -Xms1g the JVM tries to commit that entire gigabyte at startup, which is exactly the allocation that fails in your log. The docs I've linked show part of the solution in this case:
You can set the maximum memory available to a JVM process with the -Xmx flag to java (or to bin/transactor).
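For example, you can pass the flag straight to the transactor script. A minimal sketch, assuming your properties file lives at config/my-transactor.properties (substitute your own path):

# run the transactor with a 512MB heap instead of the 1GB default
# (if your version of the script only honors -Xmx, drop the -Xms flag)
bin/transactor -Xms512m -Xmx512m config/my-transactor.properties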
Micro instances are not supported for Datomic deployment, though some are being run with success out in the wild (very low write loads). You might try, for example, a configuration like this:
memory-index-threshold=16m
memory-index-max=64m
object-cache-max=128m
with -Xmx set to 512MB. Getting this working on AWS, etc. may take additional steps, as reported here. The basic answer, though, is that you'll need to decrease the max heap size and experiment with reduced values for each of the other memory settings so that they fit within the smaller heap.
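Concretely, the memory-related portion of a small-box transactor .properties file might look something like the sketch below. The protocol/host/port lines are just the stock dev-template values to show where the settings sit; keep whatever your existing template already has for everything else:

# dev transactor properties for a low-memory box (sketch only; verify against your own template)
protocol=dev
host=localhost
port=4334

# reduced memory settings to fit a 512MB heap
memory-index-threshold=16m
memory-index-max=64m
object-cache-max=128m

Then launch it with the smaller heap via bin/transactor -Xmx512m as shown above, and keep an eye on the transactor logs while you experiment with these values under your actual write load.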