What I would like to do
I need to use direct memory to avoid the GC moving things around, and I would like to enable huge pages for those allocations.
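For reference, this is the standard allocation path in question (the class is just scaffolding); the goal is to have the memory behind this call backed by huge pages:

    import java.nio.ByteBuffer;

    public class DirectAlloc {
        public static void main(String[] args) {
            // Off-heap allocation: the GC never moves this memory,
            // but -XX:+UseLargePages has no effect on it (see below).
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB
            buf.putLong(0, 42L);
        }
    }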
So far
The flag -XX:+UseLargePages works fine when using heap buffers (non-direct ByteBuffers), but has no effect when using DirectByteBuffers. I have also tried MappedByteBuffers backed by a hugetlbfs filesystem. This works, but raises a number of issues of its own, so I'm looking for a different solution.
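For completeness, here is a minimal sketch of the hugetlbfs approach that does work. The mount point /mnt/hugepages is an assumption (e.g. mount -t hugetlbfs none /mnt/hugepages), and the mapping length must be a multiple of the huge page size:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class HugetlbfsMap {
        public static void main(String[] args) throws Exception {
            // One huge page is 2 MB on most x86_64 systems; hugetlbfs
            // rejects mappings that are not a multiple of that size.
            long size = 512L * 2 * 1024 * 1024; // 512 huge pages = 1 GB
            try (RandomAccessFile file = new RandomAccessFile("/mnt/hugepages/buf", "rw");
                 FileChannel channel = file.getChannel()) {
                // The kernel backs this mapping with huge pages because
                // the file lives on a hugetlbfs mount.
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
                buf.putLong(0, 42L);
            }
        }
    }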
Config: CentOS release 6.3, HotSpot, JDK 1.7
[EDIT]
Looking at the HotSpot source code, Unsafe.allocateMemory is implemented with a plain malloc, where shmget/shmat or mmap would be needed to use huge pages.
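So one conceivable workaround is to skip Unsafe entirely and call mmap(2) with MAP_HUGETLB yourself. The sketch below does this through JNA (an assumption on my part; a small JNI stub would work the same way). The flag values are the Linux x86_64 ones, and the hugetlb pool must be configured beforehand (vm.nr_hugepages):

    import com.sun.jna.Native;
    import com.sun.jna.Pointer;

    public class HugeMmap {
        static {
            Native.register("c"); // direct-map the native methods to libc
        }

        // Direct binding to libc mmap(2)
        private static native Pointer mmap(Pointer addr, long length,
                                           int prot, int flags, int fd, long offset);

        // Linux x86_64 constant values
        private static final int PROT_READ     = 0x1;
        private static final int PROT_WRITE    = 0x2;
        private static final int MAP_PRIVATE   = 0x02;
        private static final int MAP_ANONYMOUS = 0x20;
        private static final int MAP_HUGETLB   = 0x40000; // Linux >= 2.6.32

        // bytes should be a multiple of the huge page size (2 MB on most x86_64)
        public static long allocateHuge(long bytes) {
            Pointer p = mmap(null, bytes, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (Pointer.nativeValue(p) == -1L) { // MAP_FAILED
                throw new OutOfMemoryError("mmap with MAP_HUGETLB failed");
            }
            return Pointer.nativeValue(p);
        }
    }

The address that comes back can then be used through Unsafe get/put calls, or wrapped into a ByteBuffer via the package-private DirectByteBuffer(long, int) constructor with reflection, though both are obviously unsupported territory.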
[EDIT] Why non-heap memory
We are in a NUMA context, for an in-memory database, with a lot of long-lived objects. The JVM does not partition the old generation when the UseNUMA flag is on. Using direct memory allows us to keep the memory close to the threads that need it.
Benchmarking obviously played a large role in the decision to use DirectByteBuffers. I'm not asking whether I should be using DirectByteBuffers or not; I'm looking for an answer to the question above.
For those interested, the link to the Oracle bug report.
The link to the corresponding OpenJDK ticket, closed as "won't fix" so far. On Linux, the Transparent Huge Pages (THP) feature might help, even though it comes with its own set of issues.