Tags: real-time, low-latency, chronicle, chronicle-queue

Chronicle: How to optimize memory-mapped files for low-latency?


I'm using Chronicle to transfer vast amounts of data from one JVM to another. The problem is that I notice a lot of jitter in my benchmarks. My knowledge of memory-mapped files is somewhat limited, but I do know that the OS swaps pages back and forth between memory and disk.

How do I configure those pages for maximum performance when using Chronicle, in my case for less jitter and the lowest possible latency? Do they need to be big or small? Do they need to be many or few?
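
For context, the transfer is roughly of this shape (a simplified sketch rather than my actual code; the queue path and payload are placeholders):

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class TransferSketch {
    public static void main(String[] args) {
        // Both JVMs map the same queue directory; one appends, the other tails.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-dir").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("some serialized payload");

            ExcerptTailer tailer = queue.createTailer();
            System.out.println(tailer.readText());
        }
    }
}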

Here is what I currently have on my Ubuntu box:

$ cat /proc/meminfo | grep Huge
AnonHugePages:      2048 kB
ShmemHugePages:        0 kB
HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

Solution

  • Assuming you are on Linux, you can enable sparse files with useSparseFiles(true) on the builder (a minimal sketch follows at the end of this answer).

    You can also use a faster drive, or /dev/shm, to reduce outliers.

    There is an asynchronous mode in the closed-source version; however, you can get most outliers well below 80 microseconds without it.

    Chronicle Queue doesn't use huge pages.

    Here is a chart I created when I was comparing it to Kafka, writing to a Corsair MP600 Pro XT.

    http://blog.vanillajava.blog/2022/01/benchmarking-kafka-vs-chronicle-for.html

    NOTE: This is the latency for two hops, writing and reading an object of around 220 bytes (with serialization).

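    A minimal sketch of the builder configuration described above (class and method names taken from Chronicle Queue's open-source API; the /dev/shm path is only an example location):

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

    public class SparseQueueExample {
        public static void main(String[] args) {
            // Sparse files avoid pre-allocating the full queue file (Linux only).
            SingleChronicleQueueBuilder builder = ChronicleQueue
                    .singleBuilder("/dev/shm/my-queue")   // example path on tmpfs
                    .useSparseFiles(true);

            try (ChronicleQueue queue = builder.build()) {
                queue.acquireAppender().writeText("hello");
            }
        }
    }

    Keeping the queue directory on /dev/shm (tmpfs) means the mapped pages are never written back to a physical drive, which removes the drive-related outliers mentioned above.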