Tags: java, memory-mapped-files, chronicle, chronicle-map

How to achieve transaction behaviour with file persisted Chronicle Map


Hi all,

I am trying to store time series in Chronicle Map. The series are split into chunks, and every chunk is a separate map entry. Does anybody know what will happen if the JVM exits while an entry is being written to the Chronicle Map (i.e. while a BytesMarshaller is serialising)?

Would the memory-mapped file end up with corrupt data? Is there a workaround?
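
For context, a minimal sketch of the kind of setup described above: a file-persisted Chronicle Map (3.x builder API) with one entry per time-series chunk. The key type, the byte[] chunk encoding, the sizes and the file name are illustrative assumptions, not taken from the question.

```java
import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;

public class TimeSeriesStore {
    public static void main(String[] args) throws Exception {
        // Hypothetical layout: one serialised chunk of samples per entry,
        // keyed by a long chunk id (e.g. epoch hour). File name, sizes and
        // the byte[] encoding are illustrative assumptions.
        try (ChronicleMap<Long, byte[]> chunks = ChronicleMap
                .of(Long.class, byte[].class)
                .name("time-series-chunks")
                .averageValueSize(8 * 3600)       // ~3600 doubles per chunk
                .entries(100_000)                 // expected number of chunks
                .createPersistedTo(new File("series.dat"))) {

            byte[] encodedChunk = new byte[8 * 3600];
            // ... encode the chunk's samples into encodedChunk ...
            chunks.put(42L, encodedChunk);        // each put commits atomically
        }
    }
}
```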


Solution

  • When a new entry is put into Chronicle Map, it is committed by a single atomic operation, i.e. if the JVM exits at an arbitrary moment during a put operation, you might observe the following effects:

    • map.size() is out of sync with the actual data, by ±1
    • Memory is leaked (the memory that was reserved for the entry being put)

    But you are guaranteed not to have:

    • A corrupted version of the entry that was being put when the JVM exited: neither a wrong key nor a wrong value for the correct key becomes observable in any way, whether by querying the key or by iterating the map
    • Corruption of any other entry that was already present in the map at the moment of the JVM exit
    • Any contract or behavioural change for a Chronicle Map instance mapped to the same file, whether running in parallel with the instance in the JVM that exited, or mapped only afterwards (e.g. when the JVM is started up again). In particular, you will be able to put an entry with the key that was being put when the JVM exited, without any problems (see the sketch after this list).
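
    To illustrate that last guarantee, a sketch of reopening the same persisted file after an unclean JVM exit and re-putting the key; the file name, key/value types and chunk id are the same illustrative assumptions as in the sketch above.

```java
import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;

public class ReopenAfterCrash {
    public static void main(String[] args) throws Exception {
        // createPersistedTo() maps the existing file rather than recreating it,
        // so the entries committed before the exit are still there.
        try (ChronicleMap<Long, byte[]> chunks = ChronicleMap
                .of(Long.class, byte[].class)
                .name("time-series-chunks")
                .averageValueSize(8 * 3600)
                .entries(100_000)
                .createPersistedTo(new File("series.dat"))) {

            // The interrupted put left no corrupted entry, so the same key
            // can simply be put again.
            byte[] encodedChunk = new byte[8 * 3600];
            chunks.put(42L, encodedChunk);
        }
    }
}
```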

    On the other hand, there is an important caveat for Chronicle Map versions 3.x: after such a JVM exit, the Chronicle Map segment to which the entry was being put remains locked. You could erase the lock state manually, or wait until a corresponding API is added. This is not the case for Chronicle Map 2.x, which waits for 2 seconds and then grabs the lock.
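
    For what it's worth, later Chronicle Map 3.x releases did add a recovery API on the builder (createOrRecoverPersistedTo / recoverPersistedTo), intended to repair stale state such as a held segment lock after an unclean exit; whether it is available depends on the version you use. A minimal sketch, assuming a release where it exists, with the same illustrative file name and types as above:

```java
import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;

public class RecoverAfterCrash {
    public static void main(String[] args) throws Exception {
        // createOrRecoverPersistedTo() creates the file if it is absent;
        // otherwise it opens the existing file and attempts to repair state
        // left behind by an unclean JVM exit (assumed to include lock state).
        try (ChronicleMap<Long, byte[]> chunks = ChronicleMap
                .of(Long.class, byte[].class)
                .name("time-series-chunks")
                .averageValueSize(8 * 3600)
                .entries(100_000)
                .createOrRecoverPersistedTo(new File("series.dat"))) {

            chunks.put(42L, new byte[8 * 3600]);
        }
    }
}
```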