Tags: c++, multithreading, atomic, memory-mapped-files

C++: Fetch_add on memory mapped file


I opened a file using the Boost memory-mapped file library. Is it possible to use a "fetch_add" (the value at a certain position is read, another value is added to it, and the result is written back to that very same position, all atomically) on this mapped file?

If multiple threads write to it in parallel, there could be problems without atomicity involved.

The file is in binary format and contains ints or doubles (depends on specific file).

I also tried locks/mutexes, but they always slow my program down when using multiple threads. The time spent in the locked regions is just too big compared to the rest of the algorithm, and the threads block each other.

Are there any better ways so that multiple threads can write to a mapped file with high performance?

Thanks. Laz


Solution

  • Are there multiple processes mapping this file, or just multiple threads?

    If multiple processes are accessing this memory mapped file concurrently, you'll have to do your own (inter-process) synchronization.

    If it's only multiple threads, then you can atomically update the memory the same way you'd do it for any other word of memory, with the caveat that you can't use std::atomic (because the bytes correspond directly to a section of the file, not to std::atomic objects). So you must resort to your specific platform's support for atomically modifying memory, namely lock xadd on x86, via e.g. InterlockedIncrement on Win32 (or __sync_fetch_and_add with g++). Be careful to ensure the memory-ordering semantics (and the return value!) are what you expect.

    Wrapping the platform-specific functions in a platform-independent way (if you need that) can be a bit of a hassle, though, and so in that case I'd suggest keeping the concurrently-accessed data in separate std::atomic variables, then updating the corresponding file bytes just once at the end.

    Note that all of this is orthogonal to memory mapping -- the OS backs the memory-mapped file with pages that it swaps in and out on demand, and the virtual-memory machinery that manages those pages is the same machinery that handles any other (non-mapped) pages. Hence the pages themselves can be modified by multiple threads without worrying about anything other than the usual (application-level) data races.