I'm facing a design issue regarding mmapping an in-RAM file (created in a tmpfs directory). The file is around 400 MB long, which in the worst case means around 100k pages (unless the kernel chooses to use transparent huge pages of 2 MB each, in which case it drops to around 200).
I don't know whether to keep the mapping open the whole time, or to mmap the file, munmap it when done, and then mmap it again when I need it later. The file will be used in avalanches: it may be used several times over a few minutes, and then not used again for hours.
I'm not really worried about the performance, but I'm curious to know the performance cost of mmapping and munmapping a big section consisting of thousands of pages dozens of times, milliseconds apart. What work must the kernel do to set up the mapping, set the permissions of each page, and so on?
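For concreteness, here is a minimal sketch of the map/use/unmap pattern I'm considering (the path `/dev/shm/bigfile` is a placeholder for my real tmpfs file):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical tmpfs-backed file; ~400 MB in my real case. */
    int fd = open("/dev/shm/bigfile", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    size_t size = (size_t)lseek(fd, 0, SEEK_END);

    /* Pattern under consideration: map, use, unmap on every burst. */
    for (int burst = 0; burst < 10; burst++) {
        void *p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* ... read the data ... */
        munmap(p, size);
    }
    close(fd);
    return 0;
}
```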
This depends heavily on the specific environment your program operates in.
The kernel maintains a cache called the "page cache", where recently mapped pages can sit undisturbed indefinitely after being unmapped by userspace programs. They still occupy memory, but can be discarded (after being written back to disk, if dirty) whenever that memory is needed. If you open htop or similar tools, you can see how much of your memory is being used for the cache.
The free command also shows this information in text form:
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       5.4Gi        22Gi       164Mi       4.0Gi        25Gi
Swap:           31Gi          0B        31Gi
When you map a file, unless explicitly requested (e.g. with mmap's MAP_POPULATE flag), it is usually not read into memory immediately, but on demand as each page is accessed. The first access to a new mapping generates a page fault, and only then does the kernel load the actual data into memory. Once this is done, the pages are also kept in the page cache.
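You can observe this with getrusage(2), which reports the number of page faults the process has taken. A minimal sketch, again assuming the placeholder path /dev/shm/bigfile (on tmpfs these are cheap *minor* faults, since the data is already in memory; a cold disk-backed file would show up as major faults instead):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

/* Return the number of minor (soft) page faults taken so far. */
static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void) {
    int fd = open("/dev/shm/bigfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    size_t size = (size_t)lseek(fd, 0, SEEK_END);
    size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

    unsigned char *p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long before = minor_faults();
    volatile unsigned char sum = 0;
    for (size_t off = 0; off < size; off += pagesz)
        sum += p[off];                /* first touch faults each page in */
    (void)sum;
    printf("touching %zu pages took ~%ld page faults\n",
           size / pagesz, minor_faults() - before);

    munmap(p, size);
    close(fd);
    return 0;
}
```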
Therefore, if you write a program that maps a large file, uses it, unmaps it, and then re-maps it to re-use it within a short amount of time, the second time around the pages will almost certainly still be in the page cache. In this case the only work the kernel needs to do is fill in the corresponding page table entries, which is fast.
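A quick way to convince yourself is to time two consecutive map-touch-unmap rounds. This is only a sketch (the path is again a placeholder): on a tmpfs file both rounds should be similarly fast, since the data never leaves the page cache, while for a cold disk-backed file the first round would typically be noticeably slower.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

/* Walk the mapping, touching one byte per page, and return seconds taken. */
static double touch_all(const unsigned char *p, size_t size, size_t pagesz) {
    struct timespec a, b;
    volatile unsigned char sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (size_t off = 0; off < size; off += pagesz)
        sum += p[off];
    clock_gettime(CLOCK_MONOTONIC, &b);
    (void)sum;
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    int fd = open("/dev/shm/bigfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    size_t size = (size_t)lseek(fd, 0, SEEK_END);
    size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

    for (int round = 0; round < 2; round++) {
        unsigned char *p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        printf("round %d: touched all pages in %.3f s\n",
               round, touch_all(p, size, pagesz));
        munmap(p, size);  /* drops the PTEs, not the page cache */
    }
    close(fd);
    return 0;
}
```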
However, the more time passes between the unmap and the re-map, the higher the chance that the pages will be evicted from the page cache (and, if dirty, written back to disk first). This is because almost any file-backed mapping uses the page cache, and memory (RAM) is limited. Programs that run alongside yours also continuously request new memory, and the kernel may have to reclaim some of it by discarding pages from the page cache. (Note that for a tmpfs file the pages *are* the file itself: they cannot simply be dropped, only evicted to swap.)
At the end of the day, whether the operation you describe (continuously unmapping and re-mapping) is fast or slow really depends on how much memory you have available and how busy the system currently is. What is definitely fast, though, is to always keep the file mapped, and possibly lock it in RAM (mlock(2)) so that it is not swapped out. Of course, whether you can do this or not depends on your specific case, but purely from a performance point of view it would be the best option.
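For completeness, here is what the keep-it-mapped option might look like, sketched with the same placeholder path. Note that mlock(2) is subject to RLIMIT_MEMLOCK, so locking ~400 MB will likely require raising that limit or CAP_IPC_LOCK:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/shm/bigfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    size_t size = (size_t)lseek(fd, 0, SEEK_END);

    /* Map once, then pin the pages: mlock() faults them in immediately
     * and keeps them resident (subject to RLIMIT_MEMLOCK / CAP_IPC_LOCK). */
    void *p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    if (mlock(p, size) != 0)
        perror("mlock");  /* likely fails for 400 MB without raised limits */

    /* ... keep p around for the lifetime of the process ... */

    munlock(p, size);
    munmap(p, size);
    close(fd);
    return 0;
}
```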
The kernel's behavior regarding cached memory can also be tuned through a few sysctl knobs, in particular:
/proc/sys/vm/dirty_background_bytes
/proc/sys/vm/dirty_background_ratio
/proc/sys/vm/dirty_bytes
/proc/sys/vm/dirty_expire_centisecs
/proc/sys/vm/dirty_ratio
See the kernel's admin-guide/sysctl/vm documentation, where they are described, for more info.
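As a trivial sketch, a program can inspect the current value of one of these knobs by reading the corresponding /proc file (the same value can also be queried from the shell with the sysctl(8) tool, e.g. `sysctl vm.dirty_ratio`):

```c
#include <stdio.h>

int main(void) {
    /* Read the current value of one of the writeback knobs. */
    FILE *f = fopen("/proc/sys/vm/dirty_ratio", "r");
    if (!f) { perror("fopen"); return 1; }
    int ratio;
    if (fscanf(f, "%d", &ratio) == 1)
        printf("vm.dirty_ratio = %d%%\n", ratio);
    fclose(f);
    return 0;
}
```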