Tags: linux, performance, mmap, memory-mapped-files

Why would changing the filesystem type from XFS to JFS increase mmap file write performance?


I have been playing around with different filesystems and comparing the performance of the various filesystems when using mmap.

I am surprised that changing to JFS doubled the write performance straight off. I thought writes went to the page cache, so after a write the application would keep moving on quickly. Is it actually a synchronous operation under Linux?

There was a slight increase in read performance as well, but it was not as significant.


Solution

  • Writes are done straight to the page cache, but the first time you hit each page with a write causes a minor fault to mark the page as dirty. At that point the filesystem has the chance to perform some work; in the case of XFS, this involves delayed-allocation accounting and extent creation. You could try preallocating the entire file beforehand to see how/if this changes things. (JFS uses the generic mmap operations, which do not supply the callback invoked when a page is made writeable.)

    Note also that once the proportion of dirty page-cache pages exceeds /proc/sys/vm/dirty_ratio, the kernel will switch from background asynchronous writeback to synchronous writeback of dirty pages by the process that dirtied them.