Tags: c, linux, file, stdio

Should fsync be used after each fclose?


On a Linux (Ubuntu) device I use a file to save mission-critical data.

From time to time (roughly once in 10,000 cases), the file gets corrupted for unspecified reasons. In particular, the file is truncated: instead of several kilobytes it contains only about 100 bytes.

Now, in the sequence of the software

  1. the file is opened,
  2. modified and
  3. closed.

Immediately after that, the file might be opened again (4), and something else is done with it.

Until now I hadn't realized that fflush (which is called by fclose) doesn't write to the file system, but only to an intermediate buffer. Could it be that the time between 3) and 4) is too short, so the change from 2) is not yet written to disc, and when I reopen the file in 4) I get a truncated version which, when closed again, leads to permanent loss of that data?

Should I use fsync() after each file write in that case?

What do I have to consider regarding power outages? It is not unlikely that the data corruption is related to power loss.


Solution

  • fwrite first writes to an internal stdio buffer; only later (at fflush, at fclose, or when the buffer is full) is that buffer handed to the OS function write.

    The OS also does some buffering, so writes to the device may be delayed.

    fsync ensures that the OS writes its buffers to the device.

    In your open-write-close case you don't need fsync just to read the data back. The OS knows which parts of the file are not yet written to the device, so if a second process wants to read the file, the OS serves the content from memory instead of reading it from the device.

    Of course, when thinking about power outages it may (depending on the circumstances) be a good idea to call fsync to make sure the file content has been handed to the device (which, as Andrew points out, does not necessarily mean the content has reached the disc, because the device itself may buffer writes).
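    That advice could be sketched like this (the helper name and path are placeholders; error handling is kept minimal):

    ```c
    #include <stdio.h>
    #include <unistd.h>   /* fsync, declared by POSIX */

    /* Write, then flush stdio's buffer to the OS with fflush, then ask
     * the OS to push its own buffers to the device with fsync, and only
     * then close. */
    int save_durably(const char *path, const char *data)
    {
        FILE *fp = fopen(path, "w");
        if (fp == NULL)
            return -1;

        if (fputs(data, fp) == EOF)
            goto fail;

        if (fflush(fp) == EOF)        /* stdio buffer -> OS */
            goto fail;

        if (fsync(fileno(fp)) != 0)   /* OS buffers -> device */
            goto fail;

        return fclose(fp) == EOF ? -1 : 0;

    fail:
        fclose(fp);
        return -1;
    }

    int main(void)
    {
        return save_durably("/tmp/mission.dat", "state=42\n") == 0 ? 0 : 1;
    }
    ```

    For even stronger crash safety, a common pattern (beyond what is shown here) is to write to a temporary file, fsync it, and then rename() it over the old file, so that after a power cut you find either the complete old version or the complete new one, never a truncated mix.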