Tags: linux, concurrency, file-handling, read-write

Linux web server concurrent file handling (read/write)


I hope you can give me some advice regarding my issue.

What I have is a web server running on a Raspberry Pi. On it, a C program writes a JPEG file at a fixed interval (every second) like this:

fout = fopen("/tmp/image1.jpg", "w");
fwrite(jpgBuffer, jpgFileSize, 1, fout);
fclose(fout);

I access the image through my web browser at "192.168.178.xxx/tmp/image1.jpg". Most of the time the image is shown perfectly, but sometimes I see artifacts in it.

My assumption is that the file is being written to while I am requesting the image from the web browser. How can I avoid this behavior? Or, how can I ensure that the file is not written to while it is open for reading during a request?

I have read about file locks, but I am not sure if this is the way to go. I know that I can set an exclusive lock with the flock function before writing to the file and unlock it afterwards. But I have also read that the reading side has to take a shared (read) lock for this method to work, and I don't know whether the HTTP request I issue through the web browser takes such a lock.
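
On the writer side I imagine something like this (just a sketch replacing the snippet above; flock locks are advisory, so this only helps if the reader also takes a shared LOCK_SH lock on its side):

#include <stdio.h>
#include <sys/file.h>   /* flock */

FILE *fout = fopen("/tmp/image1.jpg", "w");
if (fout != NULL) {
    if (flock(fileno(fout), LOCK_EX) == 0) {  /* exclusive lock while writing */
        fwrite(jpgBuffer, jpgFileSize, 1, fout);
        fflush(fout);                          /* flush before unlocking */
        flock(fileno(fout), LOCK_UN);
    }
    fclose(fout);
}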

Any suggestions are appreciated.

Thanks a lot!


Solution

  • Write the JPEG to a temporary file and then rename it to "/tmp/image1.jpg". rename(2) replaces the destination atomically (as long as both paths are on the same filesystem), so a reader always sees either the complete old image or the complete new one, never a partial write. This will most likely fix your problem (see the first sketch below the list).

    A possible alternative approach is mandatory locking: the writing process takes a write lock after opening the file for reading and writing, and the kernel then blocks the web server from reading the file until the lock is released (either explicitly or implicitly when the writing process exits). However, this approach requires additional setup (the filesystem must be mounted with the "mand" option, and the file needs the setgid bit set via "g+s" with group execute cleared via "g-x"), and mandatory locks are generally frowned upon on Linux (see the second sketch below).
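
For illustration, here is a minimal sketch of the write-then-rename approach. It reuses jpgBuffer and jpgFileSize from the question; the temporary name "/tmp/image1.jpg.tmp" is an arbitrary choice, any name on the same filesystem works:

#include <stdio.h>

static int write_image_atomically(const unsigned char *jpgBuffer, size_t jpgFileSize)
{
    const char *tmp_path   = "/tmp/image1.jpg.tmp";
    const char *final_path = "/tmp/image1.jpg";

    FILE *fout = fopen(tmp_path, "w");
    if (fout == NULL)
        return -1;

    size_t written = fwrite(jpgBuffer, jpgFileSize, 1, fout);
    if (fclose(fout) != 0 || written != 1) {
        remove(tmp_path);     /* don't leave a broken temp file around */
        return -1;
    }

    /* rename(2) replaces the destination atomically: a concurrent
       reader sees either the old image or the new one, never a mix. */
    return rename(tmp_path, final_path);
}

And a rough sketch of the writer side with mandatory locking, assuming the "mand" mount option and the "g+s"/"g-x" permission setup described above are already in place. Note that mandatory locking on Linux is enforced for fcntl(2) record locks, not for flock(2):

#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>

static void write_image_locked(const unsigned char *jpgBuffer, size_t jpgFileSize)
{
    int fd = open("/tmp/image1.jpg", O_RDWR | O_CREAT, 0640);
    if (fd == -1)
        return;

    struct flock lk = { 0 };
    lk.l_type   = F_WRLCK;    /* exclusive write lock... */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;          /* ...covering the whole file */
    fcntl(fd, F_SETLKW, &lk); /* wait until the lock is granted */

    /* Readers are now blocked by the kernel; truncate and rewrite.
       Error handling is omitted for brevity. */
    ftruncate(fd, 0);
    write(fd, jpgBuffer, jpgFileSize);

    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);  /* release the lock */
    close(fd);
}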