Tags: php, amazon-ec2, amazon-s3, s3fs

Saving files to Amazon S3 using S3FS with PHP on Red Hat Linux, and files being overwritten with nothing


When writing a file to S3 using S3FS, if that file is accessed while it is being written, the data in the file is deleted.
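To make the failure concrete, here is a minimal sketch of the kind of write we do through the s3fs mount (the mount point, file path, and data below are hypothetical):

```php
<?php
// Hypothetical example: the bucket is mounted through s3fs at /mnt/s3.
$path = '/mnt/s3/reports/daily.csv';
$rows = [
    ['a', 'widget', '12'],
    ['b', 'widget', '17'],
];

$fh = fopen($path, 'w');            // open the file on the s3fs mount
foreach ($rows as $row) {
    fwrite($fh, implode(',', $row) . "\n");
}
fclose($fh);                        // s3fs uploads the object when the handle is flushed/closed

// What we observe: if another process reads $path before fclose() completes,
// the object that ends up in S3 is empty.
```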

We first noticed this on a Red Hat Linux server hosting a product we were beta testing. When we went to fix it, we moved that product to an Ubuntu instance, and the problem went away.

We have since set up a server for a client who wanted Red Hat, moved some code to it, and that server is now having the same overwrite issue.


Solution

  • The behavior you describe makes sense. A bit of explanation of how S3 works vs standard volumes is required.

    A standard volume can be read/written by the OS at a block level. Multiple processes can access the file, but some locks are required to prevent corruption.

    S3 treats operations as whole files. Either the file gets uploaded in its entirety or it doesn't exist at all.

    s3fs tries to create an interface to something that isn't a volume so that you can mount it on the file system. Under the covers, it copies each file you access to the local file system and stores it in a temp directory. While you can generally do whole-file operations with s3fs (copying, deleting, etc.), trying to open a file directly from s3fs to do block-level operations is going to end badly.

    There are other options. Reworking your code to pull and push whole files from S3 directly can work (see the sketch below), but it sounds like you need something that behaves more like NFS.
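For example, here is a minimal sketch of pushing and pulling whole files with the AWS SDK for PHP (v3, installed via Composer) instead of going through the mount; the bucket name, region, and paths are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Placeholder region and bucket; adjust for your setup.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);
$bucket = 'my-bucket';

// Push: upload the whole file in one operation.
// The object only becomes visible in S3 once the upload completes.
$s3->putObject([
    'Bucket'     => $bucket,
    'Key'        => 'reports/daily.csv',
    'SourceFile' => '/tmp/daily.csv',
]);

// Pull: download the whole object to a local file, then work on it locally.
$s3->getObject([
    'Bucket' => $bucket,
    'Key'    => 'reports/daily.csv',
    'SaveAs' => '/tmp/daily-copy.csv',
]);
```

Because each call transfers the object as a unit, readers either see the previous complete version or the new complete version, never a half-written file.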