I plan to use the SFTP protocol to upload files larger than 1 GB to Azure Blob Storage containers. Does Azure guarantee that such an upload is atomic, i.e. if the connection is lost, the file will be either empty or absent?
As an experiment, I tried uploading a file to a container; when the connection was lost, a zero-length file was created in the container. But I haven't found any documentation explaining that behaviour.
Unfortunately, Azure Blob Storage does not guarantee atomic file uploads over the SFTP protocol. If the connection is lost mid-transfer, only part of the file may be uploaded to the container, leaving a blob smaller than the original file (including the zero-length blob you observed).
Note also that resuming an interrupted upload is not supported by Azure Blob Storage's SFTP endpoint. See here.
As an alternative, if you need uploads to be effectively atomic, consider using Azure Blob Storage's block blob upload mechanism through the Azure Blob Storage SDKs. With it, you upload a large file in chunks as uncommitted blocks and then commit the full block list in a single operation. Until that commit succeeds, no new blob becomes visible; if the connection drops partway through, the staged blocks are simply discarded (Azure garbage-collects uncommitted blocks after a retention period), so no partial file is left behind. Hope this helps.
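Here is a minimal sketch of that pattern using the Python `azure-storage-blob` SDK (`stage_block` + `commit_block_list`). The chunk size, block-ID scheme, and the `blob_client` you pass in are illustrative assumptions, not part of your original setup:

```python
# Hedged sketch: upload a large file as a block blob so the final commit is
# all-or-nothing. Assumes the azure-storage-blob package is installed and that
# the caller supplies an authenticated BlobClient; names here are placeholders.
import uuid

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per staged block (blocks may be far larger)


def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield a file's bytes in fixed-size chunks; the last chunk may be shorter."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk


def upload_atomically(blob_client, path):
    """Stage every block, then commit once.

    Until commit_block_list succeeds, no new blob is visible; a dropped
    connection leaves only uncommitted blocks, which Azure discards later.
    """
    # Imported here so the chunking helper above stays stdlib-only.
    from azure.storage.blob import BlobBlock

    block_ids = []
    for chunk in iter_chunks(path):
        block_id = uuid.uuid4().hex  # equal-length IDs; the SDK base64-encodes them
        blob_client.stage_block(block_id=block_id, data=chunk)  # uncommitted
        block_ids.append(BlobBlock(block_id=block_id))
    blob_client.commit_block_list(block_ids)  # the single atomic step
```

A caller would construct the client with something like `BlobClient.from_connection_string(conn_str, container_name="mycontainer", blob_name="bigfile.bin")` and pass it to `upload_atomically`. (For many cases the SDK's higher-level `upload_blob` also does chunked staging and a final commit for you; the explicit version above just makes the two phases visible.)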