I'm using Azure Blob Storage to cache the intermediate results of some calculations. It all works great, except that, very occasionally, the Azure Blob Storage client returns an error like this:
The remote server returned an error: (400) Bad Request.
The C# code in question looks like this:
public void Upload(string fileName, T entity)
{
    try
    {
        var blockBlob = _blobContainer.GetBlockBlobReference(fileName);
        using (var stream = _serializer.Serialize(entity))
        {
            blockBlob.UploadFromStream(stream);
        }
    }
    catch (Exception ex)
    {
        var json = JsonConvert.SerializeObject(entity).SubstringSafe(0, 500);
        _logger.Error("Error uploading object '{0}' of type '{1}' to blob storage container '{2}'; entity='{3}'; error={4}",
            fileName,
            typeof(T).Name,
            _containerName,
            json,
            ex.CompleteMessage());
        throw;
    }
}
The fileName might be something like "4110\GetNodesForPathAnalysis" (which works in other circumstances), and the _containerName might be "segmentedids" (which also works in other circumstances). I know that the usual cause of this 400 error - one that has bitten me several times - is a container or blob name that violates The Rules, but that doesn't seem to be the case here.
The error is transient - if I refresh the page on which it shows up, the object (with the same container and file name) gets uploaded to Azure Blob Storage correctly.
Any suggestions?
You can set up the BlobRequestOptions class with a retry strategy and a server timeout, as follows:
// Set the blob upload timeout and retry strategy.
// Requires the classic storage SDK namespaces:
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

BlobRequestOptions options = new BlobRequestOptions();
options.ServerTimeout = new TimeSpan(0, 180, 0);                // 180-minute server timeout
options.RetryPolicy = new ExponentialRetry(TimeSpan.Zero, 20);  // up to 20 retry attempts
You can then pass this options object to the upload operation, for example to PutBlock. The detailed post below describes splitting the data into blocks and uploading them to blob storage in parallel and asynchronously:
http://sanganakauthority.blogspot.com/2014/07/upload-large-files-to-azure-block-blob.html
If async is not required, you can do the same thing synchronously. Hope this helps.
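In the question's Upload method, the same options can be passed straight to UploadFromStream, which has an overload that also takes an AccessCondition and OperationContext. Below is a minimal sketch assuming the classic Microsoft.WindowsAzure.Storage SDK that the question appears to use; _blobContainer and _serializer are the fields from the question, and the timeout/retry values are illustrative, not prescriptive.

public void Upload(string fileName, T entity)
{
    // Exponential back-off retry: 2-second initial delta, up to 5 attempts,
    // plus a per-request server timeout (values chosen for illustration).
    var options = new BlobRequestOptions
    {
        ServerTimeout = TimeSpan.FromMinutes(3),
        RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 5)
    };

    var blockBlob = _blobContainer.GetBlockBlobReference(fileName);
    using (var stream = _serializer.Serialize(entity))
    {
        // UploadFromStream(source, accessCondition, options, operationContext)
        blockBlob.UploadFromStream(stream, null, options, null);
    }
}

If you want the same policy applied to every call, recent versions of the classic SDK also let you assign the options once to the CloudBlobClient's DefaultRequestOptions property when the client is created, instead of passing them on each upload.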