I have rather large files to retrieve from blob storage, and I'm thinking about resiliency. If a download fails, e.g. because the network connection suffers a temporary interruption, it needs to retry x number of times. However, I'm noticing behaviour that I don't know how to deal with.
I'm replicating a network fault by simply unplugging my network cable in the middle of a transfer to my machine. What then happens is that the transfer stops at x% and never moves on. The session stays active, the transfer task is still open, and the destination file remains locked. Even after I plug the cable back in, it just stays like this until I close down my PowerShell session (in this case, the ISE).
This is the command I'm using:
Get-AzureStorageBlobContent -Blob $file.Name -Container $containerName -Destination $localPath -Context $Context -ClientTimeoutPerRequest 60 -ErrorAction Stop -Force
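For context, this is roughly the retry wrapper I had in mind (a sketch only; `$maxRetries` and the back-off values are placeholders):

```powershell
# Sketch: retry the download up to $maxRetries times, backing off between attempts.
$maxRetries = 3
for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
    try {
        Get-AzureStorageBlobContent -Blob $file.Name -Container $containerName `
            -Destination $localPath -Context $Context `
            -ClientTimeoutPerRequest 60 -ErrorAction Stop -Force
        break   # success, stop retrying
    }
    catch {
        if ($attempt -eq $maxRetries) { throw }   # give up after the last attempt
        Start-Sleep -Seconds (10 * $attempt)      # simple linear back-off
    }
}
```

The problem is that the catch block only helps if the cmdlet actually throws, which is exactly what isn't happening here.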
It never throws an error; it just gets stuck. Is there any way I can get this to time out and return an error to throw to a catch block?
As you said, if a network issue interrupts a download made via the Get-AzureStorageBlobContent cmdlet, the error never reaches the catch block. If you'd like to restart the transfer and have it resume from the point of interruption, you could try AzCopy instead. AzCopy supports a restartable mode, which handles interruptions caused by network or other issues during file transfer and resumes the transfer from the point of interruption.
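For example, with the classic Windows AzCopy (the version contemporary with the Get-AzureStorageBlobContent cmdlet), a download might look like this; the account name, key, pattern, and paths below are placeholders:

```powershell
# Download a large blob with a journal folder (/Z) so the transfer is restartable.
AzCopy /Source:https://<myaccount>.blob.core.windows.net/<mycontainer> `
       /Dest:C:\Downloads `
       /SourceKey:<storage-account-key> `
       /Pattern:"largefile.vhd" `
       /Z:C:\azcopy-journal
```

If the transfer is interrupted, AzCopy records its progress in the journal folder; re-running the same command lets it resume from where it left off instead of starting over.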