I'm currently using Rackspace Cloud Files for backing up files, some of which can be rather large, and I would like to avoid having to restart from the beginning every time there is a network failure. For example, some time ago my log showed a 503 error (server unavailable) that caused the upload to stop.
Is there any way the .NET SDK can handle this? If not, is there another possible solution working around the SDK? I've been searching for a solution, but haven't come across anything yet.
Thank you.
EDIT: I've tried to work around this in the meantime by writing my own segmentation for files as large as 2 GB, even though the SDK does that for you. Dealing with smaller pieces of the file helps, but it takes up a lot of room in the container (1,000-object limit), so I'd still like to see if there is a better way to prevent this problem.
I can't really speak for the .NET SDK, but I can give you some tips as far as Cloud Files goes.
"Is there another possible solution working around the SDK?"
We usually recommend segmenting large objects yourself. This lets you upload multiple segments in parallel, and if a segment fails while uploading, you can just re-upload that single segment rather than starting over. As a general rule we recommend segments of roughly 100 MB.
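I can't show this with the .NET SDK itself, but here's a bare-bones sketch of the idea against the Cloud Files (OpenStack Swift) REST API using HttpClient. `storageUrl` and `authToken` are assumed to come from your authentication step, the zero-padded segment-naming scheme and the retry/back-off numbers are just my own conventions, and the object name is assumed to be URL-safe:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class SegmentUploader
{
    const long SegmentSize = 100L * 1024 * 1024;               // ~100 MB per segment
    static readonly HttpClient Http = new HttpClient();
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(4); // cap parallelism (and memory use)

    // storageUrl and authToken come from your auth request; objectName must be URL-safe.
    public static async Task UploadSegmentsAsync(
        string filePath, string storageUrl, string container,
        string objectName, string authToken)
    {
        long fileLength = new FileInfo(filePath).Length;
        int segmentCount = (int)((fileLength + SegmentSize - 1) / SegmentSize);

        var uploads = Enumerable.Range(0, segmentCount).Select(async i =>
        {
            await Gate.WaitAsync();
            try
            {
                long offset = i * SegmentSize;
                var buffer = new byte[Math.Min(SegmentSize, fileLength - offset)];

                // One stream per segment so reads can happen concurrently.
                using (var fs = File.OpenRead(filePath))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    int read = 0;
                    while (read < buffer.Length)
                        read += await fs.ReadAsync(buffer, read, buffer.Length - read);
                }

                // Zero-padded names keep the segments in order: backup.dat/000001, ...
                string url = $"{storageUrl}/{container}/{objectName}/{i + 1:D6}";

                // On failure, retry only this segment instead of the whole file.
                for (int attempt = 1; ; attempt++)
                {
                    var request = new HttpRequestMessage(HttpMethod.Put, url)
                    {
                        Content = new ByteArrayContent(buffer)
                    };
                    request.Headers.Add("X-Auth-Token", authToken);

                    var response = await Http.SendAsync(request);
                    if (response.IsSuccessStatusCode) break;
                    if (attempt == 5) response.EnsureSuccessStatusCode();                // give up, throw
                    await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));        // back off
                }
            }
            finally { Gate.Release(); }
        }).ToList();

        await Task.WhenAll(uploads);
    }
}
```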
If you need to be able to access your file as a single object, you can use the segments to create a Static Large Object (SLO).
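Creating the SLO is a single PUT of a JSON manifest with the `?multipart-manifest=put` query parameter. Here's a sketch to pair with the uploader above; the `SloManifest` name and the tuple parameter are my own, each entry's etag is the MD5 that the segment PUT returned in its Etag header, and size_bytes is the segment's length:

```csharp
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SloManifest
{
    static readonly HttpClient Http = new HttpClient();

    // segments: one entry per segment, in upload order. Path is "container/segmentName",
    // Etag is the MD5 returned by that segment's PUT, Size its length in bytes.
    public static async Task CreateAsync(
        string storageUrl, string container, string objectName, string authToken,
        (string Path, string Etag, long Size)[] segments)
    {
        string json = "[" + string.Join(",", segments.Select(s =>
            $"{{\"path\":\"{s.Path}\",\"etag\":\"{s.Etag}\",\"size_bytes\":{s.Size}}}")) + "]";

        // "?multipart-manifest=put" tells Cloud Files this PUT is an SLO manifest,
        // not a regular object.
        string url = $"{storageUrl}/{container}/{objectName}?multipart-manifest=put";
        var request = new HttpRequestMessage(HttpMethod.Put, url)
        {
            Content = new StringContent(json, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("X-Auth-Token", authToken);

        (await Http.SendAsync(request)).EnsureSuccessStatusCode();
    }
}
```

Once the manifest is in place, a GET on the object name streams the concatenated segments back as a single file.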
"takes up a lot of room in the container (1,000-object limit)"
Containers don't have a hard limit on the number of objects they can contain; that said, if you expect to have a million objects, you may want to spread them across multiple containers. If you're talking about an SLO's 1,000-segment limit, you can always create nested SLOs.
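A nested SLO is just a manifest whose entries point at other SLO manifests rather than raw segments. A usage sketch, reusing the hypothetical `SloManifest` helper from above; the etags and sizes here are placeholders, and in practice you'd take them from a HEAD request on each child manifest:

```csharp
using System.Threading.Tasks;

class NestedSloExample
{
    // storageUrl and authToken as in the earlier sketches. The etags and sizes are
    // placeholders -- in practice, HEAD each child manifest for its Etag and length.
    public static Task CreateParentAsync(string storageUrl, string authToken)
    {
        var children = new[]
        {
            // Each path is itself an SLO manifest covering up to 1,000 segments.
            ("backups/bigfile-part1", "0f343b0931126a20f133d67c2b018a3b", 104857600000L),
            ("backups/bigfile-part2", "60b725f10c9c85c70d97880dfe8191b3", 52428800000L),
        };

        // The parent manifest stitches the child SLOs into one downloadable object.
        return SloManifest.CreateAsync(storageUrl, "backups", "bigfile", authToken, children);
    }
}
```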