Does the Blobstore guarantee read consistency without a write limit?
I know Google Cloud SQL does,
and the Datastore does (but it then imposes a one-write-per-second limit per entity group).
However, I can't find any info on the Blobstore.
I've always found the Blobstore to perform really fast and without consistency issues. If one process writes 1MB at a time perpetually, and another starts a few milliseconds later and begins reading that blob, you will actually get results back in real time, as long as the initial write didn't fail. In other words, you can read blob parts (i.e. byte ranges) as fast as they are written.
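To make that pattern concrete, here's a plain-Python sketch (not App Engine code; `FakeBlob` and its `fetch_data` method are stand-ins I made up to mimic the inclusive-end-index semantics of the Blobstore's `fetch_data`) simulating one writer appending 1MB chunks while a reader fetches byte ranges mid-upload:

```python
import threading
import time

class FakeBlob:
    """Hypothetical stand-in for a blob: one writer appends chunks
    while readers fetch arbitrary byte ranges concurrently."""
    def __init__(self):
        self._buf = bytearray()
        self._lock = threading.Lock()

    def append(self, chunk):
        with self._lock:
            self._buf.extend(chunk)

    def fetch_data(self, start, end):
        # End index is inclusive, mirroring blobstore.fetch_data semantics.
        with self._lock:
            return bytes(self._buf[start:end + 1])

blob = FakeBlob()
CHUNK = b"x" * (1024 * 1024)  # 1MB per write

def writer():
    for _ in range(3):
        blob.append(CHUNK)
        time.sleep(0.01)

t = threading.Thread(target=writer)
t.start()
time.sleep(0.02)
# The reader sees whatever bytes have already landed, even mid-upload.
part = blob.fetch_data(0, 1023)
t.join()
print(len(part))
```

The reader never blocks on the writer finishing; it simply gets back however many of the requested bytes exist at read time, which is the "read parts as fast as they are written" behavior described above.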
Not sure how useful that is as a real usage of the Blobstore, especially if you need to read all parts to assemble an image or binary, but it illustrates the answer a bit. You are correct: there is no clear throughput/consistency documentation, because the Blobstore doesn't quite work like NDB, and Google even claims that the only true limit is the HTTP connection:
Google App Engine includes the Blobstore service, which allows applications to serve data objects limited only by the amount of data that can be uploaded or downloaded over a single HTTP connection
(from https://cloud.google.com/appengine/docs/python/blobstore)
Not sure if this is useful, but I wanted to share Twitter's version of a blobstore: https://blog.twitter.com/2012/blobstore-twitter’s-house-photo-storage-system. I'm sure they have made improvements since, but the core philosophy is shared by Google's Blobstore implementation.