Tags: azure, azure-storage, azure-table-storage

Azure Table - benchmark shows performance above and beyond official targets


We've written an Azure Table benchmark/stress test using the official Java SDK. All the benchmark does is download entire partitions from an Azure Table as fast as possible (a sketch of the core loop is below).

Each partition contains between ~5 and ~60K entities, averaging about 1 KB each, and the code runs on an Azure VM in the same region as the table. According to the official documentation, each partition is limited to serving 2,000 entities per second:

Target throughput for single table partition (1 KB entities) - Up to 2000 entities per second

So far, we've managed to read as many as 18K entities per second from a single partition in some tests. We saw such numbers even for cold partitions that had been untouched for months.

I expected to be throttled once we hit 2,000 entities per second, but we aren't. How is this possible? And can we rely on the numbers we see in practice, or is this a fluke?
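
For reference, here is a minimal sketch of the kind of loop the benchmark runs. It assumes the current azure-data-tables SDK; the connection string (read from an environment variable), table name, and partition key are placeholders, not our actual setup:

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableClientBuilder;
import com.azure.data.tables.models.ListEntitiesOptions;
import com.azure.data.tables.models.TableEntity;

public class PartitionBenchmark {
    public static void main(String[] args) {
        // Placeholder connection string and table name.
        TableClient client = new TableClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .tableName("mytable")
                .buildClient();

        // Download every entity in a single partition and time it.
        ListEntitiesOptions options = new ListEntitiesOptions()
                .setFilter("PartitionKey eq 'partition-1'");

        long start = System.nanoTime();
        long count = 0;
        for (TableEntity ignored : client.listEntities(options, null, null)) {
            count++; // paging (continuation tokens) is handled by the iterator
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d entities in %.2f s (%.0f entities/s)%n",
                count, seconds, count / seconds);
    }
}
```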


Solution

  • Azure Storage scalability targets are not exact; there's no guarantee of hitting exactly 2,000 transactions per second. That said, the scalability target for Azure Storage tables is 2,000 transactions per second per partition, or 20,000 transactions per second per storage account.

    Transaction = REST call, not the same as a database transaction.

    I saw the docs you pointed to, which mention 2,000 entities per second, but that's not quite how it works. You're likely receiving multiple entities per REST call (GET), which would explain why you're seeing more than 2,000 entities per second from your partitions.

    Per the documentation:

    A query against the Table service may return a maximum of 1,000 entities at one time and may execute for a maximum of five seconds.

    So it's highly likely you're getting more than one entity per transaction. At up to 1,000 entities per call, your 18K entities per second works out to roughly 18 REST calls per second, which is well under the 2,000-transactions-per-second target. You can confirm this by counting entities per page, as sketched below.
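
    One way to verify this is to iterate the query results by page instead of by entity; each page corresponds to one REST call. A sketch, again assuming the azure-data-tables SDK (printPageSizes and its arguments are illustrative names, not part of the SDK):

    ```java
    import com.azure.core.http.rest.PagedResponse;
    import com.azure.data.tables.TableClient;
    import com.azure.data.tables.models.ListEntitiesOptions;
    import com.azure.data.tables.models.TableEntity;

    public class PageSizes {
        // Prints how many entities come back per REST call (per page).
        static void printPageSizes(TableClient client, String partitionKey) {
            ListEntitiesOptions options = new ListEntitiesOptions()
                    .setFilter(String.format("PartitionKey eq '%s'", partitionKey));
            int page = 0;
            for (PagedResponse<TableEntity> response :
                    client.listEntities(options, null, null).iterableByPage()) {
                // Each PagedResponse is one GET against the Table service,
                // i.e. one "transaction" for scalability-target purposes.
                System.out.printf("page %d: %d entities%n",
                        ++page, response.getValue().size());
            }
        }
    }
    ```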