Tags: bigtable, google-cloud-bigtable

Google Bigtable under-usage: performance


I have seen the warnings about not using Google Bigtable for small data sets.

Does this mean that a workload of 100 QPS could run slower (total time, not per query) than a workload of 8,000 QPS?

I understand that 100 QPS is going to be incredibly inefficient on Bigtable, but could the difference be as drastic as 100 inserts taking 15 seconds to complete, whereas 8,000 inserts could run in 1 second?

I'm just looking for an "in theory, from time to time, yes" vs. "probably relatively unlikely" type of answer to serve as a rough guide for how I structure my performance test cycles.

Thanks


Solution

  • There's a flat start-up cost to running any Cloud Bigtable operations. That start-up cost is generally less than 1 second. I would expect 100 operations to take less time than 8,000 operations. When I see extreme slowness, I usually suspect network latency or some other unique condition.
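
If it helps when structuring those test cycles, here's a minimal sketch using the Python google-cloud-bigtable client that times the first write (which pays the connection start-up cost) separately from a steady-state batch of 100 writes. The project, instance, table, and column-family names are placeholders, and this assumes the table and column family already exist.

```python
import time

from google.cloud import bigtable

# Placeholder identifiers -- substitute your own project/instance/table.
client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")
table = instance.table("my-table")

# First write: pays the one-time channel/connection start-up cost.
start = time.perf_counter()
row = table.direct_row(b"warmup-key")
row.set_cell("cf1", b"col", b"value")
row.commit()
print(f"first write (includes start-up): {time.perf_counter() - start:.3f}s")

# Steady state: send 100 writes in a single mutate_rows batch.
rows = []
for i in range(100):
    r = table.direct_row(f"row-{i:04d}".encode())
    r.set_cell("cf1", b"col", b"value")
    rows.append(r)

start = time.perf_counter()
statuses = table.mutate_rows(rows)
elapsed = time.perf_counter() - start
failures = sum(1 for s in statuses if s.code != 0)
print(f"100 batched writes: {elapsed:.3f}s ({failures} failures)")
```

Separating the warm-up write from the batch keeps the flat start-up cost from being amortized into (and distorting) the per-operation numbers you're comparing across workload sizes.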