Tags: caching, azure, azure-caching

Is there a maximum object count in Azure Caching (Preview)?


I know there's a maximum memory limit in Azure Caching, but is there a maximum object count as well? It feels like the cache would get slower as the number of keys increases.

Background:

I need to keep some numbers in memory for each user (summaries that are expensive to calculate from the database but cheap to increment in memory on the fly). As the number of concurrent users grows, I'm worried I might outgrow the cache if there's a limit.

My intended solution:

Let's say I have to keep the Int64 values 'value1' and 'value2' in memory for each user.

Cache the items as userN_value1, userN_value2, [...] and call DataCache.Increment to update each counter when it changes, like this:

cache.Increment("user1_value1", 2500, 0, "someregion"); // cache is a DataCache instance; Increment is not static

As the number of users grows, this may result in a lot of items. Is this something I should worry about? Is there a better approach I haven't thought of?
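
For completeness, here is roughly how the full pattern would look, end to end. This is a minimal sketch assuming the Microsoft.ApplicationServer.Caching client from the Preview SDK; I'm reading Increment's initialValue parameter as the seed used when the key doesn't exist yet:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class CounterExample
    {
        static void Main()
        {
            // Reads cache endpoints from the dataCacheClients section of app/web.config.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            // Regions must exist before use; CreateRegion returns false if it already does.
            cache.CreateRegion("someregion");

            // Atomically adds 2500 to user1_value1, seeding it from initialValue (0)
            // if the key is not in the cache yet, and returns the new value.
            long value1 = cache.Increment("user1_value1", 2500, 0, "someregion");
            Console.WriteLine(value1);
        }
    }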


Solution

  • In practice the limit is imposed by the number of instances in the cluster and the VM size selected for them.

    The Capacity Planning Guide spreadsheet is very useful; I used it to map our current usage of the Shared Cache Service to a matching configuration (and then compare costs).

    If you adjust the Max Number of Active Objects and Average Object Size (Post-Serialization) settings to match your scenario, you can watch the proposed configuration grow (a rough worked estimate follows below).

    There seems to be a limitation: if you increase the requirements far enough, you hit "Cluster Size greater than 32 Not Supported. Consider splitting into multiple clusters". I assume that once you need more than 32 nodes in the cluster, each an ExtraLarge VM, you have reached the limit (a sketch of splitting keys across clusters follows below).
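
    Before opening the spreadsheet, a back-of-envelope check (my assumed figures, not numbers from the guide) shows why the object count itself is rarely the constraint; it's total memory that sizes the cluster:

        using System;

        class CapacityEstimate
        {
            static void Main()
            {
                long users = 1000000;          // assumed concurrent users
                long countersPerUser = 2;      // value1 and value2
                long bytesPerObject = 300;     // serialized Int64 + key + per-item overhead (assumed)

                long objects = users * countersPerUser;
                double memoryGb = objects * (double)bytesPerObject / (1024 * 1024 * 1024);

                // ~2,000,000 objects, ~0.56 GB: comfortably within a small cluster.
                Console.WriteLine("{0:N0} objects, ~{1:F2} GB", objects, memoryGb);
            }
        }

    With these assumptions, even a million users with two counters each fits in well under a gigabyte; the spreadsheet mostly starts to matter once average object size climbs into the kilobytes.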
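
    If you do outgrow a single cluster, the usual workaround (my sketch, not something from the guide) is to partition keys across several independent named caches, for example by hashing the user id:

        using System;
        using Microsoft.ApplicationServer.Caching;

        class ShardedCounters
        {
            private readonly DataCache[] shards;

            public ShardedCounters(params string[] cacheNames)
            {
                // One named cache per cluster; each must be configured in dataCacheClients.
                var factory = new DataCacheFactory();
                shards = Array.ConvertAll(cacheNames, name => factory.GetCache(name));
            }

            public long Increment(string userId, string counter, long delta)
            {
                // A stable hash picks the same shard for a given user every time.
                // Note: String.GetHashCode is not stable across processes on newer
                // runtimes; use a fixed hash function in production code.
                int shard = (userId.GetHashCode() & 0x7fffffff) % shards.Length;
                return shards[shard].Increment(userId + "_" + counter, delta, 0);
            }
        }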