A caching issue for you cache gurus.
Context
We have used OpenSymphony's OsCache for several years and are considering a move to a better/stronger/faster/actively-developed caching product.
Problem
We have relied on OsCache's "group entry" feature and have not found it elsewhere.
In short, OsCache allows you to specify one or more groups at "entry insertion time". Later you can invalidate a whole group of entries without knowing the key for each entry.
OsCache Example
Here is example code using this mechanism:
Object[] groups = {"mammal", "Northern Hemisphere", "cloven-foot"};
myCache.put(myKey, myValue, groups);
// later you can flush all 'mammal' entries
myCache.flushGroup("mammal");
// or flush all 'cloven-foot' entries
myCache.flushGroup("cloven-foot");
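For what it's worth, a group feature like this can be built on top of any map-based cache with a secondary group-to-keys index, so flushing a group only touches that group's entries rather than scanning the whole cache. A minimal sketch (the class and method names here are my own invention, not OsCache's internals):

```java
import java.util.*;

// Minimal sketch of a group-indexed cache (hypothetical class, not OsCache's API).
// Each put records the entry's key under every group it belongs to; flushGroup
// then removes exactly those keys without iterating over the whole cache.
class GroupedCache<K, V> {
    private final Map<K, V> entries = new HashMap<>();
    private final Map<String, Set<K>> groupIndex = new HashMap<>();

    public void put(K key, V value, String... groups) {
        entries.put(key, value);
        for (String g : groups) {
            groupIndex.computeIfAbsent(g, x -> new HashSet<>()).add(key);
        }
    }

    public V get(K key) {
        return entries.get(key);
    }

    public void flushGroup(String group) {
        Set<K> keys = groupIndex.remove(group);
        if (keys != null) {
            for (K k : keys) {
                entries.remove(k);
            }
        }
    }

    public int size() {
        return entries.size();
    }
}
```

Note that flushGroup's cost is proportional to the size of the group, not the size of the cache, at the price of a little extra bookkeeping on each put.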
Alternative: Matcher Mechanism
We also use another home-grown cache, written by a former team member, which uses a "key matcher" pattern for invalidating entries.
In this approach you define your 'key' and 'matcher' classes as follows:
public class AnimalKey
{
    String fRegion;
    String fPhylum;
    String fFootType;

    // getters and setters go here
}
Matcher:
public class RegionMatcher implements ICacheKeyMatcher
{
    String fRegion;

    public RegionMatcher(String pRegion)
    {
        fRegion = pRegion;
    }

    public boolean isMatch(Object pKey)
    {
        boolean bMatch = false;
        if (pKey instanceof AnimalKey)
        {
            AnimalKey key = (AnimalKey) pKey;
            bMatch = fRegion.equals(key.getRegion());
        }
        return bMatch;
    }
}
Usage:
myCache.put(new AnimalKey("North America", "mammal", "chews-the-cud"), myValue);
// remove all entries for 'North America'
ICacheKeyMatcher myMatcher = new RegionMatcher("North America");
myCache.removeMatching(myMatcher);
This mechanism is simple to implement, but has a performance downside: it has to spin through every entry in the cache to invalidate a group. (Though it's still much faster than spinning through the database.)
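To make that cost concrete, here is a rough sketch of what a removeMatching implementation looks like (the cache class and its internals are illustrative, not the actual home-grown code): the full scan over the key set is where the O(n) cost comes from.

```java
import java.util.*;

// Matcher interface mirroring the home-grown design described above.
interface ICacheKeyMatcher {
    boolean isMatch(Object key);
}

// Hypothetical cache with matcher-based invalidation (illustrative names).
class MatchingCache<V> {
    private final Map<Object, V> entries = new HashMap<>();

    public void put(Object key, V value) {
        entries.put(key, value);
    }

    public V get(Object key) {
        return entries.get(key);
    }

    // O(n): every key in the cache is tested against the matcher,
    // which is exactly the performance downside described above.
    public void removeMatching(ICacheKeyMatcher matcher) {
        entries.keySet().removeIf(matcher::isMatch);
    }

    public int size() {
        return entries.size();
    }
}
```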
The question
Do any of the actively-developed caching products support a "group invalidation" mechanism like OsCache's, i.e. invalidating a set of entries without knowing each entry's key?
thanks
will
I too implemented a matcher approach when trying to scale a legacy system with an ad hoc invalidation process. The O(n) nature wasn't a problem since the caches were small, the invalidation was performed on a non-user-facing thread, and it didn't hold the locks, so there was no contention penalty. This was needed for matching against keys that cross-cut caches, such as invalidating all data for a company in caches spread across the application. This was really a problem of having no design centers, so the application was monolithic and poorly decomposed.
When we rewrote it based on domain services, I adopted a different strategy. The domain for specific data was now centralized into specific caches, such as for configurations, so it became a desire for multi-lookup. In this case we realized that the key was just a subset of the value, so we could extract all of the keys after load from metadata (e.g. annotations). This allowed for fine-grained grouping and a convenient programming model through our cache abstraction. I published the core data structure, IndexMap, in a tutorial on the idea. It's not meant for direct usage outside of an abstraction, but it better solves the grouping problem we faced.
http://code.google.com/p/concurrentlinkedhashmap/wiki/IndexableCache
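The idea above, deriving secondary keys from the cached value itself and indexing them so invalidation needs no scan, can be sketched roughly like this (all class, method, and index names here are invented for illustration; the linked IndexMap tutorial describes the real data structure):

```java
import java.util.*;
import java.util.function.Function;

// Sketch of "the key is a subset of the value": secondary keys are extracted
// from the value on put and kept in per-index maps, so entries can later be
// invalidated by any indexed attribute without scanning the cache.
class IndexedCache<V> {
    private final Map<Object, V> primary = new HashMap<>();
    private final Map<String, Function<V, Object>> extractors = new HashMap<>();
    private final Map<String, Map<Object, Set<Object>>> indexes = new HashMap<>();

    // Register a named index with a function that extracts the secondary key
    // from the value (standing in for metadata such as annotations).
    public void addIndex(String name, Function<V, Object> extractor) {
        extractors.put(name, extractor);
        indexes.put(name, new HashMap<>());
    }

    public void put(Object key, V value) {
        primary.put(key, value);
        for (Map.Entry<String, Function<V, Object>> e : extractors.entrySet()) {
            Object secondary = e.getValue().apply(value);
            indexes.get(e.getKey())
                   .computeIfAbsent(secondary, x -> new HashSet<>())
                   .add(key);
        }
    }

    public V get(Object key) {
        return primary.get(key);
    }

    // Invalidate every entry whose extracted attribute matches, with no scan.
    public void invalidateBy(String indexName, Object secondaryKey) {
        Set<Object> keys = indexes.get(indexName).remove(secondaryKey);
        if (keys != null) {
            for (Object k : keys) {
                primary.remove(k);
            }
        }
    }
}
```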