Why doesn't Ehcache propagate expiration events, and how do you deal with it?
Here is my situation: I have two nodes that synchronize with each other over RMI. My config is below:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual, rmiUrls=//localhost:51001/sessionCache|//localhost:51002/sessionCache"/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=0.0.0.0, port=51002, socketTimeoutMillis=2000"/>
<diskStore path="java.io.tmpdir"/>
<cache name="sessionCache"
maxEntriesLocalHeap="20000"
maxEntriesLocalDisk="100000"
eternal="false"
diskSpoolBufferSizeMB="20"
timeToIdleSeconds="60"
memoryStoreEvictionPolicy="LRU"
transactionalMode="off">
<persistence strategy="localTempSwap"/>
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" properties="replicateAsynchronously=true" />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=false"
propertySeparator=","/>
</cache>
Now imagine the following scenario:
1. Both server A and server B hold the same element in sessionCache.
2. The element keeps being read on server A, which keeps resetting the timeToIdleSeconds counter for it there.
3. Server B gets no reads for the element, so after timeToIdleSeconds it expires there.
4. The expiry on server B fires RegisteredEventListeners.internalNotifyElementExpiry, which ends up calling RMISynchronousCacheReplicator.notifyElementExpired.
In step 4 I expect server B to send RmiEventType.REMOVE to server A. Instead, RMISynchronousCacheReplicator.notifyElementExpired has the following body:
public final void notifyElementExpired(final Ehcache cache, final Element element) {
    /* Do not propagate expiries. The element should expire in the remote cache
       at the same time, thus preserving coherency. */
}
It looks like the creator of RMICacheReplicatorFactory never accounted for timeToIdleSeconds-based expiration.
Is a manual call to cache.replace() right after each cache.get() the only way to reset the TTI (timeToIdle) across the cluster? Is an additional cacheEventListener that calls cache.remove() from notifyElementExpired the only way to remove an expired element from the whole cluster?
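To illustrate the first option, I mean something like this on every read (touchOnGet is just a name I made up, not an Ehcache API):

import net.sf.ehcache.{Ehcache, Element}

object CacheTouch {
  // Hypothetical helper: re-putting the element after a hit should produce an
  // update event that the RMI replicator forwards to the peers, so they reset
  // their timeToIdle counters as well.
  def touchOnGet(cache: Ehcache, key: AnyRef): Option[Element] =
    Option(cache.get(key)).map { element =>
      cache.replace(element)
      element
    }
}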
Well, I ended up making a hack. Let me know, guys, if there is a more elegant solution.
I created an additional cache event listener, which looks like this:
import java.util.Properties
import net.sf.ehcache.{Ehcache, Element}
import net.sf.ehcache.event.CacheEventListener

class MyCacheEventListener(properties: Properties) extends CacheEventListener {
  // Re-publish the local expiry as a removal; remoteEvent = false marks it as
  // a local event, so the RMI replicator forwards it to the peers.
  override def notifyElementExpired(cache: Ehcache, element: Element): Unit =
    cache.getCacheEventNotificationService.notifyElementRemoved(element, false)

  override def notifyElementRemoved(cache: Ehcache, element: Element): Unit = {}
  override def notifyElementEvicted(cache: Ehcache, element: Element): Unit = {}
  override def notifyRemoveAll(cache: Ehcache): Unit = {}
  override def notifyElementPut(cache: Ehcache, element: Element): Unit = {}
  override def notifyElementUpdated(cache: Ehcache, element: Element): Unit = {}
  override def dispose(): Unit = {}

  // CacheEventListener extends Cloneable, so clone() must be public here.
  override def clone(): AnyRef = super.clone()
}
It is responsible for sending a fake RmiEventType.REMOVE to the cluster. It's registered with local scope in the config:
<cacheEventListenerFactory class="com.zzzz.MyCacheEventListenerFactory" listenFor="local" />
For refreshing the TTI, I had to do this after each successful cache.get():
cache.getCacheEventNotificationService.notifyElementUpdated(element, false)
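So a read that should keep the session alive cluster-wide ends up looking roughly like this (getAndRefresh is my own wrapper, not an Ehcache method):

import net.sf.ehcache.{Ehcache, Element}

object SessionReads {
  // Sketch of the read path: remoteEvent = false marks the update as local,
  // so the RMI replicator propagates it and the peers reset their TTI too.
  def getAndRefresh(cache: Ehcache, key: AnyRef): Option[Element] =
    Option(cache.get(key)).map { element =>
      cache.getCacheEventNotificationService.notifyElementUpdated(element, false)
      element
    }
}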
It's ugly, but I'm not sure what else I could do. Frankly, I think Terracotta could easily implement this in their RMICacheReplicatorFactory.