Tags: c#, asp.net-core, memorycache

IMemoryCache, MemoryCache, Key Space, Dependency Injection, and Clear()


I have what looks like a simple problem that blew up. We have:

public class MyService {
    private readonly IMemoryCache cache;
    private readonly ConcurrentDictionary<int, bool> cacheKeys = new ConcurrentDictionary<int, bool>();

    public MyService(IMemoryCache m) { cache = m; }

    public UserModel GetUser(int userId) {
       // Gets the user from the cache if possible, adds it if not
       if (cache.TryGetValue(userId, out UserModel result)) return result;
       result = new UserModel(userId);
       cache.Set(userId, result); // Absolute and Sliding expiration omitted for brevity
       cacheKeys.TryAdd(userId, true);
       return result;
    }

    public void InvalidateCache() {
        // Existing comment reads something along the lines of
        // "Since IMemoryCache has no Clear, we do this"
        foreach (var entry in cacheKeys.ToList())
            cache.Remove(entry.Key);
    }

    // RegisterCacheEvictionHandler omitted for brevity
}

The service is added to dependency injection by AddSingleton() and there is no reference to IMemoryCache or MemoryCache in Startup.cs.
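(For context: IMemoryCache has to be registered somewhere for the injection above to resolve, so if Startup.cs never mentions it, a call along these lines is presumably being made elsewhere. A minimal sketch of the usual registration; the method body shown is hypothetical, not our actual code:)

```csharp
// Hypothetical ConfigureServices, for illustration only.
public void ConfigureServices(IServiceCollection services)
{
    // Registers one shared MemoryCache instance behind IMemoryCache.
    // Every consumer that injects IMemoryCache receives the same instance,
    // which is where the shared key space problem below comes from.
    services.AddMemoryCache();

    services.AddSingleton<MyService>();
}
```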

So there are several problems.

  1. Adding to the IMemoryCache and adding to the ConcurrentDictionary are two separate operations, so they can interleave with evictions, leaving the dictionary out of date with the cache. Stale keys build up in the ConcurrentDictionary.
  2. Calling MemoryCache.Clear() would break the other users of the shared IMemoryCache; one of them keeps non-evictable objects in the cache.
  3. The key space is shared; this code only works by the accident that all the other users happen to use Guid keys.
  4. The existing code sets a time to live, but nothing anywhere sets a cache size limit, compaction behavior, or anything like that.
  5. We have multiple heavyweight processes on the server, which has already forced us to switch to workstation GC. I'm not sure what that implies for MemoryCache.
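To make problem 3 concrete: since every consumer shares one cache instance, a bare int key from this service can collide with any other consumer's keys. A common mitigation (not in the code above, shown only as a sketch) is to namespace the key, for example with a record struct whose structural equality keeps prefixed keys distinct:

```csharp
// Hypothetical composite key; record structs compare by value, so two
// entries collide only if both the prefix and the id match.
public readonly record struct CacheKey(string Prefix, int Id);

// Instead of cache.Set(userId, ...) one would write:
//   cache.Set(new CacheKey("user", userId), result);
// and another service using new CacheKey("order", 42) can never collide.
```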

(MemoryCache here is the ASP.NET Core version from Microsoft.Extensions.Caching.Memory, not the System.Runtime.Caching backwards-compatibility one.)
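Regarding problem 4: if a size limit is ever added to the shared cache, every Set() call must also declare an entry size, or Set() throws. A sketch of what a bounded cache looks like (the numbers are placeholders, not recommendations):

```csharp
using Microsoft.Extensions.Caching.Memory;

// SizeLimit is in caller-defined units, not bytes.
var bounded = new MemoryCache(new MemoryCacheOptions
{
    SizeLimit = 1024,            // total units the cache may hold
    CompactionPercentage = 0.25  // fraction evicted when the limit is hit
});

// With SizeLimit set, an entry that declares no size makes Set() throw.
bounded.Set("key", "value", new MemoryCacheEntryOptions().SetSize(1));
```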


Solution

  • The ability to invalidate a known set of objects in a MemoryCache is almost built in; it's just not easy to discover.

    If we do:

       private readonly IMemoryCache cache;
       private Tuple<MemoryCacheEntryOptions, CancellationTokenSource> options; // A ValueTuple will not work here: Interlocked.Exchange requires a reference type.
    
       public UserService(IMemoryCache m) {
            cache = m;
            options = CreateOptions();
       }
    
       private Tuple<MemoryCacheEntryOptions, CancellationTokenSource> CreateOptions() {
            var source = new CancellationTokenSource();
            var options = new MemoryCacheEntryOptions()
                  .SetSlidingExpiration(new TimeSpan(...))
                  .SetAbsoluteExpiration(new TimeSpan(...))
                  // This is the heart of the solution. We can signal immediate expiration of a set of objects with one call.
                  .AddExpirationToken(new CancellationChangeToken(source.Token));
             return Tuple.Create(options, source);
        }
    
        public UserModel GetUser(int userId) {
           // Gets the user from the cache if possible, adds it if not
           if (cache.TryGetValue(userId, out UserModel result)) return result;
           var cacheoptions = options.Item1;
           result = new UserModel(userId);
           cache.Set(userId, result, cacheoptions);
           return result;
        }
    
        public void InvalidateCache() {
            // Swap in fresh options atomically; Interlocked.Exchange returns the old pair.
            var oldOptions = Interlocked.Exchange(ref options, CreateOptions());
            oldOptions.Item2.Cancel();
            oldOptions.Item2.Dispose();
        }
    

    Then InvalidateCache() works with no locking. Since there is no second key collection, there is nothing to fall out of synchronization. The approach also coexists with cache size constraints: if a SizeLimit were later introduced, only the .Set() call would need revisiting (to declare an entry size).
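    To see the mechanism in isolation, here is a minimal standalone sketch (assuming the Microsoft.Extensions.Caching.Memory package; the keys and values are illustrative):

    ```csharp
    using System.Threading;
    using Microsoft.Extensions.Caching.Memory;
    using Microsoft.Extensions.Primitives;

    var cache = new MemoryCache(new MemoryCacheOptions());
    var source = new CancellationTokenSource();

    // Both entries share one expiration token.
    var entryOptions = new MemoryCacheEntryOptions()
        .AddExpirationToken(new CancellationChangeToken(source.Token));

    cache.Set(1, "alice", entryOptions);
    cache.Set(2, "bob", entryOptions);

    // Cancelling the token expires every entry created with it;
    // entries added later with a fresh token are unaffected.
    source.Cancel();
    ```

    After source.Cancel(), cache.TryGetValue returns false for both keys, which is exactly what the InvalidateCache() above relies on.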