asp.net-core, .net-core, memorycache

2nd call to MemoryCache.Set() with the same key erases entry if cache is full


This is a bit of an edge case, and I would submit this as a bug in the repo if I could find it...

Consider the following LINQPad snippet:

void Main()
{
    var memoryCache = new MemoryCache(new MemoryCacheOptions
    {
        SizeLimit = 1 // <-- Setting to 2 fixes the issue
    });

    Set(memoryCache);
    memoryCache.Get("A").Dump(); // Yields 1
    Set(memoryCache);
    memoryCache.Get("A").Dump(); // Yields null
}

private void Set(MemoryCache memoryCache)
{
    //memoryCache.Remove("A"); // <-- Also fixes the issue

    memoryCache.Set("A", 1, new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1), 
        SlidingExpiration = TimeSpan.FromDays(1), 
        Size = 1
    });
}

My question is: when using .Set(), is the new entry added first and the old one then removed, requiring extra space to be allocated in the cache?
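
For diagnosis, a post-eviction callback can be attached to the entry to report why it disappears; the variant below (SetWithDiagnostics is my own naming, not part of the snippet above) prints the EvictionReason whenever the entry is evicted:

private void SetWithDiagnostics(MemoryCache memoryCache)
{
    var options = new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1),
        SlidingExpiration = TimeSpan.FromDays(1),
        Size = 1
    };

    // Reports why the entry left the cache (Replaced, Capacity, Expired, ...)
    options.RegisterPostEvictionCallback((key, value, reason, state) =>
        Console.WriteLine($"Evicted {key}: {reason}"));

    memoryCache.Set("A", 1, options);
}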


Solution

  • This seems to relate to a bug I logged (which was rejected). You can see the source for this here, and from what I read the logic in your (and my) case works like this:

    • Add item requested.
    • The size limit would be exceeded if the item were added (UpdateCacheSizeExceedsCapacity()), so the request is rejected (silently, which is what I objected to).
    • Also, because this condition was detected, OvercapacityCompaction() is kicked off, which will remove your item (a short demonstration follows this list).
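
    For example, the same silent rejection shows up with a completely different key, using only the public API (a LINQPad-style snippet, with Dump() as in the question):

    var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 1 });

    cache.Set("A", 1, new MemoryCacheEntryOptions { Size = 1 }); // fills the cache
    cache.Set("B", 2, new MemoryCacheEntryOptions { Size = 1 }); // would exceed SizeLimit, so it is silently refused

    cache.Get("B").Dump(); // null: "B" was never added; compaction is queued in the background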

    There looks to be a race condition, as the compaction work is queued to a background thread; perhaps your test occasionally finds the item still there?

    To answer your specific question: no, the new entry is not added first with the excess removed afterwards.

    Edit:

    Re the second Get() always returning null: I missed some extra handling of the existing entry, if one is found (it doesn't improve the outcome, though). Before checking whether the size would be exceeded by adding the item, there is this:

    if (_entries.TryGetValue(entry.Key, out CacheEntry priorEntry))
    {
        priorEntry.SetExpired(EvictionReason.Replaced);
    }
    

    I.e. if it finds your existing entry, it marks it as evicted. It then applies the UpdateCacheSizeExceedsCapacity() test without factoring in that it has just evicted the existing entry (which arguably it could).
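
    To make the arithmetic concrete (illustrative numbers only, not the library's internal fields): when the second Set() runs, the prior "A" entry still counts towards the tracked size, so with SizeLimit = 1 the capacity check effectively evaluates:

    // Illustration only: the prior entry has been marked expired, but its size
    // is not released until RemoveEntry() runs, after the capacity check.
    long currentSize  = 1; // the prior "A" entry
    long newEntrySize = 1; // the replacement entry
    long sizeLimit    = 1; // MemoryCacheOptions.SizeLimit

    bool exceeds = currentSize + newEntrySize > sizeLimit; // 2 > 1, so the new entry is refused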

    Later on, still in Set()/SetEntry(), in the exceeds-capacity case, it does this:

    if (priorEntry != null)
    {
        RemoveEntry(priorEntry);
    }
    

    ...which immediately removes the previous entry. So whether and when OvercapacityCompaction() (or ScanForExpiredItems()) would have got to it doesn't matter; it's gone before Set()/SetEntry() returns.

    (I've also updated the source link above to the current version; it doesn't change the logic.)
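
    As a practical workaround (hinted at by the commented-out Remove("A") in the question), removing the key before setting it again releases the old entry's size before the capacity check runs. A small helper along these lines (the name SetReplacing is my own) avoids the problem:

    using Microsoft.Extensions.Caching.Memory;

    public static class MemoryCacheExtensions
    {
        // Evicting the old entry first frees its size, so the replacement
        // no longer trips the capacity check.
        public static void SetReplacing(this IMemoryCache cache, object key, object value,
            MemoryCacheEntryOptions options)
        {
            cache.Remove(key);              // releases the prior entry's size immediately
            cache.Set(key, value, options); // now fits within SizeLimit
        }
    }

    With that in place, the snippet in the question behaves as expected if the Set() helper calls memoryCache.SetReplacing("A", 1, ...) instead of memoryCache.Set("A", 1, ...).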