Why is initial capacity set to (MAX_ENTRIES + 1) in LRUCache?


While searching for LRU cache implementations in Java, I came across two separate posts with similar implementations, both of which initialize the LinkedHashMap with an initial capacity of MAX_ENTRIES + 1 (e.g. new LinkedHashMap(MAX_ENTRIES + 1, .75F, true)).

What is the reason for setting the initial capacity to MAX_ENTRIES + 1?

Referenced posts:

How would you implement an LRU cache in Java?

Easy, simple to use LRU cache in java
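
The pattern in both posts looks roughly like this (a sketch, not a verbatim copy; the wrapper class and the MAX_ENTRIES value of 100 are illustrative):

    import java.util.LinkedHashMap;
    import java.util.Map;

    class LruCacheExample {
        static final int MAX_ENTRIES = 100;  // illustrative cap; the posts leave it configurable

        // Access-ordered LinkedHashMap that evicts its eldest (least recently
        // accessed) entry once it holds more than MAX_ENTRIES entries.
        static <K, V> Map<K, V> newLruCache() {
            return new LinkedHashMap<K, V>(MAX_ENTRIES + 1, .75F, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };
        }
    }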


Solution

  • Because they don't know what they're doing, frankly.

    The documentation for LinkedHashMap specifies that capacity and load factor behave exactly as they do for HashMap, and the HashMap documentation says:

    An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

    The "capacity" of the map is the number of hash table buckets, not the number of entries allowed in the map.

    Therefore, the hash table for the LinkedHashMap you describe will be resized once about (MAX_ENTRIES + 1) * 0.75 entries have been added, which is essentially three-quarters of MAX_ENTRIES, i.e. before the cache ever reaches its limit.

    I suspect they were trying to ensure that the map had room for exactly one extra entry, so that it wouldn't be resized between the insertion of a new entry and the eviction of the eldest one, but that's not how capacity actually works.
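
    If the goal really is to avoid rehashing, the initial capacity has to be derived from the load factor, because the resize threshold is capacity * loadFactor and the map momentarily holds MAX_ENTRIES + 1 entries before the eldest is evicted. A minimal sketch of that sizing (the LruCache class name and the ceil-based arithmetic are illustrative, not taken from the referenced posts):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Size the table so that capacity * loadFactor stays above maxEntries + 1:
        // the new entry is inserted before removeEldestEntry evicts the eldest one,
        // so the map briefly holds maxEntries + 1 entries, and the resize threshold
        // must sit above that count for the table never to be rehashed.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private static final float LOAD_FACTOR = 0.75f;
            private final int maxEntries;

            public LruCache(int maxEntries) {
                super((int) Math.ceil((maxEntries + 1) / LOAD_FACTOR), LOAD_FACTOR, true);
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        }

    Note that eviction happens only after the new entry has been inserted, which is why the arithmetic uses maxEntries + 1 rather than maxEntries.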