Tags: operating-system, hashmap, fragmentation

Why is a 2D hashmap memory inefficient?


I was told by friends that using a 2D hashmap is strongly discouraged because of fragmentation problems. Is that actually the case, and if so, why?


Solution

  • Personally, I don't see any reason to discourage using a 2D hashmap if there is a legitimate need for one.

    What they may be referring to is how a hash table deals with collisions. If two different keys hash to the same position, both still need to be stored somehow. There are a few techniques for handling this. One is to allocate a very large table up front so that collisions stay rare; that keeps the load factor low but can waste a lot of space. Another common approach is open addressing: when a slot is already taken, probe the following positions until a free one is found.

    It has been a while since I studied how these structures are implemented, but that seems like what they may be talking about. It is not a major issue, and certainly not a reason to avoid hashmaps (including 2D ones). I'm not certain, but I suspect the wasted space compounds with more dimensions if each dimension is backed by its own table (a hashmap of hashmaps), which would make it more of a concern for a 2D hashmap.
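
As a rough illustration of the open-addressing idea described above, here is a minimal toy hash table in Python that resolves collisions by probing for the next free slot. The class name, capacity, and keys are just made up for the example; real implementations also resize and use smarter probe sequences:

```python
class ProbingTable:
    """Toy hash table using open addressing (linear probing)."""

    def __init__(self, capacity=8):
        # Deliberately tiny capacity so collisions are easy to trigger.
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot holds (key, value) or None

    def _index(self, key):
        return hash(key) % self.capacity

    def put(self, key, value):
        i = self._index(key)
        for _ in range(self.capacity):
            # Free slot, or the same key being updated: store here.
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % self.capacity  # collision: probe the next slot
        raise RuntimeError("table full")

    def get(self, key):
        i = self._index(key)
        for _ in range(self.capacity):
            if self.slots[i] is None:
                raise KeyError(key)
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % self.capacity  # keep probing past other keys
        raise KeyError(key)


# A "2D" hashmap doesn't need nested tables at all: a (row, col) tuple
# works as a single key, so one flat table covers both dimensions.
t = ProbingTable()
t.put((0, 0), "a")
t.put((0, 1), "b")
print(t.get((0, 1)))  # -> b
```

Flattening the two dimensions into one tuple key like this also sidesteps the per-inner-table overhead you would get from a hashmap of hashmaps.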