Does anyone know why the buckets of a HashMap were chosen to be implemented as linked lists rather than as HashMaps themselves? It seems that contains or get would be O(1), not just amortized O(1), if each bucket were itself a HashMap.
Imagine the buckets were implemented as HashMaps, and then there were a lot of collisions. A good hash function should try to eliminate collisions, but think about a very large number of elements. The colliding entries would now be stored in the HashMap implementing the bucket, and that HashMap needs space to hold its own buckets. What if there were so many elements that this inner HashMap had several collisions within its buckets as well? Worse, keys that land in the same outer bucket often share the same hash code, so even with a very good hash function the buckets of the inner HashMap (call it B) end up fighting the same collisions as the outer HashMap A, one of whose buckets B implements. Nesting just pushes the problem down a level.
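To make that concrete, here is a minimal sketch (the class names `BadKey` and `CollisionDemo` are mine, not the JDK's) showing that keys with identical hash codes map to the same slot no matter how often you recompute the index, while a chaining HashMap still stores them all correctly:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key type whose hashCode always collides.
final class BadKey {
    private final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; }  // every instance collides
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

public class CollisionDemo {
    // The index computation a table-based map typically uses
    // (hash masked by a power-of-two capacity).
    public static int bucketIndex(Object key, int capacity) {
        return key.hashCode() & (capacity - 1);
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100; i++) map.put(new BadKey(i), i);
        // Chaining keeps all 100 entries despite identical hashes.
        System.out.println(map.size()); // 100

        // An inner HashMap would recompute the SAME hash, so the colliding
        // keys land in one bucket again; nesting gains nothing.
        System.out.println(
            bucketIndex(new BadKey(1), 16) == bucketIndex(new BadKey(2), 16)); // true
    }
}
```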
A linked list does not have this problem. Also keep in mind that once a bucket table is allocated for a HashMap, that memory is reserved even if the slots are empty, whereas a linked list grows dynamically and never needs more memory than its number of entries.
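A rough way to picture the memory difference (the names `ListBucket` and `MapBucket` are illustrative, not real JDK types): a list bucket allocates one node per entry, while a map-shaped bucket carries a whole slot table up front even when it holds nothing.

```java
// Sketch: a bucket that grows one node per entry.
final class ListBucket<K, V> {
    static final class Node<K, V> {
        final K key; V value; Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }
    Node<K, V> head;
    int size;
    void add(K key, V value) { head = new Node<>(key, value, head); size++; }
}

// Sketch: a bucket that is itself a small hash table and so
// reserves all its slots immediately, used or not.
final class MapBucket<K, V> {
    final Object[] table = new Object[16];  // 16 slots reserved even while empty
}

public class BucketMemoryDemo {
    public static void main(String[] args) {
        ListBucket<String, Integer> list = new ListBucket<>();
        list.add("x", 1);
        list.add("y", 2);
        System.out.println(list.size);                          // 2 nodes for 2 entries
        System.out.println(new MapBucket<>().table.length);     // 16 slots for 0 entries
    }
}
```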
To summarize, you can of course implement whatever you want. You could use an ArrayList or just a plain array, but they have their own problems too (deleting from an ArrayList means shifting elements in O(n) time, and arrays have a fixed size). So, considering all the trade-offs, a linked list has been found to be the best bet.
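For reference, the classic separate-chaining design the answer describes can be sketched as follows. This is a minimal illustration, not the JDK's implementation (which also resizes its table and has other optimizations):

```java
import java.util.Objects;

// Minimal separate-chaining hash map with linked-list buckets.
public class ChainedMap<K, V> {
    private static final class Entry<K, V> {
        final K key; V value; Entry<K, V> next;
        Entry(K key, V value, Entry<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private final Entry<K, V>[] table;

    @SuppressWarnings("unchecked")
    public ChainedMap(int capacity) { table = new Entry[capacity]; }

    private int index(K key) {
        return Math.floorMod(Objects.hashCode(key), table.length);
    }

    public void put(K key, V value) {
        int i = index(key);
        for (Entry<K, V> e = table[i]; e != null; e = e.next) {
            if (Objects.equals(e.key, key)) { e.value = value; return; }
        }
        // Prepend a new node: O(1), no shifting or resizing needed.
        table[i] = new Entry<>(key, value, table[i]);
    }

    public V get(K key) {
        // Walk the chain; O(1) on average when chains stay short.
        for (Entry<K, V> e = table[index(key)]; e != null; e = e.next) {
            if (Objects.equals(e.key, key)) return e.value;
        }
        return null;
    }

    public static void main(String[] args) {
        ChainedMap<String, Integer> m = new ChainedMap<>(8);
        m.put("a", 1);
        m.put("b", 2);
        m.put("a", 3);                 // overwrite in place
        System.out.println(m.get("a")); // 3
        System.out.println(m.get("b")); // 2
    }
}
```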