Reading the Javadoc for ConcurrentHashMap, I came across the statement below, which bothers me about the thread safety of this collection.
From: Class ConcurrentHashMap

Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding upon their onset. For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries.
I find this paragraph self-contradictory. To be precise, the second sentence says retrievals reflect the most recently completed update operations, while the third sentence says that this guarantee does not hold for aggregate operations.

Does this mean aggregate operations like putAll and clear are still a risky bet?
Does this mean aggregate operations like putAll and clear are still a risky bet ?
Their promise that "retrieval operations...do not block" puts some major restrictions on what else they can promise. For example, a map.get(k) call must immediately return either null or some v that was earlier put(k,v) with the same k. The get(k) call can't wait for some other thread to complete a map.putAll(someEnormousOtherMap) call. They promised that it would not block!
Basically, they can't keep that promise unless the only operations that appear to be atomic are the insertions/removals/replacements of individual key/value pairs. The only way aggregate operations can be implemented without breaking the non-blocking get() promise is as non-atomic sequences of calls to the atomic primitives that operate on one key/value pair at a time.
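To see this concretely, here is a minimal sketch (class name and sizes are my own, chosen for illustration) in which one thread runs a large putAll while the main thread reads concurrently. The mid-putAll observations are nondeterministic by design: get may return null or a value, and size may be anywhere between 0 and N, which is exactly the "only some entries" behavior the Javadoc warns about. Once the writer finishes, every entry is visible.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutAllVisibility {
    static final int N = 100_000;
    static final ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        // Build a large source map so putAll takes measurable time.
        Map<Integer, Integer> source = new HashMap<>();
        for (int i = 0; i < N; i++) source.put(i, i);

        Thread writer = new Thread(() -> map.putAll(source));
        writer.start();

        // get() never blocks: it returns immediately with either null
        // (that entry not yet copied in) or a value from a completed put.
        Integer seen = map.get(N - 1);   // may be null or N-1
        int partial = map.size();        // may be anywhere in [0, N]
        System.out.println("mid-putAll: get=" + seen + ", size=" + partial);

        writer.join();
        // After putAll has fully completed, all entries are visible.
        System.out.println("after putAll: size=" + map.size());
    }
}
```

Note that the point is not that putAll is unsafe; each individual insertion it performs is atomic and the map is never corrupted. It is only the aggregate operation as a whole that a concurrent reader may observe half-done.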