import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Test {
    static ConcurrentHashMap<Integer, Integer> map = null;
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public static void open() {
        lock.writeLock().lock();
        try {
            if (map != null) {
                return;
            }
            map = new ConcurrentHashMap<>();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void close() {
        final ConcurrentHashMap<Integer, Integer> concurrentHashMap;
        lock.writeLock().lock();
        try {
            if (map == null) {
                return;
            }
            concurrentHashMap = map;
            map = null;
        } finally {
            lock.writeLock().unlock();
        }
        // deal with concurrentHashMap data
    }

    public static boolean put(final int key, final int value) {
        lock.readLock().lock();
        try {
            if (map == null) {
                return false;
            }
            if (map.putIfAbsent(key, value) != null) {
                return false;
            }
        } finally {
            lock.readLock().unlock();
        }
        return true;
    }

    public static boolean remove(final int key) {
        lock.readLock().lock();
        try {
            if (map == null) {
                return false;
            }
            if (map.remove(key) == null) {
                return false;
            }
        } finally {
            lock.readLock().unlock();
        }
        return true;
    }
}
In the code above, put() and remove() take the readLock because they are called most frequently, while open() and close() take the writeLock and are called rarely. The goal is to improve concurrency. I am not sure whether this is thread-safe, and whether it actually improves concurrency; I think the answer to both is yes. I know that ConcurrentHashMap is itself thread-safe. I want to know whether this implementation is good or bad, and why.
It is thread-safe in one sense. Once close() has been called, further calls to put() and remove() won't affect the state of the map that concurrentHashMap refers to.

However, calls to put() and remove() before the next open() will lead to lost updates. That strikes me as poor design ... given that the ostensible point of open() and close() is to avoid losing updates. This could be a thread-safety issue at another level.
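To make that lost-update window concrete, here is a compressed, hypothetical rework of the class from the question (renamed Registry; not the original code) plus a small driver showing an update being silently dropped between close() and the next open():

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Compressed, hypothetical rework of the class in the question (renamed Registry).
class Registry {
    static ConcurrentHashMap<Integer, Integer> map = null;
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    static void open() {
        lock.writeLock().lock();
        try {
            if (map == null) map = new ConcurrentHashMap<>();
        } finally {
            lock.writeLock().unlock();
        }
    }

    static void close() {
        lock.writeLock().lock();
        try {
            map = null;   // the old map's data would be handed off here
        } finally {
            lock.writeLock().unlock();
        }
    }

    static boolean put(int key, int value) {
        lock.readLock().lock();
        try {
            return map != null && map.putIfAbsent(key, value) == null;
        } finally {
            lock.readLock().unlock();
        }
    }
}

public class LostUpdateDemo {
    public static void main(String[] args) {
        Registry.open();
        Registry.close();
        // This call lands in the "close ... open" window:
        System.out.println(Registry.put(42, 1));   // false: silently dropped
        Registry.open();
        System.out.println(Registry.put(42, 1));   // true: but only because we retried
    }
}
```

The caller gets false back, but nothing forces it to retry, so the value is gone unless the caller handles that case itself.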
On the one hand: I observe that all updates to the map are performed while you hold the lock. Given that, I don't think there is any point in using a ConcurrentHashMap. A regular HashMap (with put() and remove() taking the write lock rather than the read lock, since HashMap is not thread-safe on its own) would be thread-safe and more efficient.

On the other hand: since all updates are performed while holding the lock, the lock is a concurrency bottleneck, and the potential concurrency advantages of using a ConcurrentHashMap are moot.
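For illustration, that plain-HashMap variant might look like the following sketch (my own naming; every mutating method takes the write lock precisely because HashMap is not safe under concurrent modification):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch (hypothetical class name): a plain HashMap, with every mutation
// under the write lock, because HashMap itself is not thread-safe.
public class PlainMapVersion {
    static Map<Integer, Integer> map = null;
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public static void open() {
        lock.writeLock().lock();
        try {
            if (map == null) map = new HashMap<>();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static boolean put(int key, int value) {
        lock.writeLock().lock();   // write lock, not read lock
        try {
            return map != null && map.putIfAbsent(key, value) == null;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static boolean remove(int key) {
        lock.writeLock().lock();
        try {
            return map != null && map.remove(key) != null;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        open();
        System.out.println(put(1, 10));    // true: first insert
        System.out.println(put(1, 20));    // false: key already present
        System.out.println(remove(1));     // true: key was present
    }
}
```

This fully serializes the updates, which is exactly the bottleneck described above; it trades concurrency for simplicity.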
I think I would implement this using an AtomicReference (javadoc) ... and no lock. The trick is to use ref.getAndSet(new ConcurrentHashMap()) to "switch" the existing map for a new empty one.

The AtomicReference will still be a concurrency bottleneck, but to a much lesser degree, and you can avoid the "close ... open" hole by doing the two actions as a single atomic operation.

See @Holger's answer for an example solution using AtomicReference ... noting that his version doesn't address the "close ... open hole" problem.
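For illustration only (my own naming, not @Holger's code), a sketch of that AtomicReference approach might be:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Sketch (hypothetical class name): lock-free swap via AtomicReference.getAndSet.
// There is always a live map, so no update falls into a "closed" window.
public class AtomicSwapVersion {
    private static final AtomicReference<ConcurrentHashMap<Integer, Integer>> ref =
            new AtomicReference<>(new ConcurrentHashMap<>());

    public static boolean put(int key, int value) {
        return ref.get().putIfAbsent(key, value) == null;
    }

    public static boolean remove(int key) {
        return ref.get().remove(key) != null;
    }

    // Replaces the current map with an empty one in a single atomic step and
    // returns the old map so its contents can be processed ("close" + "open").
    public static ConcurrentHashMap<Integer, Integer> drain() {
        return ref.getAndSet(new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        System.out.println(put(1, 1));                 // true
        ConcurrentHashMap<Integer, Integer> old = drain();
        System.out.println(old.containsKey(1));        // true: drained data kept
        System.out.println(put(1, 2));                 // true: fresh map, no gap
    }
}
```

One caveat: a thread that has already fetched the old map via ref.get() can still complete a put into it just after drain() swaps, so the consumer of the drained map may see a few late arrivals; whether that matters depends on how the drained data is processed.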