Tags: multithreading, caching, locking, atomic, lock-free

Do atomic operations become slower as more CPUs are added?


x86 and other architectures provide special atomic instructions (the lock prefix, cmpxchg, etc.) that let you write lock-free data structures. But as more and more cores are added, it seems the work these instructions have to do behind the scenes will grow (at least to maintain cache coherency?). If an atomic add takes ~100 cycles today on a dual-core system, might it take significantly longer on the 80+ core machines of the future? If you're writing code to last, might it actually be a better idea to use locks even if they're slower today?
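For concreteness, this is roughly the kind of operation the question is asking about, written as a minimal C++ sketch (the actual instruction emitted depends on the compiler and target; on x86 this typically becomes a lock-prefixed add or xadd):

```cpp
#include <atomic>

std::atomic<long> counter{0};

void worker() {
    // An atomic read-modify-write: on x86 this usually compiles to
    // a `lock`-prefixed instruction, whose cost includes keeping the
    // cache line coherent across all cores.
    counter.fetch_add(1, std::memory_order_relaxed);
}
```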


Solution

  • You are right that topology constraints will, one way or another, increase the latency of communication between cores once core counts go beyond a couple dozen. I don't really know what the x86 vendors intend to do about that kind of scaling.

    But locks are implemented in terms of atomic operations (see the spinlock sketch after this answer), so you don't really win by switching to them unless they are implemented in a more scalable way than your own hand-rolled atomic operations would be. I think that, generally, for single-token-like contention, atomic primitives will still be the fastest approach regardless of how many cores you have.

    As Cray discovered a long time ago, there's no free lunch here. High-level software design, in which you touch potentially contended resources as infrequently as possible, will always yield the biggest payoff in massively parallel applications. This means doing as much work as possible per lock acquisition, while also holding the lock as briefly as possible. In extreme situations, this can mean pre-calculating your work on the assumption that the lock will be acquired successfully, trying to grab it, completing as fast as possible on success, and throwing the work away and retrying on failure (as sketched in the second example below).
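To illustrate the point that locks are themselves built from atomic operations, here is a minimal spinlock sketch in C++ (an illustration, not a production lock; real mutexes add backoff, fairness, and OS-level blocking on top of the same primitives):

```cpp
#include <atomic>

// A minimal spinlock: the lock itself is built from the same atomic
// read-modify-write primitives discussed above, so switching from
// raw atomics to locks does not avoid them.
class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // Spin until we atomically flip `locked` from false to true.
        while (locked.exchange(true, std::memory_order_acquire)) {
            // A real implementation would pause/yield here to reduce
            // contention on the cache line.
        }
    }
    void unlock() {
        locked.store(false, std::memory_order_release);
    }
};
```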
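And here is one way the "pre-calculate, try to commit, retry on failure" pattern from the last paragraph can look in C++, using an optimistic compare-and-swap loop rather than a lock. `compute_next` is a hypothetical helper standing in for whatever work you would pre-calculate:

```cpp
#include <atomic>

std::atomic<int> shared_state{0};

// Hypothetical helper: derives the new value from a snapshot of the
// old one. The name and logic are illustrative only.
int compute_next(int old_value) { return old_value + 1; }

void optimistic_update() {
    int expected = shared_state.load(std::memory_order_relaxed);
    for (;;) {
        // Pre-calculate the result, assuming we will win the race.
        int desired = compute_next(expected);
        // Try to commit. On failure, `expected` is reloaded with the
        // current value, the speculative work is thrown away, and we
        // recompute and retry.
        if (shared_state.compare_exchange_weak(
                expected, desired,
                std::memory_order_acq_rel,
                std::memory_order_relaxed)) {
            break;  // Committed successfully; done.
        }
    }
}
```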