Lock-free data structures are fast and scalable. When would it make sense to choose locking over a lock-free data structure?
Lock-free data structures are only scalable when contention is low; under high contention they rapidly devolve into negative scaling, because every failed compare-and-swap is work that has to be thrown away and redone.
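To make that concrete, here is a minimal sketch (in Go, with illustrative names) of the usual lock-free retry loop: each failed CAS is a round of wasted work, and the wasted-retry count climbs with the number of threads hammering the same word.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var value uint64   // the shared word every goroutine contends on
	var retries uint64 // failed CAS attempts, i.e. work thrown away

	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				for {
					old := atomic.LoadUint64(&value)
					// The CAS fails whenever another goroutine won the
					// race between our load and our swap; we then loop
					// and redo the work. Under high contention this
					// retry loop is where the scaling goes negative.
					if atomic.CompareAndSwapUint64(&value, old, old+1) {
						break
					}
					atomic.AddUint64(&retries, 1)
				}
			}
		}()
	}
	wg.Wait()
	fmt.Printf("final=%d, wasted retries=%d\n", value, retries)
}
```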
In the spectrum of contention, low contention is best handled by lock-free structures; medium contention by locking; high contention by lock avoidance. These are rules of thumb, at best.
Lock avoidance can cover anything from programming models like dispatch or actors up to fully functional (immutable-data) programming. The latter is still somewhat elusive, but it has had its successes, particularly where correctness is a concern.
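For instance, the actor/dispatch style confines the data to a single owner so the shared state needs no lock at all. A minimal Go sketch (the names are illustrative, not from any particular library), where one goroutine owns a counter and everyone else sends it messages:

```go
package main

import "fmt"

// request is an illustrative message type.
type request struct {
	delta int
	reply chan int
}

func main() {
	requests := make(chan request)

	// The "actor": sole owner of counter, so plain unsynchronized
	// reads and writes are safe; no lock, no CAS.
	go func() {
		counter := 0
		for req := range requests {
			counter += req.delta
			req.reply <- counter
		}
	}()

	reply := make(chan int)
	for i := 0; i < 3; i++ {
		requests <- request{delta: 1, reply: reply}
		fmt.Println("counter is now", <-reply)
	}
	close(requests) // let the owning goroutine exit
}
```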
An age ago I worked on a successful commercial microkernel, which used optimistic locking as its basic mechanism. A kernel call would proceed in read-only mode through the kernel data structures until it determined what the outcome would be. At that point it asserted a flag (compare-and-swap) and proceeded to update the data structures. If a competing entity entered the kernel, the two raced to the assertion point. The loser would be rewound to restart the call; the winner would continue.
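Here is my reconstruction of that scheme as a hedged sketch, not the original kernel's code: a version word stands in for the asserted flag, a balance stands in for the kernel data structures, and the rewind is just a loop back to the top of the call.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// kernelState is a stand-in for the kernel data structures; the
// version word plays the role of the flag (even = quiescent,
// odd = an update is in flight).
type kernelState struct {
	version uint64
	balance int64
}

// call is one optimistic "kernel call": deduct amount if possible.
func call(s *kernelState, amount int64) bool {
	for { // rewind point: a loser restarts the call here
		v := atomic.LoadUint64(&s.version)
		if v&1 == 1 {
			continue // an update is in flight; retry until quiescent
		}
		// Read-only phase: inspect the data and decide the outcome.
		ok := atomic.LoadInt64(&s.balance) >= amount
		// Assertion point: the first CAS to land wins the right to commit.
		if !atomic.CompareAndSwapUint64(&s.version, v, v+1) {
			continue // lost the race; rewind and redo the whole call
		}
		if ok {
			atomic.AddInt64(&s.balance, -amount)
		}
		atomic.AddUint64(&s.version, 1) // back to even: quiescent
		return ok
	}
}

func main() {
	s := &kernelState{balance: 1000}
	var wg sync.WaitGroup
	for g := 0; g < 4; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 300; i++ {
				call(s, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("remaining balance:", atomic.LoadInt64(&s.balance)) // 0
}
```

Note that the pathological case described below is visible in the sketch: a call that keeps losing at the assertion point just keeps being rewound, redoing its read-only phase over and over.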
This mechanism permitted a very low latency kernel, especially on not very impressive hardware; however, pathological cases could emerge where a given kernel call would never complete, or would require a large amount of redundant processing to eventually complete. [If this seems vaguely similar to Linux's RCU (or IBM's RCU that it was based upon), yes, although it predated Linux and AIX by a decade and half a decade, respectively.]
The key is in measurement, but the rules of thumb are still valuable.