Tags: ios, swift, multithreading, grand-central-dispatch

Why doesn't customSerialQueue.sync inside DispatchQueue.main.async deadlock, while DispatchQueue.main.sync does?


I have two code blocks, one using DispatchQueue.main.sync inside DispatchQueue.main.async, and another using a customSerialQueue.sync instead of DispatchQueue.main.sync. The first block causes a deadlock, while the second doesn't. I'm wondering why this is the case.

Here's the first block of code that deadlocks:

DispatchQueue.main.async {
    DispatchQueue.main.sync {
        print("this won't print")
    }
}

The DispatchQueue.main.sync call blocks the current thread (the main thread) until the print closure has run. But that closure is queued on the main queue, which cannot start it until the enclosing async block finishes; the async block, in turn, is stuck inside the sync call. Each is waiting on the other, resulting in a deadlock.

And here's the second block of code that doesn't deadlock:

let customSerialQueue = DispatchQueue(label: "com.example.serialqueue")

DispatchQueue.main.async {
    // main thread
    customSerialQueue.sync {
        print("this will print")  // main thread
    }
}

As far as I understand, calling customSerialQueue.sync reuses the current thread (which happens to be the main thread) to run the block; I confirmed this with breakpoints in Xcode. So I assumed this would also deadlock, just like the first code block.

I wonder what the difference between the two is, and why the second one doesn't deadlock.


Solution

  • As you noted (and as I have noted in a previous answer), dispatching synchronously from a serial queue to itself will always deadlock. And this makes perfect sense. Because it is synchronous, you are blocking the caller’s thread (and because the caller was on a serial queue, blocking that entire serial queue) waiting for additional code dispatched to that very same blocked serial queue to run.

    The sync documentation sums it up concisely:

    Calling this function and targeting the current queue results in deadlock.

    That is, admittedly, an oversimplification (it only guarantees a deadlock if the current queue is a serial queue or you exhaust the worker thread pool), but it confirms your experience.
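The same rule applies to any serial queue, not just the main queue. Here is a minimal sketch (the queue label is invented) that completes as written, but would deadlock if the commented-out line were enabled:

```swift
import Dispatch

let serial = DispatchQueue(label: "com.example.demo")  // hypothetical label
var ran = false

serial.async {
    // serial.sync { print("never runs") }  // uncommenting this deadlocks:
    // the queue is busy running this very block, so the sync'd closure
    // could never start, and sync would never return.
    ran = true
}

serial.sync { }  // acts as a barrier: the async block above has finished
print(ran)  // prints "true"
```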

    Before we get to the question as to why the second example does not deadlock, let’s explain (for readers unfamiliar with the underlying optimization) what is going on. When you dispatch synchronously with sync, the documentation tells us:

    As a performance optimization, this function executes blocks on the current thread whenever possible …

    Effectively, what is going on is that libdispatch is smart enough to say, “hey, if the current thread is going to be blocked anyway, why don’t we just avoid the costly context switch from one thread to another and instead run the dispatched code on the current thread whenever possible.” There are admittedly exceptions to this clever little optimization, but they aren’t relevant here, so I won’t belabor them.

    So, I might rephrase your characterization that sync “reuses the thread.” I think it is more accurate to say that it just never leaves that thread at all if it knows it’s going to be blocked/idle in the interim, anyway. That’s the whole point of the optimization, namely to avoid a costly context switch.
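You can observe this optimization directly by comparing threads (a sketch; the queue label is invented). On a typical run, sync executes the closure on the calling thread:

```swift
import Foundation  // Thread lives in Foundation

let worker = DispatchQueue(label: "com.example.worker")  // hypothetical label
let callingThread = Thread.current
var ranInline = false

worker.sync {
    // Thanks to the optimization, this closure normally runs right here on
    // the calling thread; no context switch to a worker thread occurs.
    ranInline = (Thread.current == callingThread)
}
print(ranInline)
```

Setting a breakpoint inside the closure, as the question did, shows the same thing: the backtrace stays on the caller’s thread.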

    In short, the answer to your original question is that in your first example you are dispatching synchronously to a blocked queue, whereas in the second example you are dispatching synchronously to a different, unblocked queue. It is mildly interesting that the sync optimization lets this latter example continue execution on the current thread, but that is not relevant here. The queue to which you are dispatching in the second example is not blocked.
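As a practical footnote, a common way to make sync safe when the caller might already be on the target queue is to tag the queue with a queue-specific key and run the closure directly in that case. A sketch of that well-known pattern (the helper name and queue label are made up):

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.safesync")  // hypothetical label
let onQueueKey = DispatchSpecificKey<Bool>()
queue.setSpecific(key: onQueueKey, value: true)

// Hypothetical helper: dispatch sync if we are off the queue, but run the
// work inline if we are already on it, avoiding the same-queue deadlock.
func performSync<T>(_ work: () -> T) -> T {
    if DispatchQueue.getSpecific(key: onQueueKey) == true {
        return work()
    } else {
        return queue.sync(execute: work)
    }
}

var results: [Int] = []
results.append(performSync { 1 })      // off-queue: uses queue.sync
queue.sync {
    results.append(performSync { 2 })  // on-queue: runs inline, no deadlock
}
print(results)  // prints "[1, 2]"
```

Note that dispatchPrecondition(condition:) can also assert queue membership in debug builds, but it traps rather than branches, which is why the queue-specific key pattern is used here.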