Tags: operating-system, deadlock

How can the Resource-Allocation-Graph Algorithm prevent deadlocks?


According to the Operating System Concepts book, the Resource-Allocation-Graph Algorithm can prevent deadlocks as follows:

Suppose we have the following resource-allocation graph: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/images/Chapter7/7_07_DeadlockAvoidance.jpg

If P1 then requests resource R2, the system denies the request and makes P1 wait, because granting it would lead to an unsafe state.

My question: as shown in the graph, P2 is waiting for P1 to release R1, and P1 is now waiting for R2, which leads to a deadlock. How can this algorithm prevent this type of deadlock?


Solution

  • I don't have a copy of your book, but I suspect a typo. The idea is to return an error (EDEADLOCK) to the resource-allocation request that would complete the cycle, thus detecting an impending deadlock rather than actively avoiding it. It is still up to the process with the failed request to take some corrective action, such as dropping all the resources it holds and trying to re-acquire them (see the first sketch after this answer).

    If you replace "resources" with semaphores or mutexes, it should be clear that waiting isn't going to help anything.

    To actively avoid deadlock, you pretty much need to either use semaphore sets -- that is, acquire all the locks that a particular code path will need in one atomic operation (see System V semaphores) -- or arrange your code to use a consistent ordering of locks. An example of the latter is to acquire locks in order of increasing address, so that all actors attempt the acquisitions in the same order (see the second sketch below). Neither approach is practical for fine-grained general-purpose code, but both are workable for transaction-processing applications.
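
To make the detect-and-report idea concrete, here is a minimal sketch using POSIX fcntl() record locks, one of the few standard interfaces where the kernel actually performs this cycle check and fails a blocking request with EDEADLK. The helper lock_range and the back-off policy are illustrative assumptions, not anything from the book:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Request a blocking write lock on a byte range of an already-open file.
 * If granting the request would complete a wait-for cycle, the kernel
 * refuses it and fcntl() fails with errno set to EDEADLK. */
static int lock_range(int fd, off_t start, off_t len)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = F_WRLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = start;
    fl.l_len    = len;

    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        if (errno == EDEADLK) {
            /* Deadlock detected: the caller should release the locks it
             * already holds and retry the whole sequence from scratch. */
            fprintf(stderr, "deadlock detected, backing off\n");
            return -1;
        }
        perror("fcntl");
        return -1;
    }
    return 0;   /* lock granted */
}
```

Note that the kernel only reports the problem; the recovery (drop everything and retry) is still the caller's responsibility, which is exactly the point made above.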
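
And a minimal sketch of the address-ordering idea with POSIX threads. The helper lock_pair is hypothetical; the cast to uintptr_t is only there to make the pointer comparison well defined:

```c
#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a globally consistent order (by address), so
 * that any two threads needing both always lock them in the same order
 * and can never end up waiting on each other in a cycle. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a == b) {                       /* same mutex: lock it once */
        pthread_mutex_lock(a);
        return;
    }
    if ((uintptr_t)a > (uintptr_t)b) {  /* order by address */
        pthread_mutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);              /* lower address first */
    pthread_mutex_lock(b);              /* then higher address */
}
```

The unlock order does not matter for deadlock avoidance; only the acquisition order has to be consistent across all threads.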