Deadlock Prevention

Deadlocks can only occur if all four of the following conditions are met. Therefore, strategies to prevent deadlocks focus on negating one of these conditions.

| **Condition** | **Description** | **Solutions** | **Dangers** |
| --- | --- | --- | --- |
| Mutual Exclusion / Mutex | Resources cannot be shared between threads, and there are fewer resources than threads. | 1) Use concurrently accessible resources such as AtomicInteger (see the first sketch below). 2) Increase the number of resources until it is greater than or equal to the number of competing threads. 3) Check that each required resource is available before starting the task. | |
| Lock & Wait | Once a thread has acquired a resource, it does not release it until it has acquired all the other resources it needs and has completed its work. | Before reserving a resource, check its availability. If any resource is unavailable, release all resources and start over (see the second sketch below). | 1) Starvation: a thread never manages to reserve all the resources it needs. 2) Livelock: threads repeatedly release and retry without making progress. These approaches are always applicable, but inefficient because they degrade performance. |
| No Preemption | A thread cannot take away a resource reserved by another thread. | A thread may ask another thread to release all of its resources (including the required one) and start over. This approach is similar to the 'Lock & Wait' solution but performs better. | |
| Circular Wait | A closed chain of threads exists in which each thread holds a resource requested by the next thread in the chain. | Impose a global order on resource acquisition so that all threads acquire their resources in the same order (see the third sketch below). | |
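
To make solution 1) from the mutual exclusion row concrete, here is a minimal sketch of a lock-free counter. The class and method names are illustrative; only AtomicInteger itself comes from the table:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A shared counter that needs no lock: since no thread ever holds
// a mutex here, the mutual exclusion condition cannot arise at all.
public class HitCounter {
    private final AtomicInteger hits = new AtomicInteger(0);

    public void registerHit() {
        hits.incrementAndGet(); // atomic read-modify-write, no lock
    }

    public int currentHits() {
        return hits.get();
    }
}
```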
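
The 'Lock & Wait' solution maps naturally onto tryLock from java.util.concurrent.locks: attempt every lock, and if any attempt fails, release everything and start over. This is a sketch with illustrative names, not code from the page:

```java
import java.util.concurrent.locks.Lock;

public class CheckAndRetry {
    /** Runs work only once both locks are held; if the second lock
     *  is unavailable, the first is released and everything restarts. */
    static void withBothLocks(Lock first, Lock second, Runnable work) {
        while (true) {
            if (first.tryLock()) {
                try {
                    if (second.tryLock()) {
                        try {
                            work.run();
                            return;
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();
                }
            }
            // All locks are released at this point; yielding before the
            // retry slightly reduces the livelock risk noted in the table.
            Thread.yield();
        }
    }
}
```

Note that this pattern carries exactly the dangers listed in the table: an unlucky thread may starve, and two threads can livelock by releasing and retrying in lockstep.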
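
For circular wait, the standard negation is a global acquisition order. The sketch below orders two locks by identity hash code (ignoring the rare tie case for brevity); the class name is hypothetical:

```java
import java.util.concurrent.locks.Lock;

public class OrderedLocking {
    // Because every thread takes the locks in the same global order,
    // a circular chain of waiting threads can never form.
    static void lockInOrder(Lock a, Lock b, Runnable work) {
        Lock lo = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        Lock hi = (lo == a) ? b : a;
        lo.lock();
        try {
            hi.lock();
            try {
                work.run();
            } finally {
                hi.unlock();
            }
        } finally {
            lo.unlock();
        }
    }
}
```
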
#### Lock & Wait

* Description: