Changes for page Concurrency
Last modified by chrisby on 2024/06/02 15:15
### Why concurrency?

* **Concurrency is a decoupling strategy**. The what is decoupled from the when.
* **Concurrency can improve the throughput and structure** of an application.

### Why not concurrency?

* **Unclean**: It is hard to write clean concurrent code, and it is harder to test and debug.
* **Design Changes**: Concurrency doesn't always improve performance, but it always requires fundamental design changes.
* **Extra Management**: Concurrency demands a certain amount of management effort, which degrades performance and requires additional code.
* **Complexity**: Proper concurrency is complex, even for simple problems.
* **Unreproducible**: Concurrency bugs are usually not reproducible; therefore, they are often written off as one-time occurrences (cosmic rays, glitches, etc.) rather than treated as the true defects they are.
* **Side-Effects**: When threads access out-of-sync data, incorrect results may be returned (see the sketch after this list).
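As a minimal sketch of such a side effect (all names here are illustrative, not from the original page): two threads increment a shared counter. The unsynchronized `count++` is a read-modify-write and usually loses updates, while the `AtomicInteger` variant stays correct.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of a race condition: two threads each increment a shared
// counter 100_000 times. "unsafeCount++" is a read-modify-write, so updates
// can be lost; the AtomicInteger variant never loses any.
public class RaceConditionDemo {
    static int unsafeCount = 0;                       // shared, unsynchronized
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                        // lost updates possible
                safeCount.incrementAndGet();          // atomic, always correct
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Typically prints an unsafe count below 200000, but safe = 200000.
        System.out.println("unsafe: " + unsafeCount + ", safe: " + safeCount.get());
    }
}
```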
### Principles of Defensive Concurrency Programming

* **Single-Responsibility Principle**
    * **Separation of code**: Changes to concurrent code should not be mixed with changes to the rest of the code, so separate the two cleanly.
    * **Separation of change**: Concurrent code has special problems that are different from, and often more serious than, those of sequential code. This means that concurrent and sequential code should be changed separately, not within the same commit, or even within the same branch.
* **Principle of Least Privilege**: Limit concurrent code to the resources it actually needs, to avoid side effects. Minimize the amount of shared resources. Divide code blocks and resources into smaller blocks to apply more granular, and therefore more restrictive, resource access.
* **Data Copies**: You can sometimes avoid shared resources either by working with copies of data and treating them as read-only objects, or by making multiple copies of data, having multiple threads compute results on them, and merging those results in a single thread. It is often worth creating multiple objects to avoid concurrency problems.
* **Independence**: Threads should be as independent as possible. Threads should not share their data or know anything about each other. Instead, they should prefer to work with their own local variables. Try to break data into independent subsets that can be processed by independent threads, possibly in different processes.

### Things to learn before working with concurrency

* Get to know your library

...

* Cloning data to avoid side effects
* Side effects
* Producer-consumer
* Reader-Writer
* Dining philosophers problem → Study these algorithms and their application in solutions.

A few more suggestions from ChatGPT:

Learning about concurrency algorithms and common concurrency problems is a great way to deepen your understanding of concurrent programming. Here's a list of key algorithms and problems, along with their typical solutions:

**Concurrency Algorithms:**

* **Producer-Consumer**
    * Problem: How to handle scenarios where one or more threads (producers) produce data and one or more threads (consumers) consume it.
    * Solution: Use buffers, queues, semaphores, or condition variables to synchronize producers and consumers (a queue-based sketch follows this list).
* **Readers-Writers**
    * Problem: How to manage access to a shared resource where some threads (readers) only read data and others (writers) write data.
    * Solution: Implement mechanisms that allow multiple readers to access the resource simultaneously while writers get exclusive access.
* **Dining Philosophers**
    * Problem: A classic synchronization problem dealing with resource allocation and avoiding deadlocks.
    * Solution: Strategies include a resource hierarchy, an arbitrator, or limiting the number of philosophers.
* **Barriers**
    * Problem: Synchronizing a group of threads so that they wait until all of them have reached a certain point in their execution.
    * Solution: Use barrier constructs that block threads until all have reached the barrier.

**Concurrency Problems and Solutions:**

* **Deadlocks**
    * Problem: Multiple threads or processes wait on each other to release resources, and none of them can proceed.
    * Solution: Deadlock prevention techniques (like resource ordering), deadlock avoidance (like the Banker's algorithm), and deadlock detection and recovery (a lock-ordering sketch follows this list).
* **Race Conditions**
    * Problem: The outcome of a program depends on the relative timing of threads or processes.
    * Solution: Use mutual exclusion (mutexes), atomic operations, or transactional memory to ensure that only one thread can access the shared resource at a time.
* **Livelocks**
    * Problem: Threads or processes actively perform concurrent operations, but these operations do not progress the state of the program.
    * Solution: Careful algorithm design that ensures progress and avoids situations where processes continuously yield to each other.
* **Starvation**
    * Problem: A thread or process does not get the resources it needs to proceed, while others continue to be serviced.
    * Solution: Implement fair locking mechanisms, priority scheduling, or resource allocation strategies that ensure all processes get a chance to proceed.
* **Priority Inversion**
    * Problem: A lower-priority thread holds a resource needed by a higher-priority thread, so the higher-priority thread waits unexpectedly.
    * Solution: Priority inheritance protocols, where the lower-priority thread temporarily inherits the higher priority.
* **Thread Interference**
    * Problem: Multiple threads access and modify shared data, causing unexpected results.
    * Solution: Protect critical sections of code that access shared resources with synchronization mechanisms like locks.
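A minimal sketch of the resource-ordering technique named in the Deadlocks entry above (class and method names are illustrative assumptions, not from the original page): every thread acquires the two locks in the same global order, so the "each holds one lock and waits for the other" cycle cannot form.

```java
// Minimal sketch of deadlock prevention via resource ordering. A deadlock
// needs a cycle: thread 1 takes A then waits for B while thread 2 takes B
// then waits for A. Enforcing one global acquisition order (A before B)
// makes that cycle impossible.
public class LockOrderingDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    static void task1() {
        synchronized (LOCK_A) {          // always A first...
            synchronized (LOCK_B) {      // ...then B
                // ... work with both resources ...
            }
        }
    }

    static void task2() {
        synchronized (LOCK_A) {          // same order as task1, never B-then-A
            synchronized (LOCK_B) {
                // ... work with both resources ...
            }
        }
    }

    public static void main(String[] args) {
        new Thread(LockOrderingDemo::task1).start();
        new Thread(LockOrderingDemo::task2).start();
    }
}
```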
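And for the Producer-Consumer entries above, a minimal sketch using the standard `java.util.concurrent.BlockingQueue` as the synchronized buffer; the bounded queue size and the poison-pill shutdown are illustrative choices, not from the original page.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer-consumer sketch using a bounded blocking queue as the
// buffer: put() blocks when the queue is full, take() blocks when empty,
// so the two threads stay synchronized without explicit locks.
public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);              // blocks if the buffer is full
                }
                queue.put(-1);                 // poison pill: signals "done"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take();   // blocks if the buffer is empty
                    if (item == -1) break;     // stop on the poison pill
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```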
### Watch out for dependencies between synchronized methods

* Dependencies between synchronized methods in concurrent code cause subtle bugs (see the sketch below).
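A minimal sketch of this pitfall (the `Account` class is a hypothetical example, not from the original page): each method is thread-safe on its own, but client code that depends on two synchronized methods in sequence is still racy, because another thread can interleave between the two calls.

```java
// Each method is individually synchronized, yet the check-then-act
// combination in withdrawIfCoveredBroken() is not atomic: another thread
// can withdraw between the balance() check and the withdraw() call,
// overdrawing the account.
public class Account {
    private long balance = 100;

    public synchronized long balance() { return balance; }

    public synchronized void withdraw(long amount) { balance -= amount; }

    // BROKEN: depends on two synchronized methods, but is not atomic itself.
    public boolean withdrawIfCoveredBroken(long amount) {
        if (balance() >= amount) {   // another thread may run here...
            withdraw(amount);        // ...so this can overdraw the account
            return true;
        }
        return false;
    }

    // FIXED: one synchronized method covers the whole check-then-act.
    public synchronized boolean withdrawIfCovered(long amount) {
        if (balance >= amount) {
            balance -= amount;
            return true;
        }
        return false;
    }
}
```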