Changes for page Concurrency

Last modified by chrisby on 2024/06/02 15:15

From version 1.8
edited by chrisby
on 2023/11/26 20:59
Change comment: There is no comment for this version
To version 1.15
edited by chrisby
on 2023/11/30 21:05
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -1,72 +1,45 @@
1 -## Concurrency
1 +Objects are abstractions of processing, **threads are abstractions of timing**.
2 2  
3 -* Objects are abstractions of processing, threads are abstractions of timing.
3 +### Why Concurrency?
4 4  
5 -### Why concurrency?
5 +* **Concurrency is a decoupling strategy**. The what is decoupled from the when.
6 +* **Concurrency can improve the throughput and structure** of an application.
6 6  
7 -* Concurrency is a decoupling strategy. The what is decoupled from the when. Concurrency is important and can improve the throughput and structure of an application. On the other hand, it is hard to write clean concurrent code and it is harder to debug. -> own: And to test, right?
8 -* Concurrency doesn't always improve performance behavior and but it always changes the design of the program. When working with containers, you should know exactly what you are doing.
9 -* Concurrency demands a certain amount of management effort, which degrades performance behavior and requires additional code. Proper concurrency is complex, even for simple problems. Concurrency bugs are usually not reproducible; therefore, they are often written off as one-time occurrences (cosmic rays, glitches, etc.) rather than treated as true defects, as they should be. Concurrency requires a fundamental change in design strategy.
10 -* Challenges: When threads access out-of-sync data, incorrect results may be returned.
8 +### Why Not Concurrency?
11 11  
12 -### Principles of defensive concurrency programming
10 +* **Unclean**: It is hard to write clean concurrent code, and it is harder to test and debug.
11 +* **Design Changes**: Concurrency doesn't always improve performance, but it always requires fundamental design changes.
12 +* **Extra Management**: Concurrency demands a certain amount of management effort, which degrades performance behavior and requires additional code.
13 +* **Complexity**: Proper concurrency is complex, even for simple problems.
14 +* **Unreproducible**: Concurrency bugs are usually not reproducible; therefore, they are often written off as one-time occurrences (cosmic rays, glitches, etc.) rather than treated as true defects, as they should be.
15 +* **Side-Effects**: When threads access out-of-sync data, incorrect results may be returned.
13 13  
14 -* SRP
15 - * Changes to concurrent code should not be mixed with changes to the rest of the code. So you should separate the two cleanly.
16 - * Concurrency has its own life cycle with development, modification, and polish.
17 - * Concurrent code is associated with special problems that have a different form and often a higher degree of difficulty than non-concurrent code.
18 -* Constrain the range of validity of data
19 - * Take data encapsulation to heart; restrict access to all shared resources.
20 - * So you should keep the mass of shared code low, and shared resources should only be claimed by threads that need them. That means one should divide the blocks and resources into smaller blocks if necessary.
21 -* Work with copies of the data
22 - * One can sometimes get around data sharing by:
23 - * working with copies of objects and treating them as read-only objects.
24 - * making multiple copies of an object, having multiple threads compute results on it, and merging those results into a single thread.
25 - * It is often worth creating multiple objects to avoid concurrency issues.
26 -* Threads should be as independent of each other as possible.
27 - * Threads should not share data or know anything about each other. Instead, they should prefer to work with their own local variables.
28 - * Try to decompose data into independent subsets that can be processed by independent threads, possibly in different processes.
17 +### Defensive Concurrency Programming
29 29  
30 -### Things to learn before working with concurrency
19 +* **Single-Responsibility Principle**
20 + * **Separation of code**: Changes to concurrent code should not be mixed with changes to the rest of the code. So you should separate the two cleanly.
21 + * **Separation of change**: Concurrent code has special problems that are different from, and often more serious than, those of sequential code. This means that concurrent and sequential code should be changed separately, not within the same commit, or even within the same branch.
22 +* **Principle of Least Privilege**: Limit concurrent code to the resources it actually needs to avoid side effects. Minimize the amount of shared resources. Divide code blocks and resources into smaller blocks to apply more granular, and therefore more restrictive, resource access.
23 +* **Data Copies**: You can sometimes avoid shared resources by either working with copies of data and treating them as read-only, or by making multiple copies of the data, having multiple threads compute results on them, and merging those results into a single thread. It is often worth creating multiple objects to avoid concurrency problems.
24 +* **Independence**: Threads should be as independent as possible. Threads should not share their data or know anything about each other. Instead, they should prefer to work with their own local variables. Try to break data into independent subsets that can be processed by independent threads, possibly in different processes.
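The "data copies" and "independence" points above can be sketched in Java. This is a minimal, hypothetical example (the class name `CopySum` and the chunking scheme are assumptions of this sketch, not from the original notes): each task sums its own read-only copy of a sublist, so no locking is needed, and a single thread merges the partial results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: avoid shared mutable state by giving each task
// its own read-only copy of a data chunk, then merging in one thread.
public class CopySum {
    public static long parallelSum(List<Integer> data, int chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        try {
            List<Future<Long>> partials = new ArrayList<>();
            int chunkSize = (data.size() + chunks - 1) / chunks;
            for (int i = 0; i < data.size(); i += chunkSize) {
                // Each task works on its own copy; threads share nothing mutable.
                List<Integer> copy = new ArrayList<>(
                        data.subList(i, Math.min(i + chunkSize, data.size())));
                partials.add(pool.submit(
                        () -> copy.stream().mapToLong(Integer::longValue).sum()));
            }
            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // merge in one thread
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 1; i <= 100; i++) data.add(i);
        System.out.println(parallelSum(data, 4)); // prints 5050
    }
}
```

Creating the copies costs some memory, but, as the notes say, that is often a worthwhile price for avoiding concurrency problems entirely.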
31 31  
32 -* Get to know your library
33 - * Use the thread-safe collections provided.
34 - * Use the executor framework to execute disjointed tasks.
35 - * Use non-blocking solutions if possible.
36 - * Multiple library classes are not thread-safe.
37 -* Thread-safe collections
38 - * So you should use ConcurrentHashMap instead of HashMap.
39 - * Author's recommendations: java.util.concurrent, java.util.concurrent.atomic, java.util.concurrent.locks.
40 -* Get to know execution models
41 - * Basic definitions
42 - * Bound Resources
43 - * Mutual Exclusion
44 - * Starvation
45 - * Deadlock
46 - * Livelock
47 - * Thread Pools
48 - * Future
49 - * Synchronization: General term for techniques that control the access of multiple threads to shared resources.
50 - * Race Condition: A situation where the system's behavior depends on the relative timing of events, often leading to bugs.
51 - * Semaphore: An abstract data type used to control access to a common resource by multiple threads.
52 - * Locks: Mechanisms to ensure that only one thread can access a resource at a time.
53 - * Atomic Operations: Operations that are completed in a single step relative to other threads.
54 - * Thread Safety
55 - * Producer-consumer
56 - * Reader-recorder vs Reader Writer??
57 - * Philosopher problem → Study algorithms and their application in solutions.
26 +### Basic Knowledge
58 58  
28 +Before starting to write concurrent code, get familiar with the following basics:
29 +
30 +* **Libraries**: Use the thread-safe collections provided. Use non-blocking solutions if possible. Be aware that many library classes are not thread-safe.
31 +* **Concepts**: Mutual Exclusion, Deadlock, Livelock, Thread Pools, Semaphores, Locks, Race Conditions, Starvation
32 +* **Patterns**: Producer-consumer, Reader-Writer
33 +* **Algorithms**: Study common algorithms and their use in solutions. For example, the Dining Philosophers problem.
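One of the listed patterns, producer-consumer, can be sketched with the thread-safe `ArrayBlockingQueue` from `java.util.concurrent` instead of hand-rolled wait/notify code. The class name `ProducerConsumer` and the poison-pill shutdown are assumptions of this sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical producer-consumer sketch built on a thread-safe library queue.
public class ProducerConsumer {
    private static final int POISON = -1; // sentinel that stops the consumer

    public static List<Integer> run(int n) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // tiny buffer forces blocking
        List<Integer> consumed = new ArrayList<>(); // touched only by the consumer until join()

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i); // blocks while the queue is full
                queue.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                int item;
                while ((item = queue.take()) != POISON) consumed.add(item); // blocks while empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join(); // join() also makes 'consumed' safely visible here
        } catch (InterruptedException e) { throw new RuntimeException(e); }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(run(5)); // prints [1, 2, 3, 4, 5]
    }
}
```

All of the blocking, signaling, and mutual exclusion lives inside the library queue, which is exactly the "use the thread-safe collections provided" advice above.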
34 +
59 59  ### Watch out for dependencies between synchronized methods
60 60  
61 -* Dependencies between synchronized methods in concurrent code cause subtle bugs.
62 -* Avoid applying more than one method to a shared object. If this is not possible, you have three options:
63 - * Client-based locking: the client should lock the server before the first method is called and ensure that the lock includes the code that calls the last method.
64 - * Server-based locking: Create a method in the server that locks the server, calls all methods, and then unlocks the server. Have the client call the new method.
65 - * Adapted Server: Create an intermediate component that performs the lock. This is a variant of server-based locking if the original server cannot be changed.
66 -* Keep synchronized sections small.
67 - * Locks are expensive because they add administrative overhead to delays. On the other hand, critical sections must be protected.
68 - * Critical sections, are parts of the code that are only executed correctly if several threads do not access it at the same time.
69 - * Keep synchronized sections as small as possible.
37 +* **Avoid dependencies between synchronized methods**: Synchronized means that only one thread at a time can execute an object's synchronized methods. In concurrent code, dependencies between synchronized methods, for example when one calls another, can cause subtle bugs such as deadlocks and performance issues.
38 +* **Avoid applying more than one method to a shared object.** If this is not possible, you have three options:
39 + * **Client-based locking**: The client locks the server, calls all the server methods, and then releases the lock.
40 + * **Server-based locking**: Create a method in the server that locks the server, calls all the methods, and then unlocks the server. A client can now safely call this new method.
41 + * **Adapted Server**: Create an intermediate component to perform the lock. This is a variant of server-based locking when the original server cannot be changed.
42 +* **Keep synchronized sections small.** Locks are expensive because they add administrative overhead and delay. On the other hand, critical sections need to be protected. Critical sections are pieces of code that will only run correctly if they are not executed by multiple threads at the same time. Keeping synchronized sections small balances both concerns.
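The server-based locking option can be sketched as follows; the `Account` class and its method names are hypothetical, not from the original notes. The point is that the compound check-then-act sequence runs entirely under the server's own lock, so no client can interleave between the check and the withdrawal.

```java
// Hypothetical server-based locking sketch: instead of asking every client
// to lock the account around the compound operation, the server (Account)
// offers one synchronized method that performs the whole sequence itself.
public class Account {
    private int balance;

    public Account(int initialBalance) { this.balance = initialBalance; }

    public synchronized int getBalance() { return balance; }

    public synchronized void withdraw(int amount) { balance -= amount; }

    // The check and the withdrawal happen under a single lock, so no other
    // thread can change the balance between them. The nested call to
    // withdraw() is safe because Java's intrinsic locks are reentrant.
    public synchronized boolean withdrawIfCovered(int amount) {
        if (balance >= amount) {
            withdraw(amount);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Account account = new Account(100);
        System.out.println(account.withdrawIfCovered(60)); // prints true
        System.out.println(account.withdrawIfCovered(60)); // prints false (would overdraw)
        System.out.println(account.getBalance());          // prints 40
    }
}
```

With client-based locking, every caller would instead have to wrap the two calls in `synchronized (account) { ... }` and hope that all other clients do the same; moving the lock into the server keeps the synchronized section small and in one place.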
70 70  
71 71  ### Writing correct shutdown code is difficult
72 72