Bagus' OS202 Page!

Join my adventure in exploring Operating Systems!

TOP 10 LIST WEEK 07

1. Synchronization
Process synchronization is the coordination of processes that use shared data. It occurs in an operating system among cooperating processes, that is, processes that share resources. When many processes execute concurrently, synchronization keeps the shared data consistent and the cooperating processes running in an orderly way.
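
As a minimal sketch (POSIX threads are my own choice here, not something the page specifies), two threads coordinating access to a shared counter with a mutex look like this:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter; the mutex keeps the
   read-modify-write sequence consistent across threads. */
static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the mutex */
    return 0;
}
```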

2. Critical Section
When more than one process accesses the same code segment, that segment is known as a critical section. A critical section contains shared variables or resources that must be synchronized to keep the data consistent.
In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read it (a data race across threads), the result is unpredictable.
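
To make the race concrete, here is a sketch (again assuming POSIX threads) of the same shared counter incremented with no synchronization at all; the final value varies from run to run because updates from the two threads interleave and get lost:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;   /* shared variable touched by the critical section */

/* counter++ is really load, add, store; without synchronization the two
   threads' load/add/store sequences can interleave and lose increments. */
static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                      /* unprotected critical section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```

Protecting the increment with a mutex or a semaphore, as in points 1 and 4, makes the result deterministic.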

3. Peterson’s Solution
Peterson’s solution provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.
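
Below is a sketch of Peterson's algorithm for two processes with ids 0 and 1. C11 atomics (sequentially consistent by default) stand in for the textbook assumption that loads and stores are not reordered; the names enter_region and leave_region are my own:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* flag[i] means "process i wants to enter"; turn says who yields. */
static atomic_bool flag[2];
static atomic_int  turn;
static long counter = 0;

static void enter_region(int i) {            /* i is 0 or 1 */
    int other = 1 - i;
    atomic_store(&flag[i], true);            /* announce intent */
    atomic_store(&turn, other);              /* let the other go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                    /* busy-wait */
}

static void leave_region(int i) {
    atomic_store(&flag[i], false);           /* no longer interested */
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int n = 0; n < 100000; n++) {
        enter_region(id);
        counter++;                           /* critical section */
        leave_region(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = { 0, 1 };
    pthread_create(&t[0], NULL, worker, &ids[0]);
    pthread_create(&t[1], NULL, worker, &ids[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld\n", counter);      /* 200000: mutual exclusion held */
    return 0;
}
```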

4. Semaphores
A semaphore is simply a non-negative variable shared between threads. A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. It uses two atomic operations, wait and signal, for process synchronization.
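
As a minimal sketch using POSIX semaphores (an assumption; the page does not name a specific API), a binary semaphore initialized to 1 acts as a lock, with sem_wait as the wait operation and sem_post as the signal operation:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t sem;
static long balance = 0;

/* The semaphore guards the critical section around balance. */
static void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);     /* wait: decrement, block if the value is 0 */
        balance++;
        sem_post(&sem);     /* signal: increment, wake a waiting thread */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);   /* pshared=0: threads of one process; initial value 1 */
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %ld\n", balance);
    sem_destroy(&sem);
    return 0;
}
```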

5. Deadlocks
A deadlock is a situation in which each process in a set waits for a resource that is assigned to another process. None of these processes can execute, because the resource each one needs is held by another process that is itself waiting for some other resource to be released.
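
The sketch below (POSIX threads, illustrative names) shows the classic two-lock deadlock: each thread holds one mutex and waits for the one the other thread holds. Whether a given run actually hangs depends on timing:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes A then B; thread 2 takes B then A. If each grabs its
   first lock before the other releases, both block forever: a deadlock. */
static void *thread1(void *arg) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   /* may wait forever for thread2 to release B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);   /* may wait forever for thread1 to release A */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);        /* may never return */
    pthread_join(t2, NULL);
    puts("no deadlock this run");
    return 0;
}
```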

6. Banker’s Algorithm
Banker's algorithm is a deadlock-avoidance algorithm. It is so named because it models a banker who decides whether a loan can be granted without leaving the bank unable to satisfy all of its customers. Whenever a new process enters the system, it must declare in advance the maximum number of instances of each resource type that it may need.
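
Here is a sketch of the safety check at the heart of the Banker's algorithm; the matrix sizes and the example state are the common textbook numbers, used only for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

#define P 5   /* number of processes (example size) */
#define R 3   /* number of resource types */

/* Banker's safety algorithm: decide whether the current allocation state
   is safe, i.e. whether some order exists in which every process can
   finish. need[i][j] = max[i][j] - alloc[i][j]. */
static bool is_safe(int available[R], int max[P][R], int alloc[P][R]) {
    int need[P][R], work[R], sequence[P], count = 0;
    bool finished[P] = { false };

    for (int j = 0; j < R; j++) work[j] = available[j];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    while (count < P) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;              /* is need_i <= work ? */
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)   /* the process finishes and  */
                    work[j] += alloc[i][j];   /* returns all its resources */
                finished[i] = true;
                sequence[count++] = i;
                found = true;
            }
        }
        if (!found) return false;             /* no process can proceed: unsafe */
    }

    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", sequence[i]);
    printf("\n");
    return true;
}

int main(void) {
    /* A widely used textbook example state, for illustration only. */
    int available[R] = { 3, 3, 2 };
    int max[P][R]   = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };

    printf("state is %s\n", is_safe(available, max, alloc) ? "safe" : "unsafe");
    return 0;
}
```

With these numbers the check reports a safe sequence such as P1, P3, P4, P0, P2.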

7. Deadlock Prevention
Deadlock prevention algorithms ensure that at least one of the necessary conditions (mutual exclusion, hold and wait, no preemption, and circular wait) does not hold. However, most prevention algorithms have poor resource utilization and therefore reduce throughput.
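
One common prevention technique is to break the circular-wait condition by imposing a fixed global ordering on locks. In the sketch below (POSIX threads, names assumed), both threads acquire lock_a before lock_b, so no cycle of "holds one, waits for the other" can form:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* rank 1 */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* rank 2 */

/* Both threads take locks in the same global order (A, then B), so
   neither can hold B while waiting for A: circular wait is impossible. */
static void *worker(void *arg) {
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        /* ... use both shared resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    puts("finished without deadlock");
    return 0;
}
```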

8. Deadlock Avoidance
Deadlock avoidance merely works to avoid deadlock; it does not totally prevent it. The basic idea is to allocate resources only if the resulting global state is a safe state. In other words, unsafe states are avoided, which means deadlock is avoided as well. One famous algorithm for deadlock avoidance in the uniprocessor case is the Banker's Algorithm.
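
To show the avoidance step itself, here is a sketch of a resource-request check in the spirit of the Banker's algorithm: tentatively grant the request, run a safety check like the one in point 6, and roll back if the resulting state is unsafe. The sizes and numbers are made up for illustration:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 3   /* processes (small illustrative sizes) */
#define R 2   /* resource types */

static int available[R];
static int max_claim[P][R];
static int alloc[P][R];

/* Safety check: can all processes finish in some order from this state? */
static bool state_is_safe(void) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof work);
    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)
                if (max_claim[i][j] - alloc[i][j] > work[j]) { ok = false; break; }
            if (ok) {
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;            /* nobody can finish: unsafe */
    }
    return true;
}

/* Avoidance step: grant a request only if the resulting state is safe;
   otherwise roll back and make the process wait. */
static bool request_resources(int p, const int request[R]) {
    for (int j = 0; j < R; j++)
        if (request[j] > max_claim[p][j] - alloc[p][j] || request[j] > available[j])
            return false;                       /* exceeds claim, or not available now */
    for (int j = 0; j < R; j++) {               /* pretend to allocate */
        available[j] -= request[j];
        alloc[p][j]  += request[j];
    }
    if (state_is_safe())
        return true;                            /* grant the request */
    for (int j = 0; j < R; j++) {               /* unsafe: roll back */
        available[j] += request[j];
        alloc[p][j]  -= request[j];
    }
    return false;                               /* process must wait */
}

int main(void) {
    /* Small made-up state just to exercise the functions. */
    int avail0[R]    = { 2, 2 };
    int maxes[P][R]  = { {4, 2}, {3, 3}, {2, 2} };
    int allocs[P][R] = { {1, 1}, {2, 1}, {1, 0} };
    memcpy(available, avail0, sizeof available);
    memcpy(max_claim, maxes, sizeof max_claim);
    memcpy(alloc, allocs, sizeof alloc);

    int r1[R] = { 1, 1 };   /* keeps the state safe, so it is granted            */
    int r2[R] = { 1, 1 };   /* would leave an unsafe state, so denied and rolled back */
    printf("P1 requests (1,1): %s\n", request_resources(1, r1) ? "granted" : "denied");
    printf("P0 requests (1,1): %s\n", request_resources(0, r2) ? "granted" : "denied");
    return 0;
}
```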

9. Ways to Handle Deadlocks
Broadly, there are three ways to handle deadlocks: ensure that the system never enters a deadlocked state (deadlock prevention or deadlock avoidance, as in points 7 and 8), allow the system to enter a deadlocked state and then detect it and recover, or ignore the problem altogether and pretend that deadlocks never occur, which is the approach taken by many operating systems.

10. Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously: mutual exclusion (only one process at a time can use a resource), hold and wait (a process holding at least one resource is waiting for additional resources held by other processes), no preemption (a resource can be released only voluntarily by the process holding it), and circular wait (a set of waiting processes exists in which each process waits for a resource held by the next, and the last waits for the first).