Shared Resource
A shared resource is any resource or variable in a system that multiple processes or threads can access concurrently, such as monitors, printers, memory, files, or data.
When two or more processes attempt to read or write a shared resource at the same time,
and the outcome depends on the timing or order of those accesses, the situation is called a race condition.
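As a rough illustration, the following Java sketch (the class name, thread count, and loop bound are arbitrary choices for this example) lets two threads increment a shared counter with no protection. Because counter++ is a read-modify-write sequence, the updates interleave and the final value is usually less than the expected 200,000:

```java
// Two threads increment a shared counter without synchronization.
// counter++ is not atomic (read, add, write), so updates can be lost.
public class RaceConditionDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // critical section with no protection
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("counter = " + counter); // rarely 200000
    }
}
```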
Critical Section
A critical section is a segment of code where two or more processes or threads access shared resources, and the result may vary depending on the order of execution.
Three main techniques are used to solve the critical section problem:
- Mutex
- Semaphore
- Monitor
These techniques all satisfy the conditions of:
- Mutual exclusion
- Bounded waiting
- Progress
All of them are fundamentally based on the concept of a lock.
Mutex
A mutex (mutual exclusion) is an object that a process or thread uses to lock a shared resource with lock() and release it with unlock().
While it is locked, no other process or thread can enter the locked region until it is unlocked.
A mutex has only two states: locked and unlocked.
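As a sketch of the same idea in Java, java.util.concurrent.locks.ReentrantLock offers exactly this lock()/unlock() pair; wrapping the counter from the earlier race-condition example with it makes the result deterministic (class name and loop bounds are again just example choices):

```java
import java.util.concurrent.locks.ReentrantLock;

public class MutexDemo {
    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock(); // the mutex

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();       // acquire: only one thread may proceed
                try {
                    counter++;     // critical section
                } finally {
                    lock.unlock(); // release so other threads can enter
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("counter = " + counter); // always 200000
    }
}
```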
Semaphore
A semaphore is a generalized form of mutex.
It consists of a simple integer value and two functions — wait() (also called the P operation) and signal() (also called the V operation) — to control access to the shared resource.
- wait() (P): decrements the semaphore value; if no resource is available, the caller blocks until it is its turn.
- signal() (V): increments the semaphore value and passes control to the next waiting process, if any.
Before a process or thread accesses a shared resource, it performs wait() on the semaphore;
when it releases the resource, it performs signal().
A semaphore has no condition variables, and its value is updated atomically: while one process is modifying it, no other process can modify it at the same time.
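A minimal Java sketch of this behaviour can use java.util.concurrent.Semaphore, whose acquire() and release() methods play the roles of wait() and signal(); the permit count of 3 and the ten worker threads are arbitrary choices for the example:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // A counting semaphore initialized to 3: at most 3 threads
    // may hold the resource at the same time.
    static final Semaphore slots = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    slots.acquire();   // wait() / P operation
                    System.out.println("thread " + id + " using the resource");
                    Thread.sleep(100); // simulate work on the shared resource
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    slots.release();   // signal() / V operation
                }
            }).start();
        }
    }
}
```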
Monitor
A monitor provides a higher-level abstraction to allow processes or threads to safely access shared resources.
It hides the shared resource and provides an interface to access it.
The monitor uses an internal queue to process access requests sequentially.
A monitor is easier to use than a semaphore: mutual exclusion is automatic inside a monitor,
whereas with a semaphore it must be implemented explicitly.
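In Java, every object can act as a monitor: declaring methods synchronized makes mutual exclusion automatic, and wait()/notifyAll() provide the queue-like condition waiting described above. The bounded counter below is a small sketch along those lines (the class name and the bound of 10 are invented for the example):

```java
// A Java object acts as a monitor: synchronized methods give automatic
// mutual exclusion, and wait()/notifyAll() provide condition waiting.
public class BoundedCounter {
    private int value = 0;
    private final int max = 10; // example bound

    public synchronized void increment() throws InterruptedException {
        while (value == max) {
            wait();      // wait inside the monitor until there is room
        }
        value++;
        notifyAll();     // wake threads waiting on the condition
    }

    public synchronized void decrement() throws InterruptedException {
        while (value == 0) {
            wait();      // wait until there is something to consume
        }
        value--;
        notifyAll();
    }

    public static void main(String[] args) {
        BoundedCounter c = new BoundedCounter();
        // One producer and one consumer; the monitor coordinates them.
        new Thread(() -> {
            try { for (int i = 0; i < 20; i++) c.increment(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        new Thread(() -> {
            try { for (int i = 0; i < 20; i++) c.decrement(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }
}
```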
Deadlock
A deadlock occurs when two or more processes are waiting indefinitely for resources held by each other.
It wastes system resources and severely degrades overall performance.
Four Necessary Conditions for Deadlock
Deadlock can occur only if all of the following conditions are met:
- Mutual Exclusion - At least one resource cannot be shared and can only be used by one process at a time.
- Hold and Wait - A process is holding at least one resource while waiting to acquire additional resources.
- No Preemption - Resources already allocated to a process cannot be forcibly taken away by another process.
- Circular Wait - Two or more processes form a circular chain in which each process waits for a resource held by the next (a minimal sketch follows this list).
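As a minimal Java sketch of how these conditions combine (the lock names and delays are invented for the example), two threads take the same two locks in opposite order, so each ends up holding one lock while waiting for the other, and neither ever finishes:

```java
// Two threads acquire the same two locks in opposite order.
// Each holds one lock while waiting for the other (hold and wait),
// forming a circular wait, so the program hangs forever.
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) { System.out.println("thread 1 done"); }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) { System.out.println("thread 2 done"); }
            }
        }).start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```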
Methods to Handle Deadlock
- Prevention
- Prevent at least one of the four conditions from ever holding.
- Typical techniques include requesting all resources up front (eliminating hold and wait) and acquiring resources in a fixed global order (eliminating circular wait); see the sketch after this list.
- Avoidance
- Continuously monitor the system state and only allocate resources if it will not lead to a deadlock.
- The banker’s algorithm is a classic example.
- Detection and Recovery
- Allow deadlocks to happen but detect them periodically by examining resource allocation.
- Recover by terminating processes or forcibly reclaiming resources.
- Ignoring
- If deadlocks occur rarely and have minimal impact, the system may simply ignore the problem.
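As a sketch of the prevention idea mentioned above, imposing a fixed acquisition order breaks circular wait: if every thread always takes lockA before lockB, the cycle from the earlier deadlock example cannot form (again, the names are illustrative):

```java
// Deadlock prevention by lock ordering: every thread acquires
// lockA before lockB, so a circular wait can never arise.
public class OrderedLocking {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    static void doWork(String name) {
        synchronized (lockA) {        // always acquired first
            synchronized (lockB) {    // always acquired second
                System.out.println(name + " finished its critical section");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("thread 1")).start();
        new Thread(() -> doWork("thread 2")).start();
    }
}
```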
Summary
Prevention and avoidance are ideal but often complex to implement.
If not feasible, detection and recovery may be used.
While deadlocks cannot be completely eliminated in all scenarios,
these strategies help minimize their impact on system performance.