1
Concurrency: Mutual Exclusion and Synchronization

2
Concurrency
  • Multiple applications: multiprogramming
  • Structured applications: an application can be a set of concurrent processes
  • Operating-system structure: the operating system itself is a set of processes or threads

3
Difficulties with Concurrency
  • Sharing of global resources
  • Managing the allocation of resources
  • Programming errors are difficult to locate

4
Concurrent Execution Problems
  • Concurrent processes (or threads) often need to
    access shared data and shared resources
  • Uncontrolled access to shared data will lead to
    an inconsistent view of the data or data races
  • Data races are also called race conditions

5
Example
  • Threads T1 and T2 are running the procedure printtid()
  • Assume both have access to the same variable a and to a function mytid() which returns the thread id of the current thread
  • Assume a thread can be interrupted at any point in the execution of printtid()
  • If T1 is first interrupted after it sets a, and T2 then executes in its entirety (both statements), the tid printed by T1 will be the tid assigned to T2
  • The output will be T2's tid printed twice instead of the two distinct values expected

    static int a;

    void printtid() {
        a = mytid();
        cout << a;
    }

6
(No Transcript)
7
Why does the above problem occur?
  • Thread/process scheduling is done at regular intervals using a timer
  • The scheduler does not care which part of the code a thread is executing
  • So what is the solution?
  • Implement a mechanism that produces the expected output even when threads execute the critical section concurrently
  • Such a mechanism is called synchronization

8
Synchronization
  • Threads in multithreaded programs cooperate by sharing resources and data structures
  • For correctness, we need to control this
    cooperation
  • Threads interleave executions arbitrarily and at
    different rates
  • Scheduling is not under program control
  • We control cooperation using synchronization
  • Synchronization enables us to restrict the
    possible interleavings of thread executions
  • We discuss this in terms of threads, but it also applies to processes

9
Shared Resources
  • We will initially focus on coordinating access to
    shared resources
  • Basic problem
  • If two concurrent threads (processes) are
    accessing a shared variable, and that variable is
    read/modified/written by those threads, then
    access to the variable must be controlled to
    avoid erroneous behavior
  • Over the next couple of lectures, we will look at
  • Mechanisms to control access to shared resources
  • Locks, mutexes, semaphores, monitors, condition variables, etc.
  • Patterns for coordinating accesses to shared
    resources
  • Bounded buffer, producer-consumer, etc.

10
Classic Example
  • Suppose we have to implement a function to handle withdrawals from a bank account:

    withdraw(account, amount) {
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        return balance;
    }

  • Now suppose that you and your significant other share a bank account with a balance of 1000.
  • Then you each go to separate ATMs and simultaneously withdraw 100 from the account.

11
Example Continued
12
Interleaved Schedules
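The slide's diagram did not survive the transcript; one plausible interleaving of two concurrent withdraw(account, 100) calls, both starting from a balance of 1000, is sketched below (the thread names are illustrative):

    Thread T1                                 Thread T2
    ---------                                 ---------
    balance = get_balance(account);  // 1000
                                              balance = get_balance(account);  // 1000
                                              balance = balance - amount;      // 900
                                              put_balance(account, balance);   // writes 900
    balance = balance - amount;      // 900, computed from the stale read
    put_balance(account, balance);   // writes 900 again

Both callers are told the new balance is 900 and the account ends at 900, so one of the two withdrawals is silently lost.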
13
What is a race condition ?
  • The problem is that two concurrent threads (or processes) accessed a shared resource (the account) without any synchronization
  • This is known as a race condition (memorize this buzzword)
  • We need mechanisms to control access to these
    shared resources in the face of concurrency
  • So we can reason about how the program will
    operate
  • Our example was updating a shared bank account
  • Also necessary for synchronizing access to any
    shared data structure
  • Buffers, queues, lists, hash tables, etc.

14
When Are Resources Shared?
  • Local variables are not shared (private)
  • Refer to data on the stack
  • Each thread has its own stack
  • Never pass/share/store a pointer to a local variable on another thread's stack
  • Global variables and static objects are shared
  • Stored in the static data segment, accessible by
    any thread
  • Dynamic objects and other heap objects are
    shared
  • Allocated from heap with malloc/free or
    new/delete
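A minimal C++ sketch of the three cases above (the names, and the use of std::thread, are illustrative assumptions, not taken from the slides):

    #include <iostream>
    #include <thread>

    int shared_global = 0;                    // static data segment: shared by every thread

    void worker(int* shared_heap) {
        int local = 0;                        // on this thread's own stack: private
        local++;                              // safe: no other thread can reach it
        shared_global++;                      // shared: unsynchronized updates like this race
        (*shared_heap)++;                     // heap object reached via a shared pointer: also races
    }

    int main() {
        int* heap_obj = new int(0);           // heap allocation, shared once both threads hold the pointer
        std::thread t1(worker, heap_obj);
        std::thread t2(worker, heap_obj);
        t1.join();
        t2.join();
        std::cout << shared_global << " " << *heap_obj << "\n";  // often "2 2", but not guaranteed
        delete heap_obj;
    }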

15
Mutual Exclusion
  • We want to use mutual exclusion to synchronize
    access to shared resources
  • Code that uses mutual exclusion to synchronize
    its execution is called a critical section
  • Only one thread at a time can execute in the
    critical section
  • All other threads are forced to wait on entry
  • When a thread leaves a critical section, another
    can enter
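As a concrete sketch, here is the earlier withdraw example wrapped in a critical section; std::mutex stands in for the generic acquire/release lock described above, and the variable names are assumptions:

    #include <mutex>

    std::mutex account_lock;        // guards the shared balance
    int balance = 1000;             // the shared resource

    int withdraw(int amount) {
        account_lock.lock();        // entry: wait here while another thread is in the critical section
        int b = balance;            // the read-modify-write sequence now executes
        b = b - amount;             //   one thread at a time
        balance = b;
        account_lock.unlock();      // exit: allow a waiting thread to enter
        return b;
    }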

16
Critical Section Requirements
  • Critical sections have the following
    requirements
  • 1) Mutual exclusion
  • If one thread is in the critical section, then
    no other is
  • 2) Progress
  • If some thread T is not in the critical section,
    then T cannot prevent some other thread S from
    entering the critical section
  • 3) Bounded waiting (no starvation)
  • If some thread T is waiting on the critical
    section, then T will eventually enter the
    critical section
  • 4) Performance
  • The overhead of entering and exiting the critical
    section is small with respect to the work being
    done within it

17

Mechanisms For Building Critical Sections
  • Locks
  • Very primitive, minimal semantics, used to build others
  • Semaphores
  • Basic, easy to get the hang of, but hard to program with
  • Monitors
  • High-level, requires language support, operations implicit
  • Messages
  • Simple model of communication and synchronization based on atomic transfer of data across a channel
  • Direct application to distributed systems
  • Messages for synchronization are straightforward (once we see how the others work)

18
(No Transcript)
19
(No Transcript)
20
(No Transcript)
21
(No Transcript)
22
(No Transcript)
23
(No Transcript)
24
Problems with Spinlocks
  • The problem with spinlocks is that they are
    wasteful
  • If a thread is spinning on a lock, then the thread holding the lock cannot make progress
  • How did the lock holder give up the CPU in the
    first place?
  • Lock holder calls yield or sleep
  • Involuntary context switch
  • Only want to use spinlocks as primitives to
    build higher-level synchronization constructs
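The spinlock slides themselves are missing from the transcript; a minimal sketch of the usual test-and-set spinlock, written here with the C++ std::atomic_flag API (an assumption, not the slides' exact code):

    #include <atomic>

    std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // the lock: set means "held"

    void acquire() {
        // test_and_set atomically sets the flag and returns its previous value;
        // keep spinning until the previous value says the lock was free
        while (lock_flag.test_and_set(std::memory_order_acquire)) {
            /* busy-wait: this spinning is exactly the waste described above */
        }
    }

    void release() {
        lock_flag.clear(std::memory_order_release); // mark the lock free again
    }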

25
Disabling Interrupts
  • Another implementation of acquire/release is to disable interrupts
  • Note that there is no state associated with the
    lock
  • Can two threads disable interrupts
    simultaneously?
    struct lock {
    };

    void acquire(lock) {
        disable interrupts;
    }

    void release(lock) {
        enable interrupts;
    }

26
On Disabling Interrupts
  • Disabling interrupts blocks notification of
    external events that could trigger a context
    switch (e.g., timer)
  • Disabling interrupts is insufficient on a
    multiprocessor
  • In a multiprocessor scenario we are back to atomic instructions
  • Like spinlocks, we only want to disable interrupts to implement higher-level synchronization primitives

27
Summarize Where We Are
28
Higher-level synchronization
  • We looked at using locks to provide mutual
    exclusion
  • Locks work, but they have some drawbacks when
    critical sections are long
  • Spinlocks inefficient
  • Disabling interrupts can miss or delay
    important events
  • Instead, we want synchronization mechanisms
    that
  • Block waiters
  • Leave interrupts enabled inside the critical
    section
  • Look at two common high-level mechanisms
  • Semaphores: binary (mutex) and counting
  • Monitors: mutexes and condition variables
  • Use them to solve common synchronization
    problems

29
Blocking in semaphore
  • Associated with each semaphore is a queue of waiting processes, a counter, and the methods wait() and signal()
  • When wait() is called by a thread
  • Decrement the counter and
  • If the semaphore is open, the thread continues
  • If the semaphore is closed, the thread blocks on the queue
  • signal() opens the semaphore
  • Increment the count and
  • If a thread is waiting on the queue, the thread
    is unblocked
  • If no threads are waiting on the queue, the
    signal is remembered for the next thread
  • In other words, signal() has history
  • This history is a counter
  • Wait and signal operations cannot be interrupted
  • Queue is used to hold processes waiting on the
    semaphore

30
Semaphore types
  • Semaphores come in two types
  • Mutex semaphore
  • Represents single access to a resource
  • Guarantees mutual exclusion to a critical section
  • Counting semaphore
  • Represents a resource with many units available
  • I.e., a resource that allows certain kinds of unsynchronized concurrent access (e.g., reading)
  • Multiple threads can pass the semaphore
  • Number of threads is determined by the semaphore count
  • A mutex semaphore has count 1, a counting semaphore has count N (see the sketch below)
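As a hedged sketch in the style of the wait/signal primitives defined on the next slide, a counting semaphore managing a pool of N identical resource units might be used like this:

    semaphore pool;        // count initialized to N, the number of available units

    wait(pool);            // acquire one unit; blocks when all N units are in use
    /* use one unit of the resource */
    signal(pool);          // release the unit; unblocks one waiting thread, if any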

31
Definition of semaphore primitives
    struct semaphore {
        int count;
        queueType queue;
    };

    void wait(semaphore s) {
        s.count--;
        if (s.count < 0) {
            place this process in s.queue;
            block this process;
        }
    }

    void signal(semaphore s) {
        s.count++;
        if (s.count <= 0) {
            remove a process P from s.queue;
            place process P on ready list;
        }
    }

32
Semaphore details
  • A semaphore can be defined as an integer variable upon which three operations are defined
  • A semaphore's counter may be initialized to a non-negative value (generally 1)
  • The wait operation decrements the semaphore value
  • If the value becomes negative, then the process executing the wait is blocked
  • The signal operation increments the semaphore value
  • If the value is not positive, then a process blocked by a wait operation is unblocked
  • Other than these three operations, there is no way to inspect or manipulate semaphores
  • The wait and signal primitives are assumed to be atomic
  • They cannot be interrupted and each routine can be treated as an indivisible step

33
Binary semaphore structure
    struct binary_semaphore {
        enum {zero, one} value;
        queueType queue;
    };

    void waitB(binary_semaphore s) {
        if (s.value == one) {
            s.value = zero;
        } else {
            place this process in s.queue;
            block this process;
        }
    }

    void signalB(binary_semaphore s) {
        if (s.queue.is_empty()) {
            s.value = one;
        } else {
            remove a process P from s.queue;
            place process P on ready list;
        }
    }

34
Using semaphores
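The slide body is missing from the transcript; a plausible sketch is the withdraw example again, this time protected by a binary (mutex) semaphore initialized to 1, written in the pseudocode style of the surrounding slides:

    semaphore s;                          // count initialized to 1

    withdraw(account, amount) {
        wait(s);                          // entry: blocks if another thread is inside
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        signal(s);                        // exit: unblocks one waiting thread, if any
        return balance;
    }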
35
Readers/Writers problem
  • An object is shared among several threads
  • Some threads only read the object, others only
    write it
  • We can allow multiple readers
  • But only one writer
  • How can we use semaphores to control access to the object to implement this protocol? (a sketch follows below)
  • Use three variables
  • int readcount: number of threads reading the object
  • Semaphore mutex: controls access to readcount
  • Semaphore w_or_r: exclusive writing or reading
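Slide 36 is missing from the transcript; a standard sketch of this protocol using the three variables above, in the same pseudocode style (it matches the notes on the next slide), is:

    int readcount = 0;          // number of threads currently reading
    semaphore mutex = 1;        // protects readcount
    semaphore w_or_r = 1;       // exclusive access for a writer, or for the group of readers

    // writer
    wait(w_or_r);               // wait until no reader or writer is active
    /* write the object */
    signal(w_or_r);

    // reader
    wait(mutex);                // protect the readcount update
    readcount++;
    if (readcount == 1)
        wait(w_or_r);           // first reader locks out writers
    signal(mutex);
    /* read the object */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(w_or_r);         // last reader lets a writer in
    signal(mutex);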

36
(No Transcript)
37
Readers/writers Notes
  • If there is a writer
  • First reader blocks on w_or_r
  • All other readers block on mutex
  • Once a writer exits, all readers can fall
    through
  • Which reader gets to go first?
  • The last reader to exit signals a waiting writer
  • If no writer, then readers can continue
  • If readers and writers are waiting on w_or_r and a writer exits, who goes first?
  • Again, it depends on the scheduler

38
(No Transcript)
39
(No Transcript)
40
(No Transcript)
41
(No Transcript)