Chapter 7 Interprocess Communication Patterns - PowerPoint PPT Presentation


Provided by: JeffB9

Chapter 7 - Interprocess Communication Patterns
  • Why study IPC?
  • Not typical programming style, but growing.
  • Typical for operating system internals, though.
  • Typical for many network-based services.
  • Will take a process-level view of IPC (Figure

  • Recall three methods used for IPC
  • Message passing
  • File I/O (via pipes)
  • Shared memory
  • Processes either compete (e.g., for CPU cycles,
    printers, etc.) or cooperate (e.g., in a pipeline)
    as they run.
  • Competing processes can lead to incorrect data.
  • Example: multiple simultaneous edits on the same
    file by different users.
  • Each user loads a copy of the file.
  • The last user to write() wins the competition.

  • The Figure 7.2 table on page 234 demonstrates how
    the ordering of the read() and write() events
    between the users can potentially cause the loss
    of one user's editing session.
  • This is called the mutual exclusion problem:
    either one of the two can edit, but not both at
    the same time (think of the Boolean exclusive OR).
  • Critical section - a sequence of actions that must
    be done one at a time.
  • Serially reusable resource - a resource that can
    only be used by one process at a time.

  • Race condition - when the order of process
    completion makes a difference in the outcome
    (as in the two-editor problem: it's a race to see
    who comes in last!).
  • The potential for a race condition exists whenever
    two or more processes are allowed to run in
    parallel; serializing their entire execution would
    prevent race conditions (but is infeasible).
  • Figure 7.4 demonstrates a race condition at the
    atomic instruction level -- two processes
    incrementing a shared memory counter.
  • Race conditions can be prevented by the use of
    critical sections.
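The counter race of Figure 7.4 can be made concrete with a small sketch. This is not the book's SOS code; it is a Python illustration in which the increment is deliberately split into its load/add/store steps and a bad interleaving is played out by hand:

```python
# Two "processes" (A and B) each increment a shared counter, but the
# increment is not atomic: it is a load, an add, and a store.  If B's
# load happens between A's load and A's store, one increment is lost.
counter = 0

def load():             # read the shared memory cell
    return counter

def store(v):           # write the shared memory cell
    global counter
    counter = v

# One bad interleaving, step by step:
a = load()              # A reads 0
b = load()              # B reads 0  (before A has stored!)
store(a + 1)            # A writes 1
store(b + 1)            # B writes 1 -- A's increment is lost

print(counter)          # 1, not the expected 2
```

With real parallel processes the interleaving is chosen by the scheduler, so the loss happens only sometimes; that intermittency is what makes race conditions hard to debug.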

  • Figure 7.5 -- create a critical section for each
    process.
  • The critical section represents a code segment
    that must be serialized (that is, only one
    process at a time is allowed to be in the
    critical section). It's better to serialize just
    one segment than the execution of the entire
    process!
  • How to enforce a critical section?
  • One solution uses the hardware ExchangeWord()
    mechanism introduced in Chapter 6, which we
    aren't going to consider now.
  • A more likely solution is to use the operating
    system as the serialization enforcer.

  • So, we will look at different ways to solve the
    mutual exclusion problem.
  • First technique: use SOS messages! First,
    though, let's create two new system calls that
    allow us to name the queues, rather than relying
    on the parent passing qIDs to children.
  • int AttachMessageQueue(char *msg_q_name) - look
    up the message queue name in the file name space;
    create it if it doesn't exist and set its attach
    count to 1 (one process has it attached). If it
    already exists, increment the attach count (one
    more process has it attached). Return the qID
    for later SendMessage()/ReceiveMessage() calls
    (or -1 on failure).

  • int DetachMessageQueue(int msg_q_id) - Decrement
    the attach count by one; if the attach count
    reaches zero, then delete the message queue file
    from the file system. Return -1 if msg_q_id is
    invalid.
  • Extend Exit() so that any attached queues are
    automatically detached when the process ends, to
    keep things straight (just like the auto-closing
    of any opened files).
  • We will surround the critical section of code
    with message calls: a ReceiveMessage() call
    before it, which will block the calling process
    until a message is actually in the queue, and a
    SendMessage() call after it to signal the other
    waiting process.
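The attach-count bookkeeping behind these two calls can be sketched as follows. This is a hypothetical Python model, not SOS code: the registry dictionary, the lowercase function names, and using the name itself as the qID are all illustrative choices.

```python
import queue

# name -> [attach_count, queue object]; a stand-in for the file name space
_queues = {}

def attach_message_queue(name):
    if name not in _queues:
        _queues[name] = [0, queue.Queue()]  # create on first attach
    _queues[name][0] += 1                   # one more process attached
    return name                             # stands in for the qID

def detach_message_queue(qid):
    if qid not in _queues:
        return -1                           # invalid qID
    _queues[qid][0] -= 1
    if _queues[qid][0] == 0:                # last process detached:
        del _queues[qid]                    # delete the queue
    return 0

q = attach_message_queue("mutex_q")
attach_message_queue("mutex_q")             # a second process attaches
detach_message_queue(q)                     # count drops back to 1
still_there = "mutex_q" in _queues          # True: one attach remains
detach_message_queue(q)                     # count reaches 0
gone = "mutex_q" not in _queues             # True: queue was deleted
print(still_there, gone)
```

The reference-counting discipline is the same one Exit() relies on: as long as every attach is eventually matched by a detach, the queue disappears exactly when the last user goes away.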

  • Two-process mutual exclusion using a message
    queue (page 239, Figure 7.6)
  • Note that the message content is not used.
  • One process has to start the ball rolling (seed
    the queue so the first ReceiveMessage() does not
    block).
  • The messages are like a ticket or token that
    permits the ticket holder to access the file.
    Notice that the sender and receiver can be the
    same process, depending on which process gets
    scheduled by the O/S.
  • This algorithm works for more than 2 processes,
    due to the serial nature of the message queue.
  • This algorithm also nicely avoids busy waiting,
    since the waiting processes are blocked.
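A minimal sketch of this token-in-a-queue mutex, with Python threads standing in for SOS processes and `queue.Queue.get()`/`put()` standing in for ReceiveMessage()/SendMessage() (these substitutions are mine, not the book's):

```python
import threading, queue

token_q = queue.Queue()
token_q.put("token")          # seed: the first receiver must not block
shared = []                   # the "file" both processes edit

def editor(name):
    for _ in range(1000):
        t = token_q.get()     # ReceiveMessage(): blocks until the token arrives
        # ---- critical section: a non-atomic read-then-write ----
        n = len(shared)
        shared.append(n)      # correct only if no one ran in between
        # ---- end critical section ----
        token_q.put(t)        # SendMessage(): hand the token on

threads = [threading.Thread(target=editor, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(shared == list(range(2000)))  # True: no lost updates
```

Without the token the read-then-append pair could interleave and lose updates; with it, each critical section runs to completion before the other thread can enter, and the blocked thread sleeps in `get()` rather than busy waiting.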

  • IPC as a signal (Figure 7.7)
  • We want a way to synchronize code between two
    processes.
  • Use ReceiveMessage() to wait for the signal and
    SendMessage() to send the signal.
  • An example: the Wait() call of a parent will
    block until a signal is received from the
    Exit() of a child.
  • Rendezvous
  • We want a way to know that two processes are
    synchronized, so they can begin a common task at
    roughly the same time.
  • Use a two-way symmetric signaling method with
    SendMessage()/ReceiveMessage() pairs (Figure 7.8)
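The symmetric signaling of the rendezvous can be sketched with two queues, one per direction. Again, Python threads and `queue.Queue` stand in for SOS processes and message queues; neither thread can pass its `get()` until the other has done its `put()`, so both emerge at roughly the same time:

```python
import threading, queue

a_to_b, b_to_a = queue.Queue(), queue.Queue()
log = []

def proc_a():
    a_to_b.put("ready")       # SendMessage() to B...
    b_to_a.get()              # ...then ReceiveMessage() from B: wait here
    log.append("A past rendezvous")

def proc_b():
    b_to_a.put("ready")       # the mirror image of proc_a
    a_to_b.get()
    log.append("B past rendezvous")

ta = threading.Thread(target=proc_a)
tb = threading.Thread(target=proc_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(len(log))               # 2: both processes got past the rendezvous
```

Note that both sides send before they receive; if both received first, each would block waiting for the other and the pair would deadlock.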

  • Producer-Consumer IPC pattern
  • The most basic pattern of IPC; it combines
    signaling with data exchange.
  • Easily visualized as a pipeline (Figure 7.10):
  • xlsfonts | grep adobe
  • Note that the producer can get ahead of the
    consumer by generating more output than the
    consumer can consume (or vice versa); the O/S
    must perform buffering.
  • The O/S does not have access to infinite disk
    and/or memory, so buffering is limited; the
    producer-consumer code should be written with
    this in mind. This also helps keep slack on the
    line if process scheduling is bursty.

  • The Producer-Consumer pattern with limited
    buffering (pages 248-249)
  • Makes use of two message queues: one for the data
    being produced/consumed and another to throttle
    the producer.
  • This is a symmetric signaling example.
  • The consumer fills the throttle queue with (in
    this case) 20 messages -- in other words, the
    producer has 20 signals already queued up and can
    send up to 20 messages before it must receive
    another throttle-up signal.
  • Note: if the buffer limit is set to one, this
    algorithm is equivalent to the rendezvous (except
    that the two processes rendezvous continuously
    while the data flows).
  • The Producer-Consumer model is versatile; it has
    many variations (Figures 7.12 and 7.13).
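The throttled producer-consumer above can be sketched like this. It is a Python analogue of the two-queue scheme, not the book's SOS code; the `None` end-of-stream marker and the item count are my additions for the demo:

```python
import threading, queue

data_q = queue.Queue()        # the data being produced/consumed
throttle_q = queue.Queue()    # signals that throttle the producer
LIMIT = 20

for _ in range(LIMIT):        # consumer seeds 20 "go-ahead" signals
    throttle_q.put("go")

def producer():
    for i in range(100):
        throttle_q.get()      # take a credit: blocks once 20 items ahead
        data_q.put(i)
    data_q.put(None)          # end-of-stream marker (demo convention)

consumed = []

def consumer():
    while True:
        item = data_q.get()
        if item is None:
            break
        consumed.append(item)
        throttle_q.put("go")  # hand the producer one more credit

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed == list(range(100)))  # True: all items arrive, in order
```

The invariant is that (credits held by producer) + (items in `data_q`) never exceeds 20, so the O/S never has to buffer more than 20 items no matter how bursty the scheduling is.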

  • Client-Server IPC pattern
  • Many resources lend themselves well to having a
    single centralized server that responds to
    requests from multiple clients. Examples abound:
    print server, file server, web server, telnet
    server, email server, etc.
  • The server will wait on a public message queue,
    waiting for requests and servicing them as they
    arrive.
  • Example: the squaring server (page 251) -- yes,
    this is the same rendezvous/producer-consumer
    code that pairs up SendMessage()/ReceiveMessage().
    The difference is in the client vs. server roles.
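A sketch of the squaring server in the same Python stand-in style (a thread for the server, `queue.Queue` for the queues). Having each client pass its own private reply queue inside the request is an illustrative choice here, not necessarily how the book's version returns results:

```python
import threading, queue

request_q = queue.Queue()             # the server's public queue

def squaring_server():
    while True:                       # servers run "forever"
        n, reply_q = request_q.get()  # block until a request arrives
        if n is None:
            break                     # shutdown message (for the demo only)
        reply_q.put(n * n)            # service the request: send the answer

def client(n):
    reply_q = queue.Queue()           # private queue for this client's reply
    request_q.put((n, reply_q))       # send the request...
    return reply_q.get()              # ...and block until the server answers

server = threading.Thread(target=squaring_server)
server.start()
result = client(7)
print(result)                         # 49
request_q.put((None, None))           # tell the server to stop
server.join()
```

The send/receive pairing is exactly the rendezvous pattern; what makes it client-server is the asymmetry: one long-running process owns the public queue, and any number of short-lived clients can post to it.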

  • Client-Server IPC pattern
  • Another difference - servers are usually always
    running, waiting for requests; clients get in
    and get out.
  • Usually servers provide multiple services and are
    written to handle them; Figure 7.14 is an example
    of a file server's algorithm.
  • Multiple Servers/Client Database Access/Update
    (individually review 7.11 and 7.12).
  • One interesting tidbit: the readers and writers
    problem.
  • We can have parallel readers, but only one writer
    accessing the database at a time.
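The readers-and-writers rule (many readers or one writer, never both) is often enforced with a reader count protected by a mutex: the first reader locks writers out, and the last reader lets them back in. A minimal single-threaded Python sketch of that bookkeeping (the classic counter scheme, not the book's code; it also ignores writer starvation):

```python
import threading

mutex = threading.Lock()        # protects reader_count
db_lock = threading.Lock()      # grants exclusive access to the database
reader_count = 0

def start_read():
    global reader_count
    with mutex:
        reader_count += 1
        if reader_count == 1:   # first reader locks writers out
            db_lock.acquire()

def end_read():
    global reader_count
    with mutex:
        reader_count -= 1
        if reader_count == 0:   # last reader lets writers back in
            db_lock.release()

def write(db, value):
    with db_lock:               # writers need exclusive access
        db.append(value)

db = []
start_read(); start_read()      # two parallel readers are fine
snapshot = list(db)             # both can read the database
end_read(); end_read()
write(db, 42)                   # a writer may proceed once readers leave
print(db)                       # [42]
```

Note that `db_lock` is acquired once for the whole group of readers, which is why any number of them can overlap, while a writer must wait for the group to drain.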

  • Nice summary of the IPC patterns:
  • Mutual Exclusion (mutex)
  • Signaling
  • Rendezvous
  • Producer-Consumer
  • Client-Server
  • Multiple Servers and Clients
  • Database Access and Update