Distributed Operating Systems CS551 - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Distributed Operating Systems CS551


1
Distributed Operating Systems CS551
  • Colorado State University
  • at Lockheed-Martin
  • Lecture 3 -- Spring 2001

2
CS551 Lecture 3
  • Topics
  • Real Time Systems (and networks)
  • Interprocess Communication (IPC)
  • message passing
  • pipes
  • sockets
  • remote procedure call (RPC)
  • Memory Management

3
Real-time systems
  • Real-time systems
  • "systems where the operating system must ensure
    that certain actions are taken within specified
    time constraints" -- Chow & Johnson, Distributed
    Operating Systems & Algorithms, Addison-Wesley
    (1997)
  • "systems that interact with the external world in
    a way that involves time. When the answer is
    produced is as important as which answer is
    produced." -- Tanenbaum, Distributed Operating
    Systems, Prentice-Hall (1995)

4
Real-time system examples
  • Examples of real-time systems
  • automobile control systems
  • stock trading systems
  • computerized air traffic control systems
  • medical intensive care units
  • robots
  • space vehicle computers (space and ground)
  • any system that requires bounded response time

5
Soft real-time systems
  • Soft real-time systems
  • "missing an occasional deadline is all right"
    -- Tanenbaum, Distributed Operating Systems,
    Prentice-Hall (1995)
  • "have deadlines but are judged to be in working
    order as long as they do not miss too many
    deadlines" -- Chow & Johnson, Distributed Operating
    Systems & Algorithms, Addison-Wesley (1997)
  • Example: a multimedia system

6
Hard real-time systems
  • Hard real-time systems
  • "only judged to be correct if every task is
    guaranteed to meet its deadline" -- Chow & Johnson,
    Distributed Operating Systems & Algorithms,
    Addison-Wesley (1997)
  • "a single missed deadline is unacceptable, as
    this might lead to loss of life or an
    environmental catastrophe" -- Tanenbaum,
    Distributed Operating Systems, Prentice-Hall
    (1995)

7
Hard/Soft real-time systems
  • "A job should be completed before its deadline to
    be of use (in soft real-time systems) or to avert
    disaster (in hard real-time systems). The major
    issue in the design of real-time operating
    systems is the scheduling of jobs in such a way
    that a maximum number of jobs satisfy their
    deadlines." -- Singhal & Shivaratri, Advanced
    Concepts in Operating Systems, McGraw-Hill (1994)

8
Firm real-time systems
  • Firm real-time systems
  • "similar to a soft real-time system, but tasks
    that have missed their deadlines are discarded"
    -- Tanenbaum, Distributed Operating Systems,
    Prentice-Hall (1995)
  • "where missing a deadline means you have to kill
    off the current activity, but the consequence is
    not fatal" -- Chow & Johnson, Distributed Operating
    Systems & Algorithms, Addison-Wesley (1997)
  • E.g., a partially filled bottle on an assembly line

9
Types of real-time systems
  • Reactive
  • interacts with environment
  • Embedded
  • controls specialized hardware
  • Event-triggered
  • unpredictable, asynchronous
  • Time-Triggered
  • predictable, synchronous, periodic

10
Myths of real-time systems (Tanenbaum)
  • Real-time systems are about writing device
    drivers in assembly code.
  • involve more than device drivers
  • Real-time computing is fast computing.
  • counterexample: computer-powered telescopes
    (deadlines matter, raw speed does not)
  • Fast computers will make real-time systems
    obsolete.
  • no -- only more will be expected of them

11
Message passing
  • A form of communication between two processes
  • A physical copy of message is sent from one
    process to the other
  • Blocking vs Non-blocking

12
Blocking message passing
  • Sending process must wait after send until an
    acknowledgement is made by the receiver
  • Receiving process must wait for expected message
    from sending process
  • A form of synchronization
  • Receipt determined
  • by polling common buffer
  • by interrupt

13
Figure 3.1 Blocking Send and Receive
Primitives, No Buffer. (Galli, p.58)
14
Figure 3.2 Blocking Send and Receive Primitives
with Buffer. (Galli, p.58)
15
Non-blocking message passing
  • Asynchronous communication
  • Sending process may continue immediately after
    sending a message -- no wait needed
  • Receiving process accepts and processes message
    -- then continues on
  • Control
  • buffer -- receiver can tell if message still
    there
  • interrupt
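The blocking/non-blocking distinction can be seen directly on a Unix pipe. A minimal sketch in Python (whose `os` module wraps the underlying Unix calls; assumes a Unix system): with the read end marked non-blocking, a receive on an empty buffer returns control immediately instead of suspending the process.

```python
import os

r, w = os.pipe()

# Non-blocking receive: mark the read end non-blocking, so a read on an
# empty pipe raises BlockingIOError instead of suspending the process.
os.set_blocking(r, False)
try:
    os.read(r, 64)
    empty_read_blocked = False
except BlockingIOError:
    empty_read_blocked = True   # nothing buffered yet; the receiver continues on

os.write(w, b"hello")           # sender continues immediately (buffer has room)
msg = os.read(r, 64)            # a message is now buffered, so the read succeeds
print(empty_read_blocked, msg)
```

With the default (blocking) mode, the same empty read would instead suspend the receiver until the sender's write arrives, giving the synchronization described on the previous slide.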

16
Process Address
  • One-to-one addressing
  • explicit
  • implicit
  • Group addressing
  • one-to-many
  • many-to-one
  • many-to-many

17
One-to-one Addressing
  • Explicit address
  • specific process must be given as parameter
  • Implicit address
  • name of service used as parameter
  • willing to communicate with any client
  • acts like send_any and receive_any

18
Figure 3.3 Implicit Addressing for Interprocess
Communication. (Galli, p.59)
19
Figure 3.4 Explicit Addressing for Interprocess
Communication. (Galli,p.60)
20
Group addressing
  • One-to-many
  • one sender, multiple receivers
  • broadcast
  • Many-to-one
  • multiple senders, but only one receiver
  • Many-to-many
  • difficult to assure order of messages received

21
Figure 3.5 One-to-Many Group Addressing.
(Galli, p.61)
22
Many-to-many ordering
  • Incidental ordering
  • least structured, fastest
  • acceptable if all related messages received in
    any order
  • Uniform ordering
  • all receivers receive messages in same order
  • Universal ordering
  • all messages must be received in exactly the same
    order as sent

23
Figure 3.6 Uniform Ordering. (Galli, p.62)
24
Pipes
  • interprocess communication APIs
  • "implemented by a finite-size, FIFO byte-stream
    buffer maintained by the kernel"
  • "serves as a unidirectional communication link so
    that one process can write data into the tail end
    of the pipe while another process may read from
    the head end of the pipe"
  • Chow & Johnson, Distributed Operating Systems &
    Algorithms, Addison-Wesley (1997)

25
Pipes, continued
  • created by a pipe system call, which returns two
    pipe descriptors (similar to a file descriptor),
    one for reading and the other for writing using
    ordinary write and read operations (CJ)
  • exists only for the time period when both reader
    and writer processes are active (CJ)
  • the classical producer and consumer IPC
    problem (CJ)
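The pipe system call and the producer/consumer pattern described above can be sketched in Python on a Unix system (`os.pipe` and `os.fork` wrap the same kernel calls):

```python
import os

# pipe() returns two descriptors: r for the head (read) end and w for the
# tail (write) end of the kernel's FIFO byte-stream buffer.
r, w = os.pipe()

pid = os.fork()
if pid == 0:                    # child: the producer
    os.close(r)                 # close the unused read end
    os.write(w, b"data through the pipe")
    os.close(w)
    os._exit(0)
else:                           # parent: the consumer
    os.close(w)                 # close the unused write end
    data = os.read(r, 1024)     # ordinary read on the pipe descriptor
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode())
```

Note that the descriptors are shared only because the child inherited them across `fork` -- which is exactly the "related processes" limitation of unnamed pipes discussed on the next slides.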

26
Figure 3.7 Interprocess Communication Using
Pipes. (Galli, p.63)
27
Unnamed pipes
  • Pipe descriptors are shared by related
    processes (e.g. parent, child)
  • Chow & Johnson, Distributed Operating Systems &
    Algorithms, Addison-Wesley (1997)
  • Such a pipe is considered unnamed
  • Cannot be used by unrelated processes
  • a limitation

28
Named pipes
  • For unrelated processes, there is a need to
    uniquely identify a pipe since pipe descriptors
    cannot be shared. One solution is to replace the
    kernel pipe data structure with a special FIFO
    file. Pipes with a path name are called named
    pipes. (CJ)
  • Since named pipes are files, the communicating
    processes need not exist concurrently (CJ)
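A named pipe is created as a special FIFO file and opened by path name, so no inherited descriptor is needed. A minimal Unix sketch in Python (the temporary path is just for the demo; the two sides here are forked only for self-containment -- any two processes that know the path could do the same):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)                       # create the special FIFO file

pid = os.fork()
if pid == 0:                          # writer process
    fd = os.open(path, os.O_WRONLY)   # blocks until some reader opens the FIFO
    os.write(fd, b"via named pipe")
    os.close(fd)
    os._exit(0)
else:                                 # reader process
    fd = os.open(path, os.O_RDONLY)   # rendezvous with the writer's open
    msg = os.read(fd, 1024)
    os.close(fd)
    os.waitpid(pid, 0)
    os.unlink(path)                   # the FIFO file persists until removed
    print(msg.decode())
```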

29
Named pipes, continued
  • Use of named pipes is limited to a single domain
    within a common file system. (CJ)
  • a limitation
  • hence the need for sockets

30
Sockets
  • a communication endpoint of a communication link
    managed by the transport services (CJ)
  • created by making a socket system call that
    returns a socket descriptor for subsequent
    network I/O operations, including file-oriented
    read/write and communication-specific
    send/receive (CJ)

31
Figure 1.4 The ISO/OSI Reference Model. (Galli,
p.9)
32
Sockets, continued
  • A socket descriptor is a logical communication
    endpoint (LCE) that is local to a process; it
    must be associated with a physical communication
    endpoint (PCE) for data transport. A physical
    communication endpoint is specified by a network
    host address and transport port pair. The
    association of an LCE with a PCE is done by the
    bind system call. (CJ)
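The LCE/PCE association can be shown in a few lines of Python, whose `socket` module wraps the Unix socket and bind calls (the loopback address and port 0 are just demo choices):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # LCE: a socket descriptor
s.bind(("127.0.0.1", 0))    # PCE: host address + port; port 0 asks the
                            # kernel to assign any free port
host, port = s.getsockname()
print(host, port > 0)       # the socket is now bound to a concrete PCE
s.close()
```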

33
Types of socket communication
  • Unix
  • local domain
  • a single system
  • Internet
  • world-wide
  • includes port and IP address

34
Types, continued
  • Connection-oriented
  • uses TCP
  • a connection-oriented reliable stream transport
    protocol (CJ)
  • Connectionless
  • uses UDP
  • a connectionless unreliable datagram transport
    protocol (CJ)

35
Connection-oriented socket communication
  • Server: socket -> bind -> listen -> accept,
    then read the request and write the reply
  • Client: socket -> connect (rendezvous with the
    server's accept), then write the request and
    read the reply
  • Adapted from Chow & Johnson
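This call sequence can be sketched in Python, whose `socket` module exposes the same primitives (socket, bind, listen, accept, connect). A minimal self-contained demo over loopback, with the server run in a thread so one script shows both sides:

```python
import socket, threading

# Server side of the rendezvous: accept completes the client's connect.
def serve(listener):
    conn, _ = listener.accept()
    request = conn.recv(1024)              # read the request
    conn.sendall(b"reply to " + request)   # write the reply
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                 # port 0: kernel picks a free port
srv.listen(1)
t = threading.Thread(target=serve, args=(srv,))
t.start()

# Client side: socket -> connect -> write -> read.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.sendall(b"request")
reply = cli.recv(1024)
t.join()
cli.close()
srv.close()
print(reply.decode())
```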

36
Connectionless socket communication
  • Each peer: socket (creates an LCE) -> bind
    (associates the LCE with a PCE)
  • Data transfer: sendto / recvfrom -- no connection
    is established
  • Adapted from Chow & Johnson

37
Socket support
  • Unix primitives
  • socket, bind, connect, listen, send, receive,
    shutdown
  • available through C libraries
  • Java classes
  • Socket
  • ServerSocket

38
Figure 3.8
  • Socket
  • Analogy
  • (Galli,p.66)

39
Figure 3.9  Remote Procedure Call Stubs.
(Galli,p.73)
40
Figure 3.10  Establishing Communication for
RPC. (Galli,p.74)
41
Table 3.1 Summary of IPC Mechanisms and Their Use
in Distributed System Components.
  • (Galli, p.77)

42
Memory Management
  • Review
  • Simple memory model
  • Shared memory model
  • Distributed shared memory
  • Memory migration

43
Virtual memory (pages, segments)
  • Virtual memory
  • Memory management unit
  • Pages - uniform sized
  • Segments - of different sizes
  • Internal fragmentation
  • External fragmentation

44
Figure 4.1  Fragmentation in Page-Based Memory
versus a Segment-Based Memory. (Galli, p.83)
45
Page replacement algorithms
  • Page fault
  • Thrashing
  • First fit
  • Best fit
  • LRU (NRU)
  • second chance
  • Worst fit
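Of the algorithms listed, second chance is the easiest to sketch: FIFO replacement, except a page whose reference bit is set is spared once. A minimal illustrative Python version (the function name and reference-string format are my own, not from the text):

```python
from collections import deque

def second_chance(frames, refs):
    """Count page faults under second-chance replacement: evict in FIFO
    order, but a page with its reference bit set is requeued once."""
    queue = deque()      # resident pages, oldest first
    ref_bit = {}         # page -> reference bit
    faults = 0
    for page in refs:
        if page in ref_bit:
            ref_bit[page] = 1          # hit: mark as recently used
            continue
        faults += 1                    # page fault
        if len(queue) == frames:       # memory full: choose a victim
            while True:
                victim = queue.popleft()
                if ref_bit[victim]:
                    ref_bit[victim] = 0     # second chance: move to tail
                    queue.append(victim)
                else:
                    del ref_bit[victim]     # reference bit clear: evict
                    break
        queue.append(page)
        ref_bit[page] = 0
    return faults

# Page 1 is re-referenced, so it survives the eviction that page 4 forces.
print(second_chance(3, [1, 2, 3, 1, 4]))
```

With too few frames and a cyclic reference string, the fault count climbs sharply -- the thrashing mentioned above.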

46
Figure 4.2 
  • Algorithms
  • for
  • Choosing
  • Segment
  • Location
  • (Galli,p.84)

47
Simple memory model
  • Parallel UMA systems
  • thrashing can occur when parallel processes each
    want their own pages in memory
  • time to service all memory requests expensive,
    unless memory large
  • virtual memory expensive
  • caching can be expensive

48
Shared memory model
  • NUMA
  • Memory bottleneck
  • Cache consistency
  • Snoopy cache
  • enforce critical regions
  • disallow caching shared memory data

49
Figure 4.3 
  • Snoopy
  • Cache
  • (Galli, p.89)

50
Distributed shared memory
  • BBN Butterfly
  • Reader-writer problems

51
Figure 4.4  Centralized Server for Multiple
Reader/Single Writer DSM. (Galli,p.92)
52
Figure 4.5  Partially Distributed Invalidation
for Multiple Reader/Single Writer DSM. (Galli,
p.92)
53
Figure 4.6  Dynamic Distributed Multiple
Reader/Single Writer DSM. (Galli, p.93)
54
Figure 4.7  Dynamic Data Allocation for Multiple
Reader/Single Writer DSM.(Galli,p.96)
55
Figure 4.8  Stop-and-Copy Memory Migration.
(Galli,p.99)