Transcript and Presenter's Notes

Title: Final Review


1
Final Review
  • Fred Kuhns

2
Computer Systems
  • Computer Systems
  • Hardware and software combine to solve specific
    problems
  • Two software categories: 1) application software,
    2) system software
  • Goals of an OS
  • User Environment: execution environment, error
    detection and handling, protection and security,
    fault tolerance and failure recovery
  • Resource Management: time and space management,
    synchronization and deadlock handling, accounting
    and status information
  • Driving Concepts
  • Abstraction: manage complexity, create an
    extended machine.
  • Virtualization: permits controlled sharing and
    isolation (enables the process model and isolation)
  • Resource management: implicit/explicit allocation
    and policy control. Performance critical;
    competing goals of maximizing utilization,
    minimizing overhead, and providing fair/predictable
    access to system resources.
  • Resource Sharing: apparent (logical) concurrency
    or true (parallel) concurrency
  • abstract machines transparently share resources
  • concurrent programs may also elect to explicitly
    share resources
  • Space-multiplexed or time-multiplexed

3
Common Abstractions: Process and Resource
An application consists of one or more processes,
a set of resources, and state.
[Figure: an application's processes and resources (CPU, memory,
display, Ri, Rj) managed by the Operating System]
4
Computations, Processes and Threads
  • Computation: sequential execution that implements an
    algorithm applied to a data set.
  • Program: encoding of an algorithm using a
    programming language (e.g., C or C++).
  • A program requests resources from the OS, for
    example via the system call interface.
  • Process: a program in execution; embodies all the
    resources and state of a running program (CPU,
    memory, files, ...)
  • The modern process model separates the execution
    context from the other resources.
  • Thread: dynamic object representing an execution
    path and computational state. The remaining
    resources are shared amongst the threads in a
    process (see the sketch below).
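A minimal sketch of the distinction (POSIX assumed; compile with
-lpthread). The counter variable and helper names are illustrative,
not from the slides: the forked child gets its own copy of the address
space, while the thread shares it with its creator.

/* Sketch: process vs. thread. Only the thread's increment of the
 * shared counter is visible to the parent; the forked child updated
 * its own private copy. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;          /* shared by threads, copied by fork */

static void *thread_body(void *arg)
{
    (void)arg;
    counter++;                   /* updates the one shared copy */
    return NULL;
}

int main(void)
{
    pid_t pid = fork();          /* new process: separate address space */
    if (pid == 0) {
        counter++;               /* modifies only the child's copy */
        exit(0);
    }
    waitpid(pid, NULL, 0);

    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);  /* new thread: shared space */
    pthread_join(tid, NULL);

    printf("counter = %d\n", counter);   /* prints 1 */
    return 0;
}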

5
Implementing Operating Systems
  • Common functions
  • Device management
  • Process, thread and resource management
  • Memory management
  • File management
  • Issues in OS design
  • Performance: should be low overhead
  • Exclusive use of resources
  • Protection and security: sharing, authorizing,
    authenticating
  • Correctness: does what it is supposed to do
  • Maintainability: a universal goal
  • Commercial concerns: OSes are used by people and
    organizations
  • Standards and open systems: conforming to
    standards
  • Common implementation mechanisms
  • Protection and processor modes
  • Trusted control program: the kernel
  • Method for processes to request services of the
    OS: system calls

6
Policy and Mechanism
  • A recurring theme in OS design is ensuring
    exclusive use of system resources (real and
    abstract)
  • Administrators and developers define policies that
    govern resource sharing and isolation
  • Mechanisms implement these policies
  • Common mechanisms
  • Hardware-enforced processor modes to control the
    execution of privileged instructions (such as for
    I/O or memory)
  • Core OS modules are trusted and have access to
    protected operations and all resources. This
    kernel of the OS is responsible for enforcing
    policies.
  • A well-defined and protected mechanism for
    requesting services of the OS. There are two
    approaches: system calls or message passing.

7
OS Techniques
  • Controlling access to hardware resources is done
    using hardware-supported privilege levels,
    typically two: user and system
  • Privileged operations, resource management,
    protection enforcement (isolation), and sharing
    are performed by a trusted control program: the
    kernel
  • Users request OS services using a well-defined
    interface that validates the user and notifies the
    OS of the request; two common methods (see the
    sketch below):
  • System calls: a trap changes mode and executes
    privileged code in the context of the calling process
  • Message passing: the interface constructs a message
    and sends it to another system (i.e. privileged)
    process
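A minimal sketch of the system-call path (Linux assumed): the same
kernel service requested through the libc wrapper and through the raw
system-call interface. Both trap into the kernel, which runs
privileged code on behalf of the calling process.

#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello via write()\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);              /* libc wrapper */

    const char raw[] = "hello via syscall()\n";
    syscall(SYS_write, STDOUT_FILENO, raw, sizeof raw - 1); /* explicit trap */
    return 0;
}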

8
An Example: an OS Kernel
  • Trusted program that runs directly on the
    hardware
  • Loaded at boot time; initializes the system,
  • creates some initial system processes,
  • remains in memory and manages the system
  • Resource manager/mediator - the process is the key
    abstraction.
  • Time-shares (time-slices) the CPU,
  • coordinates access to peripherals,
  • manages virtual memory.
  • Synchronization primitives.
  • Well-defined entry points
  • syscalls, exceptions or interrupts.
  • Performs privileged operations.
  • Kernel Entry
  • Synchronous - kernel performs work on behalf of
    the process
  • System call interface - central component of the
    UNIX API
  • Hardware exceptions - unusual action of a process
  • Asynchronous - kernel performs tasks that are
    possibly unrelated to the current process.
  • Hardware interrupts - devices assert the hardware
    interrupt mechanism to notify the kernel of events
    which require attention (I/O completion, status
    change, real-time clock, etc.)
  • System processes - scheduled by the OS (swapper and
    pagedaemon)

9
Zooming in on Computer Architecture
[Figure: CPU registers (PC, IR, MAR, MBR, I/O AR, I/O BR, general
registers Reg 0..Reg N, instruction execution unit) connected to
memory (instructions and data) and to I/O device buffers]
PC - Program Counter; IR - Instruction Register; MAR - Memory Address
Register; MBR - Memory Buffer Register; I/O AR - Input/Output Address
Register; I/O BR - Input/Output Buffer Register
10
Instruction Cycle with Interrupts
[Figure: instruction cycle - START, fetch next instruction (fetch
cycle), execute instruction (execute cycle), then, if interrupts are
enabled, check for pending interrupts (interrupt cycle) before the
next fetch; with interrupts disabled the interrupt cycle is skipped;
HALT ends the loop]
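A minimal sketch of this loop for a toy machine (not from the slides;
the opcode set, flags and handler are invented for illustration): the
processor repeatedly fetches and executes, and checks for a pending
interrupt after each instruction when interrupts are enabled.

#include <stdbool.h>
#include <stdio.h>

enum op { NOP, PRINT, HALT };

static enum op memory[] = { PRINT, NOP, PRINT, HALT };  /* toy program */
static bool interrupt_pending = false;
static bool interrupts_enabled = true;

static void interrupt_handler(void) { puts("interrupt handled"); }

int main(void)
{
    int pc = 0;                        /* program counter */
    for (;;) {
        enum op ir = memory[pc++];     /* fetch cycle */
        if (ir == HALT) break;         /* execute cycle */
        if (ir == PRINT) puts("executing PRINT");
        if (pc == 2) interrupt_pending = true;   /* pretend a device raised one */

        if (interrupts_enabled && interrupt_pending) {  /* interrupt cycle */
            interrupt_pending = false;
            interrupt_handler();
        }
    }
    return 0;
}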
11
Interrupts: devices notify the CPU of some event
[Figure: a device (e.g. a timer with command, status and rt-counter
registers) asserts an interrupt on the bus; the processor indexes the
device table and runs the dispatcher (interrupt handler)]
What about multiple interrupts?
12
OS and I/O Hardware Management
  • Two fundamental operations: 1) processing data,
    2) performing I/O
  • Large diversity in I/O devices, capabilities and
    performance
  • Goal: provide a simple/consistent interface,
    efficient use, maximum concurrency
  • Mechanism: device drivers provide a standard
    interface to I/O devices
  • Kernel I/O tasks: scheduling, buffering, caching,
    spooling, reservations
  • Common concepts: port, bus, controller
  • Accessing a device: direct I/O instructions and
    memory-mapped I/O
  • Common techniques
  • Synchronous: direct I/O with polling, aka
    programmed I/O
  • Asynchronous: interrupt-driven I/O
  • Direct memory access (DMA)
  • Memory-mapped I/O
  • Process I/O interface models: blocking,
    nonblocking and asynchronous (see the sketch below)
  • Improving performance: reduce context switches and
    data copies, use DMA, jointly schedule multiple
    resources.
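A minimal sketch of the blocking vs. nonblocking interface models
(POSIX assumed): the same read() first as a blocking call and then in
nonblocking mode, where it returns immediately with EAGAIN/EWOULDBLOCK
if no data is available.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Blocking: the process sleeps until input arrives. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("blocking read returned %zd bytes\n", n);

    /* Nonblocking: set O_NONBLOCK and retry; "no data" is not an
     * error, the call just returns -1 with errno == EAGAIN. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
    n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        puts("nonblocking read: no data ready, would have blocked");
    else
        printf("nonblocking read returned %zd bytes\n", n);
    return 0;
}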

13
Process Model
  • Be able to define a process
  • Traditional unit of control for synchronization,
    sharing, communication and deadlock control
  • Explicit versus implicit resource allocation
  • Know the state diagrams
  • Understand how process execution is interleaved
    on the system CPU(s)
  • Be able to describe a context switch
  • Understand process creation and termination
  • Cooperating processes
  • Independent processes cannot affect or be
    affected by one another
  • Cooperating processes can affect or be affected
    by one another
  • Advantages of process cooperation: information
    sharing, computation speed-up, modularity,
    convenience
  • Dangers of process cooperation
  • Data corruption, deadlocks, increased complexity
  • Requires processes to synchronize their processing

14
Example Process Representation
[Figure: kernel process table in system memory, with one entry per
process (P0 ... PN) recording hardware state and resources; each
process's state includes its hardware state (registers, program
counter, memory base register) and its logical process state]
15
5-State Model (more realistic)
[State diagram transitions: admit, dispatch, pause, wait, event,
terminate]
  • New: the process is being created.
  • Running: instructions are being executed.
  • Blocked (aka waiting): must wait for some event
    to occur.
  • Ready: runnable but waiting to be assigned to a
    processor.
  • Exit: the process has finished execution.

16
Suspending a Process
[State diagram with suspended states added: transitions admit,
dispatch, preempt, wait, event, terminate, plus suspend and activate]
17
3-Process Example
[Figure: timeline showing three processes P1, P2, P3 and the system
(dispatcher) alternating on the CPU over time]
18
Process Scheduling
  • Long-term scheduler (job scheduler)
  • selects which processes should be brought into
    the ready queue.
  • invoked infrequently (seconds, minutes)
  • controls the degree of multiprogramming
  • Medium-term scheduler
  • allocates memory for processes.
  • invoked periodically or as needed.
  • Short-term scheduler (CPU scheduler)
  • selects which process should be executed next and
    allocates CPU.
  • invoked frequently (ms)

19
IPC (Message Passing)
  • Purposes: data transfer, sharing data, event
    notification, resource sharing and synchronization,
    process control
  • Mechanisms: message passing and shared memory
  • Message passing systems have no shared variables.
  • Two operations, for fixed or variable sized
    messages:
  • send(message)
  • receive(message)
  • Communicating processes must establish a
    communication link and exchange messages via send
    and receive
  • Communication link: physical or logical
  • Implementation choices for logical communication
    links (see the sketch below):
  • Direct or indirect communication
  • Symmetric or asymmetric communication
  • Automatic or explicit buffering
  • Send-by-copy or send-by-reference
  • Fixed or variable sized messages
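A minimal sketch of message passing over a kernel-provided link
(POSIX assumed): a pipe carries the message, so there are no shared
variables; the parent "sends" with write() and the child "receives"
with a blocking read().

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child: receiver */
        char in[64];
        ssize_t n = read(fd[0], in, sizeof in - 1);  /* blocks until data */
        in[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", in);
        return 0;
    }

    const char out[] = "hello from parent";          /* parent: sender */
    write(fd[1], out, strlen(out));
    wait(NULL);
    return 0;
}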

20
Communication Methods
  • Direct: processes must name each other
    explicitly; symmetric vs. asymmetric addressing
  • Properties of the communication link: automatic
    link establishment, at most one link between a pair
    of processes, and peers' IDs must be known.
  • Disadvantage: a process must know the name or ID
    of its peer
  • Indirect: use mailboxes (aka ports) with unique
    IDs
  • Properties of the communication link: processes
    must share a mailbox; there may be more than two
    processes per mailbox or more than one mailbox
    between processes.
  • Ownership: process versus system mailbox,
    ownership and read permissions.
  • Synchronous versus asynchronous
  • blocking send, nonblocking send, blocking
    receive, nonblocking receive
  • Buffering: the messaging system must temporarily
    buffer messages. Zero capacity, bounded capacity,
    unbounded capacity

21
Multiprocessors
  • Advantages: performance and fault tolerance
  • Classifications: tightly or loosely coupled
  • Memory access schemes: UMA, NUMA and NORMA
  • Cache consistency problem

22
Typical SMP System
[Figure: four 500MHz CPUs, each with its own cache and MMU, sharing a
single system/memory bus]
  • Issues
  • Memory contention
  • Limited bus BW
  • I/O contention
  • Cache coherence

[Figure, continued: a bridge connects the system/memory bus to the I/O
subsystem (50ns), with devices such as ethernet, SCSI and video, an
interrupt controller (INT), and system functions (timer, BIOS, reset)]
  • Typical I/O Bus
  • 33MHz/32bit (132MB/s)
  • 66MHz/64bit (528MB/s)
23
Threads
  • Dynamic object representing an execution path and
    computational state.
  • The effectiveness of parallel computing depends on
    the performance of the primitives used to express
    and control parallelism
  • Useful for expressing the intrinsic concurrency
    of a program regardless of the resulting performance
  • Benefits and drawbacks for the models we studied:
    user threads, kernel threads and scheduler
    activations

24
Threading Models
  • User threads: many-to-one
  • Kernel threads: one-to-one
  • Mixed user and kernel: many-to-many

25
Solaris Threading Model (Combined)
[Figure: user-level threads in two processes multiplexed onto kernel
threads (including an interrupt kthread), which the kernel schedules
onto the hardware]
26
CPU Scheduling
  • Understand the CPU-I/O burst cycle
  • Components: short-term scheduler, dispatcher
  • Criteria: CPU utilization, throughput, turnaround
    time, waiting time, response time
  • Algorithms: FIFO, Shortest-Job-First,
    Priority-based, Round-robin, Multilevel Queue,
    Multilevel Feedback Queue (see the round-robin
    sketch below)
  • Priority-based scheduling, static versus dynamic
    priorities
  • Solution to the starvation problem?
  • Preemptive versus nonpreemptive schemes
  • Typical goals: interactive systems, batch
    systems, real-time systems
  • Timers and the clock handler
  • Priority inversion
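A minimal round-robin sketch (the burst times and quantum are assumed
for illustration, not from the slides). With all processes arriving at
time 0, cycling through the process indices is equivalent to a FIFO
ready queue, so the loop below reports the round-robin turnaround and
waiting time (turnaround minus burst) for each process.

#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void)
{
    int burst[N]  = { 10, 4, 7 };     /* assumed CPU bursts, all arrive at t=0 */
    int remain[N], finish[N] = { 0 };
    for (int i = 0; i < N; i++) remain[i] = burst[i];

    int t = 0, done = 0;
    while (done < N) {                        /* sweep the ready "queue" */
        for (int i = 0; i < N; i++) {
            if (remain[i] == 0) continue;
            int run = remain[i] < QUANTUM ? remain[i] : QUANTUM;
            t += run;                         /* process i holds the CPU */
            remain[i] -= run;
            if (remain[i] == 0) { finish[i] = t; done++; }
        }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: burst=%d turnaround=%d waiting=%d\n",
               i + 1, burst[i], finish[i], finish[i] - burst[i]);
    return 0;
}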

27
Histogram of CPU-burst Times
From Silberschatz, Galvin and Gagne, Operating System Concepts,
6th edition, Wiley, 2002.
28
Concurrency: Origins and Problems
  • Context
  • Processes need to communicate
  • Kernel: communicating with and controlling hardware
    resources (for example I/O processing)
  • Issues
  • How is information exchanged between processes
    (shared memory or messages)?
  • How to prevent interference between cooperating
    processes (mutual exclusion)?
  • How to control the sequence of process execution
    (conditional synchronization)?
  • How to manage concurrency within the OS kernel?
  • Problems
  • Execution of the kernel may lead to concurrent
    access to state
  • Deferred processing pending some event
  • Processes explicitly sharing a resource (memory)
  • Problems with shared memory
  • Concurrent programs may exhibit a dependency on
    process/thread execution sequence or processor
    speed (neither is desirable!)
  • Race condition: who is first and who goes next
    affects the program's results.
  • There are two basic issues resulting from the
    need to support concurrency:
  • Mutual exclusion: ensure that processes/threads
    do not interfere with one another, i.e. there are
    no race conditions. In other words, program
    constraints (assumptions) are not violated.
  • Conditional synchronization: processes/threads
    must be able to wait for data to arrive or for
    constraints (assertions) to be satisfied.

29
Race Conditions - Example
  • There are 4 cases for x
  • case 1: task A runs to completion first, loading
    y=0, z=0. x = 0 + 0 = 0
  • case 2: task B runs and sets y to 1, then task A
    runs, loading y=1 and z=0. x = 1 + 0 = 1
  • case 3: task A runs, loading y=0, then task B runs
    to completion, then task A runs, loading z=2.
    x = 0 + 2 = 2
  • case 4: task B runs to completion, then task A
    runs, loading y=1, z=2. x = 1 + 2 = 3

Example 1:
  int y = 0, z = 0;
  thread A: x = y + z;
  thread B: y = 1; z = 2;
Results: x = 0, 1, 2, or 3

Thread A's statement compiles to:
  load y into R0
  load z into R1
  set R0 = R0 + R1
  store R0 -> x
30
Critical Section Problem
  • The entry/exit protocol must satisfy:
  • Mutual Exclusion: at most one process may be
    active within the critical section (CS).
  • Absence of deadlock (livelock): if two or more
    processes attempt to enter the CS, one will
    eventually succeed.
  • Absence of unnecessary delay: processes neither
    within nor competing for the CS (terminated or
    executing in non-CS code) cannot block another
    process from entering the CS.
  • Eventual entry (no starvation; more a function of
    scheduling): if a process is attempting to enter
    the CS it will eventually be granted access.

Task A:
  while (True) {
    entry protocol
    critical section
    exit protocol
    non-critical section
  }
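A minimal sketch of one realization of the entry/exit protocol
(POSIX threads assumed; the counter and thread names are illustrative):
a mutex provides mutual exclusion, so no increments of the shared
counter are lost.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry protocol */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit protocol */
        /* non-critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}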
31
Necessary Conditions for Deadlock
  • 1) Mutual exclusion
  • One process holds a resource in a non-sharable
    mode.
  • Other processes requesting the resource must wait
    for it to be released.
  • 2) Hold-and-wait
  • A process must hold at least one allocated
    resource while awaiting one or more resources
    held by other processes.
  • 3) No preemption
  • Resources are not forcibly removed from the
    process holding them; the holding process must
    voluntarily release them.
  • 4) Circular wait
  • A closed chain of processes exists, such that
    each process holds at least one resource needed
    by the next process in the chain.

32
Example of a Resource Allocation Graph
[Figure: resource allocation graph with processes P1, P2, P3 and
resources R1-R4]
Initial graph: no deadlock
P2 releases R1, P3 requests R3 (no deadlock - why not?)
P3 requests R3 (deadlock)
  • If the graph contains no cycles, then there is
    no deadlock.
  • If the graph contains a cycle:
  • if there is only one instance per resource type,
    then deadlock.
  • if there are several instances per resource type,
    there is a possibility of deadlock.

33
Approaches to Deadlock Handling
  • Ensure the system never enters a deadlocked state:
    use a protocol to prevent or avoid deadlock
  • Deadlock prevention scheme - ensure that at least
    one of the necessary conditions cannot hold (see
    the lock-ordering sketch below).
  • Deadlock avoidance scheme - requires that the OS
    know in advance the resource usage requirements of
    all processes. Then for each request the OS decides
    whether it could lead to a deadlock before granting
    it.
  • Allow the system to enter a deadlocked state and
    then recover (detection and recovery).
  • The system runs a deadlock detection algorithm
    and, if deadlock is found, recovers.
  • Assume deadlocks never occur in the system
  • used by most operating systems, including UNIX.
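A minimal prevention sketch (assumed example, not from the slides):
breaking the circular-wait condition by imposing a global lock order.
Every thread acquires lock_a before lock_b, so a cycle of waiting
threads cannot form.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    /* Both threads follow the same order: lock_a, then lock_b.
     * Acquiring the locks in opposite orders in different threads is
     * what would make circular wait (and thus deadlock) possible. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    shared++;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    return 0;
}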

34
Recall the Von Neumann Architecture
[Figure: Von Neumann architecture - the Central Processing Unit (CPU)
connected to primary memory]
35
Memory Management
  • Understand the principle of locality (temporal
    and spatial locality)
  • Understand the impact of stride-k reference patterns
  • Understand caches
  • Memory manager functions: allocate memory to
    processes, map process address spaces to allocated
    memory, minimize access times while limiting
    memory requirements
  • Understand program creation steps and the typical
    process address map: compile, link and load
  • Partitioning schemes and fragmentation
  • Variable sizes and placement algorithms:
    best-fit, worst-fit, next-fit, first-fit.
  • Placement and relocation
  • Paging and segmentation

36
Cache/Primary Memory Structure
[Figure: cache organized as S sets (Set 0 ... Set S-1) of E lines
each, indexed by the set-number field of the memory address; each
line holds a B-byte block; memory addresses run from 0 to 2^n - 1]
Cache parameters:
  E lines per set
  S = 2^s sets in the cache
  B = 2^b data bytes per line
  M = 2^m = maximum memory address
  t = m - (s + b) tag bits per line
  1 valid bit per line (may also require a dirty bit)
  Cache size C = B x E x S
  m address bits (= t + s + b)
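A minimal sketch of how an address is split using the s and b values
defined above (the particular values s = 4 and b = 5 are assumed for
illustration): the low b bits are the block offset, the next s bits
select the set, and the remaining t bits are the tag.

#include <stdint.h>
#include <stdio.h>

#define S_BITS 4   /* s: log2(number of sets)       */
#define B_BITS 5   /* b: log2(bytes per cache line) */

int main(void)
{
    uint32_t addr = 0x0001ABCDu;                              /* example address */

    uint32_t offset = addr & ((1u << B_BITS) - 1);            /* low b bits   */
    uint32_t set    = (addr >> B_BITS) & ((1u << S_BITS) - 1);/* next s bits  */
    uint32_t tag    = addr >> (B_BITS + S_BITS);              /* remaining t  */

    printf("addr=0x%08x -> tag=0x%x set=%u offset=%u\n",
           addr, tag, set, offset);
    return 0;
}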
37
Typical Process Address Space
[Figure: process address space from the low address (0x00000000) to
the high address (0x7fffffff), with the stack growing dynamically]
38
Memory Management Requirements
  • Relocation
  • Why/What
  • The programmer does not know where the program
    will be placed in memory when it is executed
  • While the program is executing, it may be swapped
    to disk and returned to main memory at a
    different location
  • Consequences/Constraints
  • Memory references in the code must be translated
    to actual physical memory addresses
  • Protection - protection and relocation are
    interrelated
  • Why/What
  • Protect a process from interference by other
    processes
  • Processes require permission to access memory in
    another process's address space
  • Consequences/Constraints
  • It is impossible to check addresses in programs at
    compile time since the program could be relocated
  • Addresses must be checked at run time
  • Sharing - sharing and relocation are interrelated
  • Allow several processes to access the same data
  • Allow multiple programs to share the same program
    text

39
Variable Partition Placement Algorithm
[Figure: allocating a 16K block from a memory map of allocated and
free partitions (free blocks of 8K, 12K, 22K, 18K, 6K, 14K, 36K ...).
First-fit takes the first free block large enough, best-fit takes the
smallest adequate block, and next-fit searches forward from the last
allocated block (14K); before/after memory maps are shown]
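A minimal sketch of the first-fit policy (the free-list sizes and the
16K request are assumed for illustration): the allocator scans the
free list in order and carves the request out of the first block that
is large enough.

#include <stdio.h>

#define NBLOCKS 5

int main(void)
{
    int free_kb[NBLOCKS] = { 8, 12, 22, 18, 6 };   /* example free blocks */
    int request_kb = 16;

    for (int i = 0; i < NBLOCKS; i++) {
        if (free_kb[i] >= request_kb) {            /* first block that fits */
            free_kb[i] -= request_kb;              /* leave the remainder free */
            printf("first-fit: placed %dK in block %d, %dK left over\n",
                   request_kb, i, free_kb[i]);
            return 0;
        }
    }
    puts("first-fit: no block large enough (external fragmentation)");
    return 0;
}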
40
Hardware Support for Relocation
[Figure: dynamic relocation hardware - the base register (loaded from
the process control block) is added to each relative address to form
the absolute address, and a comparator checks it against the bounds
register; an out-of-bounds reference raises an interrupt to the
operating system. The process image (program, data, stack) resides in
main memory]
41
An Example: Paged Virtual Memory
[Figure: P1 and P2 virtual address spaces mapped by address
translation onto the physical address space; resident pages (the
working set) are in memory, non-resident pages are not]
42
Example Paging System
[Figure: app1's address space from the low address (0x00000000) to the
high address (0x7fffffff); text and initialized data are backed by the
app1 file on a UFS disk, while uninitialized data, stack and heap are
backed by swap; allocated virtual pages are mapped to DRAM by the CPU]
43
File Mapping - read/write interface
[Figure: the traditional read/write interface copies file data through
the buffer cache into process P1's address space, while the VM
approach maps the file into P1's address space with mmap() and shares
pages through the virtual memory system]
44
Paging and Segmented Architectures
  • Memory references are dynamically translated into
    physical addresses at run time
  • A process may be swapped in and out of main
    memory such that it occupies different regions
  • A process may be broken up into pieces that do
    not need to be located contiguously in main
    memory
  • All pieces of a process do not need to be loaded
    in main memory during execution
  • Example: program execution
  • Resident set: the operating system brings into
    main memory portions of the process's memory space
  • If the process attempts to access a non-resident
    page, the MMU raises an exception, causing the OS
    to block the process while it waits for the page
    to be loaded.
  • Steps for loading a process's memory pages
  • The operating system issues a disk I/O read request
  • Another process is dispatched to run while the
    disk I/O takes place
  • An interrupt is issued when the disk I/O completes,
    which causes the operating system to place the
    affected process in the Ready state

45
Know the following
  • Define thrashing
  • The working set model
  • Three policies used with paging systems: fetch,
    placement and replacement
  • Understand the MMU operation
  • Know the static and dynamic paging algorithms (see
    the clock algorithm sketch below)
  • Static: Optimal, Not Recently Used, First-In
    First-Out and Second Chance, Clock algorithm,
    Least Recently Used, Least Frequently Used
  • Dynamic: working set algorithm
  • How does page buffering help?
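A minimal sketch of the clock (second-chance) replacement algorithm
(the frame count and reference string are assumed for illustration):
the hand sweeps the frames; a frame with its reference bit set gets a
second chance (the bit is cleared), and the first frame found with
the bit clear is the victim.

#include <stdio.h>

#define NFRAMES 3

static int frame[NFRAMES]  = { -1, -1, -1 };  /* page loaded in each frame */
static int refbit[NFRAMES] = { 0 };
static int hand = 0;

static void reference(int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (frame[i] == page) { refbit[i] = 1; return; }   /* hit */

    while (refbit[hand]) {                    /* page fault: find a victim */
        refbit[hand] = 0;                     /* give a second chance */
        hand = (hand + 1) % NFRAMES;
    }
    if (frame[hand] >= 0)
        printf("fault: page %d replaces page %d in frame %d\n",
               page, frame[hand], hand);
    else
        printf("fault: page %d loaded into empty frame %d\n", page, hand);
    frame[hand] = page;
    refbit[hand] = 1;
    hand = (hand + 1) % NFRAMES;
}

int main(void)
{
    int refs[] = { 1, 2, 3, 2, 4, 1, 5 };     /* example reference string */
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        reference(refs[i]);
    return 0;
}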

46
Address Translation Overview
[Figure: the CPU issues a virtual address; the MMU translates it to a
physical address using the TLB and page tables before the cache/memory
access. A virtual address is split into a virtual page number (X bits)
and an offset in the page (Y bits); a page table entry (PTE, Z bits)
holds the frame number plus M (modified), R (referenced) and other
control bits]
47
Example: 1-Level Address Translation
[Figure: the virtual address is split into a 20-bit virtual page
number and a 12-bit offset; the page number, added to the start of the
(per-process) page table located via the current page table register,
selects the PTE, which supplies the frame number plus M, R and control
bits. The frame number and the offset within the frame together form
the physical address of the word in the DRAM frame]
48
Secondary Storage
  • See homework problems
  • Know the disk scheduling algorithms: FCFS, SSTF,
    SCAN (elevator algorithm), C-SCAN, LOOK and
    C-LOOK (see the SSTF sketch below)
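A minimal sketch of shortest-seek-time-first (SSTF); the request queue
and starting head position are assumed for illustration. At each step
the pending request closest to the current head position is serviced
next, and the total head movement is accumulated.

#include <stdio.h>
#include <stdlib.h>

#define NREQ 8

int main(void)
{
    int req[NREQ]  = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* cylinders */
    int done[NREQ] = { 0 };
    int head = 53, total = 0;                                 /* start position */

    for (int served = 0; served < NREQ; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < NREQ; i++) {                      /* nearest pending */
            if (done[i]) continue;
            int dist = abs(req[i] - head);
            if (best < 0 || dist < best_dist) { best = i; best_dist = dist; }
        }
        done[best] = 1;
        total += best_dist;
        printf("move %d -> %d (seek %d)\n", head, req[best], best_dist);
        head = req[best];
    }
    printf("total head movement: %d cylinders\n", total);
    return 0;
}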

49
Remaining material
  • Review Class Notes and Homework Problems
  • FS, Networking, Remote FS and RPC.