1
I/O Management and Disk Scheduling
  • Chapter 8
  • Advanced Operating System

2
Test
  • In your opinion, what multimedia applications or
    tasks, if any, might be candidates for inclusion
    in a near-future operating system?

3
Categories of I/O Devices
  • Human readable
  • Used to communicate with the user
  • Printers
  • Video display terminals
  • Display
  • Keyboard
  • Mouse

4
Categories of I/O Devices
  • Machine readable
  • Used to communicate with electronic equipment
  • Disk and tape drives
  • Sensors
  • Controllers
  • Actuators

5
Categories of I/O Devices
  • Communication
  • Used to communicate with remote devices
  • Digital line drivers
  • Modems

6
Differences in I/O Devices
  • Data rate
  • There may be differences of several orders of
    magnitude between the data transfer rates of
    different devices

8
Differences in I/O Devices
  • Application
  • Disk used to store files requires file management
    software
  • Disk used to store virtual memory pages needs
    special hardware and software to support it
  • Terminal used by system administrator may have a
    higher priority

9
Differences in I/O Devices
  • Complexity of control
  • Unit of transfer
  • Data may be transferred as a stream of bytes for
    a terminal or in larger blocks for a disk
  • Data representation
  • Encoding schemes
  • Error conditions
  • Devices respond to errors differently

10
Performing I/O
  • Programmed I/O
  • Process is busy-waiting for the operation to
    complete
  • Interrupt-driven I/O
  • I/O command is issued
  • Processor continues executing instructions
  • I/O module sends an interrupt when done

11
Performing I/O
  • Direct Memory Access (DMA)
  • DMA module controls exchange of data between main
    memory and the I/O device
  • Processor interrupted only after entire block has
    been transferred

12
Programmed I/O
  • No interrupts occur
  • Processor is kept busy checking status

procedure readString (var s)
  repeat
    Send I/O command {go read a word}
    repeat
      Read I/O status
    until I/O done
    Read word from I/O module
    Write word into memory
  until finished reading
I/O devices are much slower than the CPU, so it is
very inefficient for the CPU to wait for I/O
completion in a tight loop (busy waiting).
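
A minimal C sketch of this busy-wait pattern is
shown below. It models the device as a plain struct
with status and data fields instead of real
memory-mapped registers, and device_read_word is a
hypothetical stand-in for the hardware; both are
assumptions made only for illustration.

#include <stdint.h>
#include <stdio.h>

#define STATUS_BUSY 0
#define STATUS_DONE 1

typedef struct {
    volatile int status;        /* set to STATUS_DONE when a word is ready */
    volatile uint32_t data;     /* the word most recently read by the device */
} FakeDevice;

/* Stand-in for the hardware: "reads" one word and marks itself done.
   Real hardware would do this asynchronously while the CPU spins. */
static void device_read_word(FakeDevice *dev, uint32_t value) {
    dev->data = value;
    dev->status = STATUS_DONE;
}

/* Programmed I/O: for every word, issue the command, spin on the status
   field until the device reports completion, then copy the word to memory. */
static void read_words(FakeDevice *dev, uint32_t *dst, int n) {
    for (int i = 0; i < n; i++) {
        dev->status = STATUS_BUSY;
        device_read_word(dev, 100u + (uint32_t)i);  /* send the I/O command */
        while (dev->status != STATUS_DONE)          /* busy wait: no useful work */
            ;
        dst[i] = dev->data;                         /* write word into memory */
    }
}

int main(void) {
    FakeDevice dev = { STATUS_BUSY, 0 };
    uint32_t buf[3];
    read_words(&dev, buf, 3);
    printf("%u %u %u\n", (unsigned)buf[0], (unsigned)buf[1], (unsigned)buf[2]);
    return 0;
}
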
13
Interrupt-Driven I/O
  • No busy waiting. Processor can proceed to do
    other things when I/O is in progress
  • When I/O is done, the CPU is interrupted
  • Still consumes a lot of processor time because
    every word read or written passes through the
    processor

14
Interrupt-Driven I/O
procedure readString (var s)
  repeat
    Send I/O command {go read a word}
    {now the CPU will do something else;
     don't bother checking I/O status}
  until finished reading

When the I/O module has finished reading the word,
it interrupts the CPU. The CPU executes an
interrupt handler to move the word to memory.

{interrupt handler}
  Read word from I/O module
  Write word into memory
  return

Afterwards, control returns to the program, which
continues by reading the next word.
Interrupt-driven I/O is still inefficient for
transferring large amounts of data, because the CPU
has to move the data word by word between the I/O
module and memory.
15
Direct Memory Access
  • Processor grants I/O module authority to read
    from or write to memory
  • DMA module controls exchange of data between main
    memory and the I/O device
  • processor interrupted only after entire block has
    been transferred
  • The processor is only involved at the beginning
    and end of the transfer

16
DMA
The DMA module reads each word of the data and
saves it in memory.

procedure readString (var s)
  Request the DMA module to read some data
  {now the CPU will do something else;
   don't bother checking I/O status}

DMA I/O is more efficient for large data transfers
because the interaction with the I/O module, and
the data transfer between the I/O module and
memory, are performed by the DMA module.
After the DMA module has transferred all the
requested data to memory, it notifies the CPU that
the I/O has finished by sending an interrupt.
17
Direct Memory Access
  • Processor delegates I/O operation to the DMA
    module
  • DMA module transfers data directly to or from
    memory
  • When the transfer is complete, the DMA module
    sends an interrupt signal to the processor

18
DMA
19
DMA Configurations
20
DMA Configurations
21
Operating System Design Issues
  • Efficiency
  • Most I/O devices are extremely slow compared to
    main memory
  • Use of multiprogramming allows for some processes
    to be waiting on I/O while another process
    executes
  • I/O cannot keep up with processor speed
  • Swapping is used to bring in additional ready
    processes, but swapping is itself an I/O operation

22
Operating System Design Issues
  • Generality
  • Desirable to handle all I/O devices in a uniform
    manner
  • Hide most of the details of device I/O in
    lower-level routines so that processes and upper
    levels see devices in general terms such as read,
    write, open, close, lock, unlock

23
I/O Buffering
  • Reasons for buffering
  • Processes must wait for I/O to complete before
    proceeding
  • Certain pages must remain in main memory during
    I/O

24
I/O Buffering
  • Block-oriented
  • Information is stored in fixed-size blocks
  • Transfers are made a block at a time
  • Used for disks and tapes
  • Stream-oriented
  • Transfer information as a stream of bytes
  • Used for terminals, printers, communication
    ports, mouse and other pointing devices, and most
    other devices that are not secondary storage

25
Single Buffer
  • Operating system assigns a buffer in main memory
    for an I/O request
  • Block-oriented
  • Input transfers made to buffer
  • Block moved to user space when needed
  • Another block is moved into the buffer
  • Read ahead

26
Single Buffer
  • Block-oriented
  • User process can process one block of data while
    next block is read in
  • Swapping can occur since input is taking place in
    system memory, not user memory
  • Operating system keeps track of assignment of
    system buffers to user processes

27
Single Buffer
  • Stream-oriented
  • Used a line at a time
  • User input from a terminal is one line at a time
    with carriage return signaling the end of the
    line
  • Output to the terminal is one line at a time

28
I/O Buffering
29
Double Buffer
  • Use two system buffers instead of one
  • A process can transfer data to or from one buffer
    while the operating system empties or fills the
    other buffer

30
Circular Buffer
  • More than two buffers are used
  • Each individual buffer is one unit in a circular
    buffer
  • Used when I/O operation must keep up with process
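
A rough sketch of the idea in C is shown below: a
small in-memory ring of fixed-size units with one
producer call and one consumer call. The names and
sizes are arbitrary, and locking between the I/O
layer and the process is omitted for brevity; a
real implementation would need synchronization.

#include <stdio.h>

#define UNITS 4          /* number of buffers in the ring */
#define UNIT_SIZE 16     /* bytes per buffer */

static char ring[UNITS][UNIT_SIZE];
static int in = 0, out = 0, filled = 0;

/* Producer side: the I/O layer deposits one unit if a slot is free. */
static int put_unit(const char *data) {
    if (filled == UNITS) return 0;          /* ring full: producer must wait */
    snprintf(ring[in], UNIT_SIZE, "%s", data);
    in = (in + 1) % UNITS;
    filled++;
    return 1;
}

/* Consumer side: the process takes the oldest unit if one is available. */
static int get_unit(char *dest) {
    if (filled == 0) return 0;              /* ring empty: consumer must wait */
    snprintf(dest, UNIT_SIZE, "%s", ring[out]);
    out = (out + 1) % UNITS;
    filled--;
    return 1;
}

int main(void) {
    char item[UNIT_SIZE];
    put_unit("block-0");
    put_unit("block-1");
    while (get_unit(item))
        printf("consumed %s\n", item);
    return 0;
}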

31
Disk Performance Parameters
  • To read or write, the disk head must be
    positioned at the desired track and at the
    beginning of the desired sector
  • Seek time
  • Time it takes to position the head at the desired
    track
  • Rotational delay or rotational latency
  • Time it takes for the beginning of the sector
    to reach the head

32
Disk Performance Parameters
  • Access time
  • Sum of seek time and rotational delay
  • The time it takes to get in position to read or
    write
  • Data transfer occurs as the sector moves under
    the head
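
As a rough worked example (assuming a 7,200 rpm
drive with an average seek time of 5 ms): one
revolution takes 60,000 / 7,200 ≈ 8.33 ms, so the
average rotational delay is about half of that,
roughly 4.17 ms, and the average access time is
about 5 + 4.17 ≈ 9.2 ms before any data is
transferred.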

33
Disk Scheduling Policies
  • Seek time is the reason for differences in
    performance
  • For a single disk there will be a number of I/O
    requests
  • If requests are selected randomly, we will get
    poor performance

34
Disk Scheduling
  • At runtime, I/O requests for disk tracks come
    from the processes
  • OS has to choose an order to serve the requests

Process A reads tracks 2, 5
Process B reads tracks 3, 5
Process C reads tracks 8, 4
The OS has to read tracks 2, 3, 4, 5, 8.
35
Disk Scheduling
  • The order that the read/write head is moved to
    satisfy several I/O requests
  • determines the total seek time
  • affects performance
  • the OS cannot change the rotational delay or
    transfer time, but it can try to find an order
    that spends less time seeking.
  • If requests are selected randomly, we will get
    the worst possible performance...

36
Disk Scheduling Policy
  • FIFO: fair, but near-random scheduling
  • SSTF: possible starvation
  • SCAN: favors requests for tracks near the ends
  • C-SCAN
  • FSCAN: avoids arm stickiness in SSTF, SCAN and
    C-SCAN

37
First-in-first-out, FIFO
  • process requests in the order that the requests
    are made
  • fair to all processes
  • approaches random scheduling in performance if
    there are many processes

Pending read/write requests (tracks): 8, 1, 2, 7, 10
38
Shortest Service Time First, SSTF
  • select the disk I/O request that requires the
    least movement of the disk arm from its current
    position
  • always choose the minimum seek time
  • new requests may be chosen before an existing
    request

Pending read/write requests (tracks): 8, 1, 2, 7, 10
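
The small C sketch below compares the total seek
distance of FIFO and SSTF on this request set. The
starting head position (track 5) is an assumption
made for illustration; with it, FIFO moves the head
19 tracks and SSTF 14.

#include <stdio.h>
#include <stdlib.h>

#define N 5

/* Serve requests in arrival order and add up the head movement. */
static int seek_fifo(const int req[], int n, int head) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

/* Repeatedly pick the pending request closest to the current head position.
   Served requests are marked with -1 (tracks are assumed non-negative). */
static int seek_sstf(int req[], int n, int head) {
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (req[i] >= 0 &&
                (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        req[best] = -1;
    }
    return total;
}

int main(void) {
    int fifo_req[N] = {8, 1, 2, 7, 10};
    int sstf_req[N] = {8, 1, 2, 7, 10};
    int head = 5;                           /* assumed starting track */

    printf("FIFO total seek: %d tracks\n", seek_fifo(fifo_req, N, head));  /* 19 */
    printf("SSTF total seek: %d tracks\n", seek_sstf(sstf_req, N, head));  /* 14 */
    return 0;
}
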
39
SCAN
  • arm moves in one direction only, satisfying all
    outstanding requests until there are no more
    requests in that direction; the service
    direction is then reversed
  • favors requests for tracks near the ends

Pending read/write requests (tracks): 8, 1, 2, 7, 10
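
The minimal sketch below derives a SCAN order for
the same request set by sorting the tracks and
sweeping upward from an assumed starting position
(track 5), then reversing; the start position and
the initial direction are assumptions made for
illustration.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int req[] = {8, 1, 2, 7, 10};
    int n = sizeof req / sizeof req[0];
    int head = 5, start = 5, moved = 0;

    qsort(req, n, sizeof(int), cmp_int);    /* 1 2 7 8 10 */

    /* Upward sweep: serve every request at or above the starting track. */
    for (int i = 0; i < n; i++)
        if (req[i] >= start) {
            moved += req[i] - head;
            head = req[i];
            printf("serve track %d\n", head);
        }
    /* Reverse: serve the remaining requests in decreasing track order. */
    for (int i = n - 1; i >= 0; i--)
        if (req[i] < start) {
            moved += head - req[i];
            head = req[i];
            printf("serve track %d\n", head);
        }
    printf("total head movement: %d tracks\n", moved);   /* 5 up + 9 down = 14 */
    return 0;
}
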
40
C-SCAN
  • restrict scanning to one direction only
  • when the last track has been visited in one
    direction, the arm is returned to the opposite
    end of the disk and the scan begins again.

Pending read/write requests (tracks): 8, 1, 2, 7, 10
41
FSCAN
  • Arm stickiness in SSTF, SCAN, C-SCAN in case of
    repetitive requests to one track
  • FSCAN uses two queues. When a scan begins, all
    of the requests are in one of the queues, with
    the other empty. During the scan, all new
    requests are put into the other queue.
  • Service of new requests is deferred until all of
    the old requests have been processed.
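
A highly simplified sketch of the two-queue idea
follows. The queue layout, the reduction of a scan
to a single ascending pass, and the request that
arrives mid-sweep (track 3) are all assumptions
made for illustration.

#include <stdio.h>
#include <stdlib.h>

#define MAXQ 64

typedef struct { int tracks[MAXQ]; int n; } Queue;

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Serve every request in q in ascending track order (one simplified sweep). */
static int serve_sweep(Queue *q, int head) {
    qsort(q->tracks, q->n, sizeof(int), cmp_int);
    for (int i = 0; i < q->n; i++) {
        printf("serve track %d\n", q->tracks[i]);
        head = q->tracks[i];
    }
    q->n = 0;
    return head;
}

int main(void) {
    Queue active   = { {1, 2, 7, 8, 10}, 5 };  /* frozen when the sweep starts */
    Queue incoming = { {0}, 0 };
    int head = 5;                              /* assumed starting track */

    /* A request for track 3 arrives during the sweep: it is deferred,
       not inserted into the queue currently being serviced. */
    incoming.tracks[incoming.n++] = 3;

    head = serve_sweep(&active, head);         /* serves 1, 2, 7, 8, 10 */
    head = serve_sweep(&incoming, head);       /* next sweep serves 3 */
    printf("head ends at track %d\n", head);
    return 0;
}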

42
Disk Cache
  • Buffer in main memory for disk sectors
  • Contains a copy of some of the sectors on the
    disk

43
Disk Cache, Hit and Miss
  • When an I/O request is made for a particular
    sector, the OS checks whether the sector is in
    the disk cache.
  • If so, (cache hit), the request is satisfied via
    the cache.
  • If not (cache miss), the requested sector is read
    into the disk cache from the disk.

44
Least Recently Used
  • The block that has been in the cache the longest
    with no reference to it is replaced
  • The cache consists of a stack of blocks
  • Most recently referenced block is on the top of
    the stack
  • When a block is referenced or brought into the
    cache, it is placed on the top of the stack

45
Least Recently Used
  • The block on the bottom of the stack is removed
    when a new block is brought in
  • Blocks don't actually move around in main memory
  • A stack of pointers is used
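
The sketch below illustrates the "stack of
pointers" idea in C with a tiny fixed-size cache.
Blocks are identified only by a sector number and
no real disk I/O is performed; the cache size and
reference string are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>

#define CACHE_BLOCKS 3

typedef struct Node {
    int sector;                 /* which disk sector this cache block holds */
    struct Node *prev, *next;   /* stack links: head = most recently used */
} Node;

static Node *head = NULL, *tail = NULL;
static int count = 0;

static void unlink_node(Node *n) {
    if (n->prev) n->prev->next = n->next; else head = n->next;
    if (n->next) n->next->prev = n->prev; else tail = n->prev;
}

static void push_top(Node *n) {
    n->prev = NULL;
    n->next = head;
    if (head) head->prev = n; else tail = n;
    head = n;
}

/* Reference a sector: on a hit, move its node to the top of the stack; on a
   miss, evict the bottom node (least recently used) if the cache is full. */
static void reference(int sector) {
    for (Node *n = head; n; n = n->next)
        if (n->sector == sector) {          /* cache hit */
            unlink_node(n);
            push_top(n);
            return;
        }
    if (count == CACHE_BLOCKS) {            /* cache miss and cache full */
        Node *victim = tail;
        unlink_node(victim);
        printf("evict sector %d\n", victim->sector);
        free(victim);
        count--;
    }
    Node *n = malloc(sizeof *n);            /* "read the sector from disk" */
    n->sector = sector;
    push_top(n);
    count++;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4};   /* sector 2 is the LRU block when 4 arrives */
    for (int i = 0; i < 5; i++)
        reference(refs[i]);
    return 0;
}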

46
Least Frequently Used
  • The block that has experienced the fewest
    references is replaced
  • A counter is associated with each block
  • Counter is incremented each time the block is
    accessed
  • Block with smallest count is selected for
    replacement
  • Some blocks may be referenced many times in a
    short period of time, making the reference count
    misleading
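
A minimal LFU sketch in C follows: a tiny
fixed-size cache of sector numbers with a reference
counter per block and no real disk I/O. The cache
size and reference string are assumptions chosen to
show the misleading-count problem mentioned above.

#include <stdio.h>

#define CACHE_BLOCKS 3

static int sectors[CACHE_BLOCKS];
static int counts[CACHE_BLOCKS];
static int used = 0;

static void reference(int sector) {
    for (int i = 0; i < used; i++)
        if (sectors[i] == sector) {          /* hit: bump the counter */
            counts[i]++;
            return;
        }
    if (used < CACHE_BLOCKS) {               /* miss with a free slot */
        sectors[used] = sector;
        counts[used] = 1;
        used++;
        return;
    }
    int victim = 0;                          /* miss, cache full: smallest count */
    for (int i = 1; i < used; i++)
        if (counts[i] < counts[victim])
            victim = i;
    printf("evict sector %d (count %d)\n", sectors[victim], counts[victim]);
    sectors[victim] = sector;
    counts[victim] = 1;
}

int main(void) {
    /* Sector 1 gets a burst of early references, so it keeps a high count and
       survives even though it is never used again -- the misleading case. */
    int refs[] = {1, 1, 1, 2, 3, 4};
    for (int i = 0; i < 6; i++)
        reference(refs[i]);
    return 0;
}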

47
  • Hybrid and Open Source

48
Windows
  • Supports a wide range of computer hardware
  • Large library of software
  • Unstable: "blue screen of death" (except Windows
    XP)
  • Requires a system reboot when facing errors
  • Secure operating system
  • Target of viruses and spyware

49
Windows
  • Microsoft's reaction
  • Windows TCO and security
  • Windows Genuine Advantage Notifications
  • Internet Explorer has lost market share to
    Firefox and Opera
  • Internet Explorer: Windows
  • Firefox: Windows, Linux
  • Opera: Windows, Linux

50
Windows
  • Total cost of ownership (TCO)
  • Cost of computer and software
  • Maintenance
  • Training
  • Technical support
  • Hardware and software upgrades
  • Windows has a lower TCO (the "Get the Facts"
    campaign)

51
Windows
  • Security
  • Closed source is invisible to crackers
  • Windows Vista (Longhorn)
  • High security
  • Easy to manage
  • Hardware problem detection
  • Faster start-up time

52
Linux
  • Costs less
  • More secure (open source)
  • OS for networking
  • Not a major target of viruses

53
Technical Comparison
  • Kernel space The central module of an operating
    system. It is the part of the operating system
    that loads first, and it remains in main memory.
    Because it stays in memory, it is important for
    the kernel to be as small as possible while still
    providing all the essential services required by
    other parts of the operating system and
    applications. Typically, the kernel is
    responsible for memory management, process and
    task management, and disk management.
  • Windows: file system, Internet Explorer, Windows
    Media Player, GUI
  • Linux: only the file system
  • Memory management
  • Windows: swap file
  • Linux: tries to avoid swapping when allocating
    memory

54
Technical Comparison
  • Swapping
  • To replace pages or segments of data in memory.
  • Swapping is a useful technique that enables a
    computer to execute programs and manipulate data
    files larger than main memory.
  • The operating system copies as much data as
    possible into main memory, and leaves the rest on
    the disk.
  • When the operating system needs data from the
    disk, it exchanges a portion of data (called a
    page or segment) in main memory with a portion
    of data on the disk.

55
Technical Comparison
  • Stability
  • Windows: "blue screen of death" (instability)
  • Linux: more stable
  • A common source of instability is bugs in device
    drivers
  • Device drivers
  • Windows: drivers run in kernel space
  • Linux: drivers run in user space