1
CPU Scheduling
  • Chapter 6

2
Chapter 6 CPU Scheduling
  • 6.1 Basic Concepts
  • 6.2 Scheduling Criteria
  • 6.3 Scheduling Algorithms
  • 6.4 Multiple-Processor Scheduling
  • 6.5 Real-Time Scheduling
  • 6.6 Algorithm Evaluation

3
Basic Concepts
  • Maximum CPU utilization obtained with
    multiprogramming
  • CPU-I/O Burst Cycle: process execution consists of a cycle of CPU execution and I/O wait.
  • CPU burst distribution

6.1, Page 151
4
Ready Queue and I/O Device Queues
4.2.1, Figure 4.4, Page 100
5
Process Scheduling Queues
  • Job queue: set of all processes in the system.
  • Ready queue: set of all processes residing in main memory, ready and waiting to execute.
  • Device queues: set of processes waiting for an I/O device.
  • Process migration between the various queues.

6
Schedulers
  • Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
  • Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU.

6.1.2, Page 153; 4.2.2, Page 101
7
Addition of Medium Term Scheduling
6.1.2, Figure 4.6, Page 102
8
Scheduler Duration Types
  • Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.
  • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow.
  • The long-term scheduler controls the degree of multiprogramming.
  • Processes can be described as either:
  • I/O-bound process: spends more time doing I/O than computations; many short CPU bursts.
  • CPU-bound process: spends more time doing computations; few very long CPU bursts.

4.2.2, Page 102
9
Alternating Sequence of CPU and I/O Bursts
6.1.1, Figure 6.1, Page 152
10
Histogram of CPU-burst Times
6.1.1, Figure 6.2, Page 153
11
CPU Scheduler
  • Selects from among the processes in memory that
    are ready to execute, and allocates the CPU to
    one of them.
  • CPU scheduling decisions may take place when a
    process
  • 1. Switches from running to waiting state.
  • 2. Switches from running to ready state.
  • 3. Switches from waiting to ready.
  • 4. Terminates.
  • Scheduling under 1 and 4 is nonpreemptive.
  • All other scheduling is preemptive.

6.1.2, Page 153
12
Diagram of Process States
6.1.2, Page 153; Figure 4.1, Page 97
13
Representation of Process Scheduling
6.1.2, Page 153; Figure 4.5, Page 101
14
Scheduler Types
  • Nonpreemptive: a process retains control of the CPU until blocked or terminated.
  • Preemptive: the scheduler may preempt a process before it blocks or terminates, in order to allocate the CPU to another process.
  • Preemptive scheduling is necessary on user-interactive systems.

6.1.3, Page 154
15
Dispatcher
  • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  • switching context
  • switching to user mode
  • jumping to the proper location in the user program to restart that program
  • Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

6.1.4, Page 153
16
CPU Process Switching
6.1.3, Page 154; Figure 4.3, Page 98
17
Scheduling Criteria
  • CPU utilization: the percentage of time the CPU is executing a process
  • Throughput: number of processes that complete their execution per time unit
  • Turnaround time: amount of time to execute a particular process
  • Waiting time: amount of time a process has been waiting in the ready queue
  • Response time: the time it takes the system to start responding to user inputs

6.2, Pages 155-156
18
Scheduling Criteria
  • Balanced utilization: the percentage of time that all resources (memory, I/O, CPU) are utilized
  • Predictability: lack of variability, e.g. a regular 2-second response is better than an occasional 10-second one
  • Fairness: the degree to which all processes are given equal opportunity
  • Priority: give preferential treatment to processes with higher priorities
  • Proportionality: response time matches user expectations, e.g. users expect a short time to stop a job
  • Optimize the minimum, maximum, average, or variance

6.2, Page 156; Tanenbaum, Modern Operating Systems, 2nd Edition, Pages 136-138
19
Scheduling Criteria for Different Systems
  • All systems
  • Fairness: equal opportunity for all processes
  • Balance: keeping all parts of the system busy
  • Policy enforcement: seeing that stated policy is carried out
  • Batch Systems
  • Throughput: maximize jobs per hour
  • Turnaround time: minimize time between submission and termination
  • Waiting time: minimize time in the waiting queue
  • CPU utilization: keep the CPU busy all the time (e.g. utilization varies from 40% to 90%)

6.2, Page 156; Tanenbaum, Page 137
20
Scheduling Criteria for Different Systems
  • Interactive Systems
  • Response time: respond to requests quickly
  • Proportionality: meet users' expectations
  • Real-time Systems
  • Meeting deadlines: respond to requests quickly
  • Predictability: avoid quality degradation, e.g. in multimedia systems

6.2, Page 156; Tanenbaum, Page 137
21
Scheduling Algorithms
  • First-come, first-served (FCFS)
  • Shortest job first (SJF), nonpreemptive
  • Shortest remaining time first (SRTF), preemptive SJF
  • Priority scheduling
  • Round robin scheduling (RR)
  • Multilevel queue scheduling
  • Multilevel feedback queue (MFQ)

6.3
22
Scheduling Algorithm Characteristics
  • First-come, first-served: variable waiting time; convoy effect
  • Shortest job first: shortest waiting time when jobs arrive together
  • Shortest remaining time: shortest waiting time, given good burst prediction
  • Priority scheduling: enforces policy; beware of starvation
  • Round robin scheduling: best response time for time-sharing
  • Multilevel queue scheduling
  • Multilevel feedback queue

6.3
23
Scheduling Criteria vs Application
24
First-Come, First-Served (FCFS) Scheduling
  • Process   Burst Time
  • P1   24
  • P2   3
  • P3   3
  • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30.
  • Waiting time for P1 = 0; P2 = 24; P3 = 27
  • Average waiting time = (0 + 24 + 27)/3 = 17

6.3.1, Page 157
25
First-Come, First-Served (FCFS) Scheduling
  • Process   Burst Time
  • P1   24
  • P2   3
  • P3   3
  • Suppose that the processes arrive in the order P2, P3, P1. The schedule is then P2 from 0 to 3, P3 from 3 to 6, P1 from 6 to 30.
  • Waiting time for P1 = 6; P2 = 0; P3 = 3
  • Average waiting time = (6 + 0 + 3)/3 = 3 (a code sketch reproducing both orderings follows below)

6.3.1, Page 157
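As an illustration (not part of the original slides), the following minimal C sketch computes FCFS waiting times for processes that all arrive at time 0 and reproduces the two averages above (17 and 3).

    #include <stdio.h>

    /* FCFS with all arrivals at time 0: each process waits for the sum of the
     * bursts of the processes ahead of it in the queue. */
    static double fcfs_avg_wait(const int burst[], int n) {
        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* waiting time of the i-th process */
            elapsed += burst[i];     /* CPU stays busy until it finishes */
        }
        return (double)total_wait / n;
    }

    int main(void) {
        int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
        int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
        printf("order P1,P2,P3: %.2f\n", fcfs_avg_wait(order1, 3));  /* 17.00 */
        printf("order P2,P3,P1: %.2f\n", fcfs_avg_wait(order2, 3));  /*  3.00 */
        return 0;
    }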
26
Problems with FCFS Scheduling
  • Big variations in average waiting time.
  • Convoy effect: short processes follow a long process from the CPU to I/O and back.

[Figure: P1, P2 and P3 cycling between the CPU and an I/O device over time 0 to 30, illustrating the convoy effect]
6.3.1, Page 158
27
Shortest-Job-First (SJF) Scheduling
  • Associate with each process the length of its
    next CPU burst. Use these lengths to schedule
    the process with the shortest estimated time.
  • Two schemes:
  • Nonpreemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
  • Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
  • SJF is optimal: it gives the minimum average waiting time for a given set of processes.
  • Depends on good prediction of the burst length.

6.3.2, Page 158
28
Example of Nonpreemptive SJF
  • Process   Arrival Time   Burst Time
  • P1   0   7
  • P2   2   4
  • P3   4   1
  • P4   5   4
  • Nonpreemptive SJF schedule: P1 from 0 to 7, P3 from 7 to 8, P2 from 8 to 12, P4 from 12 to 16.
  • Average waiting time = (0 + 6 + 3 + 7)/4 = 4

6.3.2, Pages 158-159
29
Example of Preemptive SJF
  • Process   Arrival Time   Burst Time
  • P1   0   7
  • P2   2   4
  • P3   4   1
  • P4   5   4
  • Preemptive SJF (SRTF) schedule: P1 from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, P1 from 11 to 16.
  • Average waiting time = (9 + 1 + 0 + 2)/4 = 3 (see the simulation sketch below)

6.3.2, Pages 160-161
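Not part of the slides: a small C sketch that simulates preemptive SJF (SRTF) one time unit at a time for the four processes above and reproduces the waiting times 9, 1, 0 and 2 (average 3). Ties are broken by lower process index, which does not affect this example.

    #include <stdio.h>

    #define N 4

    int main(void) {
        int arrival[N] = {0, 2, 4, 5};
        int burst[N]   = {7, 4, 1, 4};
        int remaining[N], finish[N];
        for (int i = 0; i < N; i++) remaining[i] = burst[i];

        int done = 0;
        for (int t = 0; done < N; t++) {
            /* pick the arrived, unfinished process with the shortest remaining time */
            int best = -1;
            for (int i = 0; i < N; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (best < 0 || remaining[i] < remaining[best]))
                    best = i;
            if (best < 0) continue;          /* CPU idle during this time unit */
            if (--remaining[best] == 0) {    /* run the chosen process for one unit */
                finish[best] = t + 1;
                done++;
            }
        }
        for (int i = 0; i < N; i++)          /* waiting = turnaround - burst */
            printf("P%d waits %d\n", i + 1, finish[i] - arrival[i] - burst[i]);
        return 0;
    }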
30
Prediction of the Length of the Next CPU Burst
6.3.2, Figure 6.3, Page 160
31
Determining Length of Next CPU Burst
  • Can only estimate the length.
  • Can be done by using the lengths of previous CPU bursts, with exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the measured length of the nth CPU burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1.

6.3.2, Pages 159-160
32
Examples of Exponential Averaging
  • If α = 0
  • τ(n+1) = τ(n)
  • Prediction based on stored values; ignores recent history.
  • If α = 1
  • τ(n+1) = t(n)
  • Only the actual last CPU burst counts.
  • If we expand the formula, we get
  • τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
  • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.

6.3.2, Pages 159-160
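As a sketch (not from the slides), the exponential-averaging predictor can be coded directly from the formula τ(n+1) = α·t(n) + (1 − α)·τ(n). The burst values and the initial guess τ(0) = 10 with α = 1/2 are chosen to mirror the textbook example behind Figure 6.3 (slide 30).

    #include <stdio.h>

    /* One exponential-averaging step: blend the measured burst with the old guess. */
    static double next_guess(double alpha, double measured, double previous) {
        return alpha * measured + (1.0 - alpha) * previous;
    }

    int main(void) {
        double alpha = 0.5, tau = 10.0;              /* initial guess tau(0) */
        double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* measured CPU bursts t(n) */
        for (int n = 0; n < 7; n++) {
            printf("guess %.0f, actual burst %.0f\n", tau, bursts[n]);
            tau = next_guess(alpha, bursts[n], tau);
        }
        return 0;                                    /* guesses: 10 8 6 6 5 9 11 */
    }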
33
Problems in SJF Scheduling
  • SJF favours short jobs over long jobs.
  • In extreme cases, the constant arrival of short
    jobs can lead to starvation of a long job
  • Nonpreemptive SJF works best when jobs arrive
    simultaneously
  • Short jobs arriving later can be delayed by an
    earlier long job
  • Preemptive SJF depends on correct prediction of
    burst times

6.3.2, Page 159
34
Priority Scheduling
  • A priority number (integer) is associated with
    each process
  • The CPU is allocated to the process with the highest priority (say, smallest integer ≡ highest priority).
  • Preemptive
  • Nonpreemptive
  • SJF is a priority scheduling scheme in which priority is the predicted next CPU burst time.
  • Problem: starvation, or indefinite blocking; low-priority processes may never execute.
  • Solution: aging; as time progresses, increase the priority of the process (a sketch follows below).

6.3.3, Pages 161-163
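A minimal sketch (not from the slides) of priority scheduling with aging, assuming a simple array of process records where a smaller number means higher priority; every ready process that is passed over has its priority number decreased, so a starving process eventually wins.

    #include <stdio.h>
    #include <limits.h>

    struct proc {
        int priority;   /* smaller number = higher priority */
        int ready;      /* 1 if waiting in the ready queue  */
    };

    /* Pick the highest-priority ready process, then age the ones passed over. */
    static int pick_and_age(struct proc p[], int n) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (p[i].ready && (best < 0 || p[i].priority < p[best].priority))
                best = i;
        for (int i = 0; i < n; i++)
            if (p[i].ready && i != best && p[i].priority > INT_MIN)
                p[i].priority--;          /* aging: raise the priority of waiters */
        return best;                      /* -1 if nothing is ready */
    }

    int main(void) {
        struct proc p[3] = {{3, 1}, {5, 1}, {9, 1}};
        for (int k = 0; k < 3; k++) {
            int i = pick_and_age(p, 3);
            printf("run P%d\n", i + 1);
            p[i].ready = 0;               /* assume the chosen process runs to completion */
        }
        return 0;
    }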
35
Priority Dispatch Queues
UNIX The Magic Garden Explained
36
Priority Dispatch Queues
  • The kernel maintains a priority dispatch queue
    for each priority value in the system.
  • Dispatch queues are placed in array dispq
  • Each element in dispq is a dispq structure
  • Each dispq structure consists of a list of
    runnable proc structures, each linked via their
    p_link member.
  • The scheduling algorithms select the process with
    the highest priority to use the CPU next.

UNIX The Magic Garden Explained
37
Round Robin (RR) Scheduling
  • Each process gets a small unit of CPU time (time
    quantum), usually 10-100 milliseconds. After
    this time has elapsed, the process is preempted
    and added to the end (tail) of the ready queue.
  • If there are n processes in the ready queue and
    the time quantum is q, then each process gets 1/n
    of the CPU time in chunks of at most q time units
    at once. No process waits more than (n-1)q time
    units.
  • Performance:
  • q large ⇒ FIFO
  • q small ⇒ q must be large with respect to context switch, otherwise overhead is too high.

6.3.4, Pages 163-164
38
Example of RR with Time Quantum 4
  • Process   Burst Time
  • P1   24
  • P2   3
  • P3   3
  • With q = 4 the schedule is P1 from 0 to 4, P2 from 4 to 7, P3 from 7 to 10, then P1 from 10 to 30.
  • Average waiting time = (6 + 4 + 7)/3 = 5.66 (a simulation sketch follows below)
  • (Average waiting time under RR is often quite long.)

6.3.4, Page 164
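Not part of the slides: a C sketch that simulates round robin with quantum 4 for the three bursts above (all arriving at time 0) and reproduces the waiting times 6, 4 and 7. Cycling over the array matches a real FIFO ready queue here only because no new processes arrive during the run.

    #include <stdio.h>

    #define N 3

    int main(void) {
        int burst[N] = {24, 3, 3}, remaining[N], finish[N];
        const int quantum = 4;
        for (int i = 0; i < N; i++) remaining[i] = burst[i];

        int t = 0, left = N;
        while (left > 0) {
            for (int i = 0; i < N; i++) {            /* one pass over the ready queue */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                t += slice;                          /* run for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) { finish[i] = t; left--; }
            }
        }
        for (int i = 0; i < N; i++)                  /* waiting = finish - burst (arrival 0) */
            printf("P%d waits %d\n", i + 1, finish[i] - burst[i]);
        return 0;
    }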
39
Example of RR with Time Quantum 20
  • Process   Burst Time
  • P1   53
  • P2   17
  • P3   68
  • P4   24
  • With q = 20 the schedule is P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-162).
  • Typically, RR gives a higher average turnaround time than SJF, but better response time.

6.3.4, Pages 164-165
40
Time Quantum and Context Switch Time
Smaller time quantum means more time spent in
context switching.
6.3.4, Figure 6.4, Page 164
41
Turnaround Time Varies With The Time Quantum
6.3.4, Figure 6.5, Page 165
42
Turnaround Time Varies With The Time Quantum
  • Generally, a larger time quantum ⇒ shorter average turnaround time.
  • If the time quantum is too large, RR degenerates to FCFS scheduling.
  • Rule of thumb: 80% of CPU bursts should be shorter than the time quantum.

6.3.4, Page 165
43
Multilevel Queue
  • Ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch).
  • Each queue has its own scheduling algorithm, e.g. foreground: RR; background: FCFS.
  • Scheduling must be done between the queues.
  • Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation of background tasks.
  • Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes (e.g., 80% to foreground in RR, 20% to background in FCFS).

6.3.5, Page 166
44
Multilevel Queue Scheduling
6.3.5, Figure 6.6, Page 166
45
Multilevel Feedback Queue (MFQ)
  • A process can move between the various queues
  • e.g. a process can be promoted as it ages
  • Multilevel-feedback-queue scheduler defined by
    the following parameters
  • number of queues
  • scheduling algorithms for each queue
  • method used to determine when to upgrade a
    process
  • method used to determine when to demote a process
  • method used to determine which queue a process
    will enter when that process needs service

6.3.6, Page 167
46
Example of Multilevel Feedback Queue
6.3.6, Figure 6.7, Page 168
47
Example of Multilevel Feedback Queue
  • Three queues:
  • Q0: time quantum 8 ms
  • Q1: time quantum 16 ms
  • Q2: FCFS
  • Scheduling:
  • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
  • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

6.3.6, Page 168
48
Typical Rules for MFQ
  • Higher-priority queues have shorter quanta.
  • If a job uses up its time slice, it is demoted.
  • If a job blocks before using up its time slice, it is promoted.
  • The process selected by MFQ is the next process in the highest-level non-empty queue.
  • If a job has been waiting for some time, it is promoted (aging). A sketch of these rules follows below.

6.3.6, Page 168
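Not from the slides: a compact C sketch of the promotion/demotion rules above, assuming three queue levels with quanta of 8 ms and 16 ms plus an FCFS level (modelled as an effectively unlimited quantum); the aging threshold is illustrative. The dispatcher would then always run the first job in the highest-level non-empty queue, as stated above.

    /* Queue levels: 0 (highest) and 1 use round robin; level 2 approximates FCFS. */
    static const int quantum_ms[3] = {8, 16, 1 << 30};

    struct job {
        int level;      /* current queue: 0 (highest) .. 2 (lowest) */
        int waited_ms;  /* time spent waiting since it last ran     */
    };

    /* Apply the MFQ rules after a job has run for ran_ms out of its quantum. */
    static void requeue(struct job *j, int ran_ms) {
        if (ran_ms >= quantum_ms[j->level]) {
            if (j->level < 2) j->level++;   /* used its whole slice: demote */
        } else {
            if (j->level > 0) j->level--;   /* blocked early (I/O bound): promote */
        }
        j->waited_ms = 0;
    }

    /* Aging: promote a job that has been waiting too long in a lower queue. */
    static void age(struct job *j, int elapsed_ms) {
        j->waited_ms += elapsed_ms;
        if (j->waited_ms > 1000 && j->level > 0) {   /* 1000 ms threshold is illustrative */
            j->level--;
            j->waited_ms = 0;
        }
    }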
49
Multiple-Processor Scheduling
  • CPU scheduling more complex when multiple CPUs
    are available.
  • Homogeneous processors within a multiprocessor: any processor can run any task.
  • vs. heterogeneous: each processor runs only programs compiled specially for it.
  • Load sharing: a common ready queue for all processors allows a program to be scheduled onto any available processor.

6.4, Page 169
50
Multiple-Processor Scheduling
  • Symmetric multiprocessing: each processor is self-scheduling, accessing and updating a common data structure.
  • Ensure two processors do not choose the same process.
  • Ensure that no processes are lost.
  • Asymmetric multiprocessing: only one processor (the master server) accesses the system data structures, making all scheduling decisions and doing the I/O processing, which alleviates the need for data sharing.
  • But the one processor can be blocked by I/O.

6.4, Pages 169-170
51
Multiprocessor Systems
[Diagram: CPU 1, CPU 2 and CPU 3 sharing a common memory]
  • Symmetric Multiprocessing (SMP)
  • Each CPU runs a copy of the OS
  • Linux: only one CPU at a time runs in kernel mode (enforced by a kernel spinlock)

6.4, Page 169; 1.4, Page 13
52
Multiprocessor Systems
[Diagram: a master CPU plus CPU 1, CPU 2 and CPU 3 sharing a common memory]
  • Asymmetric Multiprocessing
  • One processor runs OS
  • Other processors run user code

6.4, Page 169; 1.4, Page 13
53
Real-Time Scheduling
  • Hard real-time systems: required to complete a critical task within a guaranteed amount of time.
  • Usually special-purpose hardware and OS.
  • Soft real-time computing: requires that critical processes receive priority over non-critical ones.
  • Requires priority scheduling, with low dispatch latency.

6.5, Page 170
54
Dispatch Latency
6.5, Figure 6.8, Page 172
55
Real-Time Scheduling
  • Ways to keep latency low
  • Put safe preemption points in long-duration
    system calls
  • Make the entire kernel preemptable (RTLinux)
  • Use priority inheritance protocol to ensure all
    tasks necessary to the high priority one are
    carried out

6.5, Page 171
56
Near Real-Time Scheduling with Windows 2000
  • Set multimedia timer for callbacks at 1 and 2
    milliseconds
  • timeSetEvent(period,0,callback_function,..)
  • Check time lapse with high resolution counter
    if (!QueryPerformanceFrequency(&m_lnTicksPerSecond))
        F6S_ERROR("High resolution timers not available.");
    else
        QueryPerformanceCounter(&m_lnLastTickCount);

    // determine the actual time step
    LARGE_INTEGER tickCount;
    QueryPerformanceCounter(&tickCount);
    timeStep = (float)(tickCount.QuadPart - m_lnLastTickCount.QuadPart) /
               m_lnTicksPerSecond.QuadPart;
    m_lnLastTickCount = tickCount;

MPBT Freedom 6S 6-DOF force feedback hand
controller, demo source code
57
Attempt at 500 Hz Win2000 Real-Time
[Plot of test results; x-axis: time (seconds)]
MPBT Freedom 6S test result
58
Attempt at 1 kHz Win2000 Real-Time
[Plot of test results; x-axis: time (seconds)]
MPBT Freedom 6S test result
59
Algorithm Evaluation
  • Decide what criteria are important
  • e.g. Waiting time, turnaround, response time
  • Deterministic modelling: find values for predetermined workloads, e.g. process P with burst time m arrives at time t.
  • But it is specific and assumes exact knowledge.
  • Queueing models: statistical analysis of the interaction between different queues.
  • Queueing network analysis, with arrival and service rates.
  • But complex, limited, and reliant on estimates.

6.6.1, Page 173; 6.6.2, Page 174
60
Evaluation of CPU Schedulers by Simulation
6.6.3, Figure 6.9, Page 176
61
Algorithm Evaluation
  • Simulation: a model of the computer system.
  • Assume event pacing, or
  • randomly generated events and burst times, or
  • use trace tapes to ensure realistic event sequencing.
  • But complex; long to code, debug, and run.
  • Implementation: code it, install it, test it.
  • Actual operation in a real system.
  • But expensive; can stress both users and developers.

6.6.3, Pages 175-176; 6.6.4, Pages 176-177
62
Solaris 2 Scheduling
  • User tasks: feedback queues; the higher the priority, the shorter the time slice.
  • System tasks: fixed-priority queues.
  • Real-time tasks: highest priority.

6.7.1, Figure 6.10, Page 179
63
Windows 2000 Priorities
[Figure: priority classes and the relative priority levels within each class]
6.7.2, Figure 6.11, Page 181
64
Rules for Windows 2000 Priorities
  • Default relative priority within each class is normal.
  • The real-time class has priorities 16-31; the other classes are variable, with priorities 1-15.
  • Priority is lowered if the time slice is exceeded.
  • Priority is raised if the thread is released from a wait operation.
  • Keyboard and mouse tasks have higher priority.
  • The foreground process (currently selected on screen) gets higher priority than a background process (not currently selected).

6.7.2, Page 182
65
Linux Priorities
  • Two algorithms
  • Time-sharing: for fair preemptive scheduling.
  • Credit-based: the task with the most credits runs next (a sketch follows below).
  • A task loses credits as it uses up its time slice.
  • When all runnable tasks' credits reach zero, credits are reassigned based on priority (credits = credits/2 + priority).
  • Interactive user jobs get more credits than background jobs.
  • Real-time: run the process with the highest priority.
  • If priorities are equal, take the job that has been waiting the longest.
  • Apply a choice of FCFS or RR scheduling (FCFS is nonpreemptive; RR is preemptive).

6.7.3, Pages 182-184
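Not part of the slides: a sketch of the credit-based time-sharing rule; the recrediting formula credits = credits/2 + priority follows the textbook's description of this scheduler, while the data structures here are illustrative, not actual kernel code.

    /* Illustrative sketch of credit-based task selection. */
    struct task {
        int credits;    /* remaining scheduling credits                */
        int priority;   /* static priority (> 0) used when recrediting */
        int runnable;   /* 1 if the task is ready to run               */
    };

    /* Pick the runnable task with the most credits.  If every runnable task is
     * out of credits, recredit all tasks and try again; blocked tasks keep half
     * their old credits, so interactive jobs accumulate an advantage. */
    static int pick_next(struct task t[], int n) {
        int any = 0;
        for (int i = 0; i < n; i++) any |= t[i].runnable;
        if (!any) return -1;                          /* nothing to run */
        for (;;) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (t[i].runnable && t[i].credits > 0 &&
                    (best < 0 || t[i].credits > t[best].credits))
                    best = i;
            if (best >= 0) return best;
            for (int i = 0; i < n; i++)               /* recrediting pass */
                t[i].credits = t[i].credits / 2 + t[i].priority;
        }
    }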