ECE7995 Computer Storage and Operating System Design
Author: Chengzhong Xu
1
ECE7995 Computer Storage and Operating System Design
Lecture 3: Processes and Threads (I)
2
Why Multiprogramming and Timesharing? (revisit)
3
Protection (revisit)
  • OS must protect/isolate applications from each
    other, and the OS from applications
  • Three techniques:
  • Preemption: granted resources can be revoked
  • Interposition: access to resources must go
    through the OS
  • Privilege: user/kernel modes

4
Mode Switching (revisit)
  • User → Kernel mode
  • For reasons external or internal to the CPU
  • External (aka hardware) interrupt
  • timer/clock chip, I/O device, network card,
    keyboard, mouse
  • asynchronous (with respect to the executing
    program)
  • Internal interrupt (aka software interrupt, trap,
    or exception)
  • synchronous
  • System call (process wants to enter the kernel to
    obtain services): intended
  • Fault/exception (division by zero, privileged
    instruction in user mode): usually unintended
  • Kernel → User: mode switch on the iret instruction

5
Mode Switch (revisit)
(Figure: mode switches over time between Process 1, Process 2, and the kernel)
6
Overview
  • Process concept
  • Process Scheduling
  • Thread concept

7
Process
  • These are all possible definitions:
  • A program in execution; process execution
    must progress in a sequential fashion
  • An instance of a program running on a computer
  • Schedulable entity (*)
  • Unit of resource ownership
  • Unit of protection
  • Execution sequence (*) + current state (*) + set
    of resources
  • (*) can be said of threads as well

8
Process in Memory
  • A process includes
  • text section
  • program counter
  • stack
  • data section
  • heap

9
Data Placement, Seen from C/C++

    int a;
    static int b;
    int c = 5;
    struct S { int t; };
    struct S s;

    void func(int d) {
        static int e;
        int f;
        struct S w;
        int *g = new int[10];
    }

  • Q. Where is each of these variables stored?

A. On the stack: d, f, w (including w.t), g. In the
(global) data section: a, b, c, s (including
s.t), e. On the heap: g[0]..g[9].
10
Process State
  • As a process executes, it changes state:
  • new: The process is being created
  • running: Instructions are being executed
  • waiting: The process is waiting for some event
    to occur
  • ready: The process is waiting to be assigned to
    a processor
  • terminated: The process has finished execution

11
Process Control Block (PCB)
  • PCB records information associated with each
    process:
  • Process identifier (pid)
  • Values of CPU registers, including the stack
    pointer
  • Program counter
  • Information needed by the scheduler
  • Process state
  • Other CPU scheduling information
  • Resources held by the process
  • Memory-management information
  • I/O status information
  • Accounting information

12
Context Switching
  • Multiprogramming: switch to another process if the
    current process is (momentarily) blocked
  • Time-sharing: switch to another process
    periodically to make sure all processes make equal
    progress
  • This switch is called a context switch
  • Understand how it works
  • how it interacts with user/kernel mode switching
  • how it maintains the illusion of each process
    having the CPU to itself (a process must not notice
    being switched in and out!)

13
CPU Switch From Process to Process
  • Save the current processs execution state to its
    PCB
  • Update currents PCB as needed
  • Choose next process N
  • Update Ns PCB as needed
  • Restore Ns PCB execution state
  • May involve reprogramming MMU

14
Process Scheduling Queues
  • Processes are linked into multiple queues
  • Job queue: set of all processes in the system
  • Ready queue: set of all processes residing in
    main memory, ready and waiting to execute
  • Device queues (wait queues): set of processes
    waiting for an I/O device or other events
  • Processes migrate among the various queues

15
Ready Queue And Various I/O Device Queues
16
Representation of Process Scheduling
17
Schedulers
  • Long-term scheduler (or job scheduler): selects
    which processes should be brought into the ready
    queue
  • The long-term scheduler controls the degree of
    multiprogramming
  • It should admit a good mix of processes, which can
    be described as either:
  • I/O-bound process: spends more time doing I/O
    than computations; many short CPU bursts
  • CPU-bound process: spends more time doing
    computations; few very long CPU bursts
  • Short-term scheduler (or CPU scheduler):
    selects which process should be executed next and
    allocates the CPU

18
CPU Scheduling
  • Selects from among the processes in memory that
    are ready to execute, and allocates the CPU to
    one of them
  • CPU scheduling decisions may take place when a
    process
  • 1. Switches from running to waiting state
  • 2. Switches from running to ready state
  • 3. Switches from waiting to ready
  • 4. Terminates
  • Scheduling under 1 and 4 is nonpreemptive
  • All other scheduling is preemptive

19
Preemptive vs Nonpreemptive Scheduling
  • Q. When is the scheduler asked to pick a thread from
    the ready queue?
  • Nonpreemptive
  • Only on a RUNNING → BLOCKED transition
  • Or RUNNING → EXIT
  • Or a voluntary yield: RUNNING → READY
  • Preemptive
  • Also on a BLOCKED → READY transition
  • Also on a timer

20
Dispatcher
  • The dispatcher module gives control of the CPU to the
    process selected by the short-term scheduler;
    this involves:
  • switching context
  • switching to user mode
  • jumping to the proper location in the user
    program to restart that program
  • Dispatch latency: the time it takes for the
    dispatcher to stop one process and start another
    running

21
Static vs Dynamic Scheduling
  • Static scheduling
  • The arrival and execution times of all jobs are
    known in advance: create a schedule, then execute
    it
  • Used in statically configured systems, such as
    embedded real-time systems
  • Dynamic/online scheduling
  • Jobs are not known in advance; the scheduler must
    make online decisions whenever a job arrives or
    leaves
  • Execution time may or may not be known
  • Behavior can be modeled by making assumptions
    about the nature of the arrival process

22
Alternating Sequence of CPU And I/O Bursts
23
CPU Scheduling Model
  • A process alternates between CPU bursts and I/O bursts

(Figure: P1, an I/O-bound process, and P2, a CPU-bound
process, each alternating CPU, I/O, and waiting phases
when scheduled on the same CPU)
24
CPU Scheduling Terminology
  • A job (sometimes called a task, or a job
    instance)
  • An activity that's schedulable: a process/thread, or
    a collection of processes that are scheduled
    together
  • Arrival time: time when the job arrives
  • Start time: time when the job actually starts
  • Finish time: time when the job is done
  • Completion time (aka turn-around time) =
    Finish time − Arrival time
  • Response time =
    Time when user sees response − Arrival time
  • Execution time (aka cost): time a job needs to
    execute

25
CPU Scheduling Terminology (cont'd)
  • Waiting time: time when the job was ready to run but
    didn't run because the CPU scheduler picked another
    job
  • Blocked time: time when the job was blocked
  • while an I/O device is in use
  • Completion time =
    Execution time + Waiting time + Blocked time

26
CPU Scheduling Goals
  • Minimize latency
  • Can mean completion time
  • Can mean response time
  • Maximize throughput
  • Throughput: number of finished jobs per time unit
  • Implies minimizing overhead (of
    context-switching and of the scheduling algorithm
    itself)
  • Requires efficient use of non-CPU resources
  • Fairness
  • Minimize variance in waiting time/completion time

27
Scheduling Constraints
  • Reaching those goals is difficult because
  • Goals are conflicting
  • Latency vs. throughput
  • Fairness vs. low overhead
  • Scheduler must operate with incomplete knowledge
  • Execution time may not be known
  • I/O device use may not be known
  • Scheduler must make decisions fast
  • Approximate the best solution from a huge solution
    space

28
Round Robin (RR)
  • Each process gets a small unit of CPU time (time
    quantum), usually 10-100 milliseconds. After
    this time has elapsed, the process is preempted
    and added to the end of the ready queue.
  • If there are n processes in the ready queue and
    the time quantum is q, then each process gets 1/n
    of the CPU time in chunks of at most q time units
    at once. No process waits more than (n-1)q time
    units.
  • No more unfairness to short jobs or starvation
    for long jobs!
  • Performance
  • q large → behaves like FIFO
  • q small → q must remain large with respect to the
    context-switch time, otherwise overhead is too high

29
Example of RR with Time Quantum 20
  • Process CPU Burst Time
  • P1 53
  • P2 17
  • P3 68
  • P4 24
  • The schedule is: P1 (0-20), P2 (20-37), P3 (37-57),
    P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121),
    P1 (121-134), P3 (134-154), P3 (154-162)

30
Round Robin: Cost of Time Slicing
  • Context switching incurs a cost
  • Direct cost (executing the scheduler and the context
    switch) + indirect cost (cache and TLB misses)
  • Long time slices → lower overhead, but approaches
    FCFS if processes finish before the time slice expires
  • Short time slices → lots of context switches,
    high overhead
  • Typical context-switch cost: < 10 µs
  • Typical time-slice length: around 100 ms
  • Linux: 100 ms default, adjusted to between 10 ms and
    300 ms
  • Note: time-slice length ≠ interval between timer
    interrupts
  • Timer frequency is usually 1000 Hz

31
Multi-Level Feedback Queue Scheduling
  • Objectives
  • preference for short jobs (tends to lead to good
    I/O utilization)
  • longer time slices for CPU-bound jobs (reduces
    context-switching overhead)
  • Challenge
  • Don't know the type of each process; the algorithm
    needs to figure it out
  • Solution: use multiple queues
  • queue determines priority
  • usually combined with static priorities (nice
    values)
  • many variations of this

32
MLFQS
Processes start in the highest queue
Processes that use up their time slice move down
Processes that starve move up
  • Higher-priority queues are served before
    lower-priority ones; within the highest-priority
    queue, round-robin
  • Only ready processes are in these queues; blocked
    processes leave the queue and reenter the same queue
    on unblock

33
Case Study: Linux Scheduler
  • Variant of MLFQS
  • 140 priorities
  • 0-99: realtime
  • 100-139: nonrealtime
  • Dynamic priority computed from static priority
    (nice) plus an interactivity bonus

(Figure: priority scale from 0 to 139. 0-99: realtime
processes, scheduled on static priority, SCHED_FIFO /
SCHED_RR. 100-139: SCHED_NORMAL processes, scheduled on
dynamic priority; nice −20 maps to 100, nice 0 to 120
(the default), nice 19 to 139.)
34
Linux Scheduler (cont'd)
  • Instead of a recomputation loop, recompute priority
    at the end of each timeslice
  • dyn_prio = nice + interactivity bonus (−5..+5)
  • Interactivity bonus depends on sleep_avg
  • measures time a process was blocked
  • 2 priority arrays (active + expired) in each
    runqueue (Linux calls ready queues runqueues)
  • Finds the highest-priority ready thread quickly
  • Switching the active and expired arrays at the end
    of an epoch is a simple pointer swap

35
Linux Timeslice Computation
  • Principle: choose a timeslice as long as possible
    while keeping good system response time
  • Various tweaks
  • interactive processes are reinserted into the
    active array even after their timeslice expires
  • Unless processes in the expired array are starving
  • processes with long timeslices are round-robined
    with others of equal priority at sub-timeslice
    granularity

Q: Does a very long quantum duration degrade the
response time of interactive applications?
36
Linux SMP Load Balancing
  • One runqueue per CPU
  • Periodically, the lengths of the runqueues on
    different CPUs are compared
  • Processes are migrated to balance load
  • Migrating requires locks on both runqueues

37
Scheduling Summary
  • The OS must schedule all resources in a system
  • CPU, disk, network, etc.
  • CPU scheduling indirectly affects the scheduling of
    other devices
  • Goals: (1) minimize latency, (2) maximize
    throughput, (3) provide fairness
  • In practice: some theory, lots of tweaking

38
Single and Multithreaded Processes
39
Benefits of Multithreading
  • Responsiveness
  • Resource Sharing
  • Economy
  • Utilization of MP Architectures

40
Thread Implementations
  • User threads
  • Thread management done by user-level threads
    library
  • Three primary thread libraries
  • POSIX Pthreads
  • Win32 threads
  • Java threads
  • Kernel threads
  • Supported by the Kernel
  • Examples
  • Windows XP/2000
  • Solaris
  • Linux
  • Tru64 UNIX
  • Mac OS X

41
An Example Pthread Program

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    void *print_message_function( void *ptr );

    int main()
    {
        pthread_t thread1, thread2;
        char *message1 = "Thread 1";
        char *message2 = "Thread 2";
        int  iret1, iret2;

        /* Create independent threads each of which will execute the function */
        iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
        iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);

        /* Wait till threads are complete before main continues. Unless we */
        /* wait we run the risk of executing an exit which will terminate  */
        /* the process and all threads before the threads have completed.  */
        pthread_join( thread1, NULL);
        pthread_join( thread2, NULL);

        printf("Thread 1 returns: %d\n", iret1);
        printf("Thread 2 returns: %d\n", iret2);
        exit(0);
    }

    void *print_message_function( void *ptr )
    {
        char *message = (char *) ptr;
        printf("%s \n", message);
        return NULL;
    }

Compile with: cc pthread1.c -lpthread
42
Multithreading Models
  • Many-to-One: many user-level threads mapped to a
    single kernel thread
  • Examples
  • Solaris Green Threads
  • GNU Portable Threads
  • Pros: efficient; no need for OS support for
    threading
  • Cons: concurrency is lost when a thread blocks in
    the kernel; cannot use MP
  • One-to-One: each user-level thread maps to a kernel
    thread
  • Examples
  • Windows NT/XP/2000
  • Linux
  • Solaris 9 and later
  • the number of kernel threads may need to be limited
    to reduce system cost

43
Multithreading Models (cont'd)
  • Many-to-Many allows many user level threads to
    be mapped to many kernel threads
  • Allows the operating system to create a
    sufficient number of kernel threads
  • Examples
  • Solaris prior to version 9
  • Windows NT/2000 with the ThreadFiber package

44
Thread Pools
  • Create a number of threads in a pool, where they
    await work
  • Advantages
  • Usually slightly faster to service a request with
    an existing thread than to create a new thread
  • Allows the number of threads in the
    application(s) to be bounded by the size of the pool

45
Linux Threads
  • Linux treats threads as lightweight processes
  • Each process (or thread) has its own task_struct
    structure (the PCB in Linux), and is identified by
    its own process ID (PID)
  • However, a Unix programmer expects threads in the
    same group (or process) to have a common PID
  • Linux uses tgid in the PCB to record the PID of the
    thread group leader, so all the threads in a
    group share the same identifier
  • Thread creation is done through the clone() system
    call
  • clone() is a variant of fork() that allows a
    child task to share certain resources with its
    parent via a parameter (CLONE_VM, CLONE_FILES )