Transcript and Presenter's Notes

Title: Threads


1
Threads
  • Chapter 4
  • Threads are a subdivision of processes
  • Since less information is associated with a
    thread than with a process, switching between
    threads is faster than switching between processes

2
Process Characteristics
  • Unit of resource ownership - process is
    allocated
  • a virtual address space to hold the process
    image
  • control of some resources (files, I/O devices...)
  • Unit of execution - process is an execution path
    through one or more programs
  • execution may be interleaved with other processes
  • the process has an execution state and a priority

3
Process Characteristics
  • These two characteristics are treated
    independently by some recent operating systems
  • The unit of execution is usually referred to as a
    thread or a lightweight process (the book speaks
    of the unit of dispatching, a similar concept)
  • The unit of resource ownership is usually
    referred to as a process or task

4
Multithreading vs. Single threading
  • Multithreading: the OS supports multiple threads
    of execution within a single process
  • Single threading: the OS does not recognize the
    concept of a thread
  • MS-DOS supports a single user process and a single
    thread
  • UNIX supports multiple user processes but only
    one thread per process
  • Solaris, Linux, Windows NT/2000, and OS/2 support
    multiple threads per process

5
Threads and Processes
6
Processes
  • Have a virtual address space which holds the
    process image
  • Protected access to processors, other processes,
    files, and I/O resources

7
Threads
  • Have execution state (running, ready, etc.)
  • Save thread context when not running
  • Have private storage for local variables and
    execution stack
  • Have shared access to the address space and
    resources (files) of their process, as in the
    sketch after this list
  • when one thread alters (non-private) data, all
    other threads of the process can see the change
  • threads communicate via shared variables
  • a file opened by one thread is available to the
    others
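
A minimal pthreads sketch of these points (an illustration assuming POSIX
threads; it is not taken from the slides). The global array results sits in
the shared address space of the process, while each thread's loop counter i
lives on that thread's private stack.

    /* Compile with: gcc shared.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    int results[N];                 /* shared: lives in the process address space  */

    void *fill(void *arg)
    {
        int start = *(int *)arg;    /* local variables live on this thread's stack */
        for (int i = start; i < start + N / 2; i++)
            results[i] = i * i;     /* the update is visible to the other thread   */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int lo = 0, hi = N / 2;
        pthread_create(&t1, NULL, fill, &lo);   /* each thread fills its own half  */
        pthread_create(&t2, NULL, fill, &hi);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        for (int i = 0; i < N; i++)
            printf("%d ", results[i]);          /* main sees both halves           */
        printf("\n");
        return 0;
    }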

8
Single Threaded and Multithreaded Process Models
The Thread Control Block contains a register image,
the thread priority, and thread state information
(compare with Fig. 3.14)
9
Benefits of Threads vs Processes
  • It takes less time to create a new thread than a
    new process
  • Less time to terminate a thread than a process
  • Less time to switch between two threads within
    the same process than to switch between processes
  • Mainly because a thread is associated with less
    information.

10
Benefits of Threads
  • Example: a file server on a LAN
  • It needs to handle several file requests over a
    short period
  • Hence it is more efficient to create (and destroy)
    a thread for each request, as sketched below
  • Such threads share files and variables
  • With Symmetric Multiprocessing, different threads
    can execute simultaneously on different processors
  • Example 2: one thread displays the menu and reads
    user input while another thread executes the
    user's commands
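
A sketch of the thread-per-request idea (the five simulated requests and the
handle_request routine are hypothetical stand-ins for a real file server, not
code from the slides). Each request is served by a short-lived thread that
shares the server's files and variables.

    /* Compile with: gcc server.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    void *handle_request(void *arg)           /* one cheap thread per request        */
    {
        int req = *(int *)arg;
        free(arg);
        printf("serving request %d\n", req);  /* a real server would read and send
                                                 the requested file here             */
        return NULL;                          /* thread is destroyed when it returns */
    }

    int main(void)
    {
        pthread_t workers[5];
        for (int i = 0; i < 5; i++) {         /* pretend five requests have arrived  */
            int *req = malloc(sizeof *req);
            *req = i;
            pthread_create(&workers[i], NULL, handle_request, req);
        }
        for (int i = 0; i < 5; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }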

11
Application benefits of threads
  • Consider an application that consists of several
    independent parts that do not need to run in
    sequence
  • Each part can be implemented as a thread
  • Whenever one thread is blocked waiting for I/O,
    execution can switch to another thread of the same
    application (instead of switching to another
    process)

12
Benefits of Threads
  • Since threads within the same process share
    memory and files, they can communicate with each
    other without invoking the kernel
  • It is therefore necessary to synchronize the
    activities of the various threads so that they do
    not obtain inconsistent views of the data
    (chap 5), as in the mutex sketch below
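
A minimal synchronization sketch (assuming POSIX mutexes; Chapter 5 covers
the concepts). Without the lock, the two threads could interleave their
read-modify-write of the shared counter and leave it in an inconsistent state.

    /* Compile with: gcc sync.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                                /* shared by all threads  */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* enter critical section */
            counter++;                              /* consistent update      */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);          /* 200000 with the lock   */
        return 0;
    }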

13
Termination of Processes and Threads
  • Suspending a process involves suspending all
    threads of the process
  • Terminating a process terminates all threads
    within the process
  • This is because all such threads share the same
    address space within the process

14
Thread States
  • As for processes, three key states: running,
    ready, blocked
  • Threads do not have a suspend state, because all
    threads within the same process share the same
    address space (suspension applies to the whole
    process)
  • A thread is created by another thread using an
    operation often called Spawn
  • Finish is the state of a thread that is completing

15
Remote Procedure Call Using one thread only
16
Remote Procedure Call Using Multiple Threads:
less waiting time (sketched below)
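
A sketch of the difference between the two figures (the blocking remote call
is simulated here with sleep(); this is an illustration, not code from the
slides). Done sequentially in one thread, the two calls take about four
seconds in total; with one thread per call, the waits overlap and the total
is roughly two seconds.

    /* Compile with: gcc rpc.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    void *do_rpc(void *arg)
    {
        const char *server = arg;
        printf("calling %s ...\n", server);
        sleep(2);                           /* stands in for the blocking RPC  */
        printf("%s replied\n", server);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, do_rpc, "server_A");  /* both remote calls    */
        pthread_create(&b, NULL, do_rpc, "server_B");  /* now wait in parallel */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }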
17
Thread management
  • Thread management can be done in one of three
    fundamental ways
  • User-level threads: threads are entirely managed
    by the application
  • Kernel-level threads: threads are entirely
    managed by the OS kernel
  • Combined approaches

18
User-Level Threads (ULT) (ex. Standard UNIX)
  • The kernel is not aware of the existence of
    threads
  • All thread management is done by the application
    using a thread library
  • Thread switching does not require kernel mode
    privileges (no mode switch)
  • Scheduling is application specific

19
Threads library
  • Contains code for
  • creating and destroying threads
  • passing messages and data between threads
  • scheduling thread execution
  • saving and restoring thread contexts (a toy
    sketch using ucontext follows)
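
A toy illustration of what such a library does underneath (assuming the POSIX
ucontext facility; a real thread library adds a scheduler, timers, and much
more). Note that swapcontext saves and restores register and stack state
entirely in user space, without involving the kernel scheduler.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thr_ctx;

    static void thread_body(void)
    {
        printf("user-level thread: running\n");
        swapcontext(&thr_ctx, &main_ctx);      /* "yield" back to main          */
        printf("user-level thread: resumed\n");
    }                                          /* on return, control follows uc_link */

    int main(void)
    {
        char *stack = malloc(64 * 1024);       /* private stack for the ULT     */

        getcontext(&thr_ctx);                  /* initialize the context        */
        thr_ctx.uc_stack.ss_sp = stack;
        thr_ctx.uc_stack.ss_size = 64 * 1024;
        thr_ctx.uc_link = &main_ctx;           /* where to go when body returns */
        makecontext(&thr_ctx, thread_body, 0);

        swapcontext(&main_ctx, &thr_ctx);      /* dispatch the ULT              */
        printf("main: got control back\n");
        swapcontext(&main_ctx, &thr_ctx);      /* resume the ULT                */
        printf("main: ULT finished\n");
        free(stack);
        return 0;
    }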

20
Kernel activity for ULTs
  • The kernel is not aware of thread activity but it
    still manages process activity
  • When a thread makes a blocking system call, the
    whole process is blocked
  • But as far as the thread library is concerned,
    that thread is still in the Running state
  • So thread states are independent of process states

21
Advantages and disadvantages of ULT
  • Advantages
  • Thread switching does not involve the kernel: no
    mode switching
  • Scheduling can be application specific: choose
    the best algorithm for the application
  • Can run on any OS; only needs a thread library
  • Disadvantages
  • Most system calls are blocking for the process,
    so when one thread blocks in the kernel, all
    threads within the process are blocked
  • The kernel can only assign processes to
    processors, so two threads within the same process
    cannot run simultaneously on two processors

22
Kernel-Level Threads (KLT) (ex. Windows NT/2000,
Linux, OS/2)
  • All thread management is done by the kernel
  • No thread library, but an API to the kernel
    thread facility
  • The kernel maintains context information for the
    process and the threads
  • Switching between threads requires the kernel
  • Scheduling is done on a thread basis

23
Advantages and disadvantages of KLT
  • Advantages
  • the kernel can simultaneously schedule many
    threads of the same process on many processors
  • blocking is done on a thread level
  • kernel routines can be multithreaded
  • Disadvantages
  • thread switching within the same process involves
    the kernel: 2 mode switches per thread switch
  • this may result in a significant slowdown
  • however, the kernel may be able to switch threads
    faster than a user-level library

24
Combined ULT/KLT Approaches (Solaris)
  • Thread creation done in the user space
  • Bulk of scheduling and synchronization of threads
    done in the user space
  • The programmer may adjust the number of KLTs
  • Combines the best of both approaches

25
Solaris
  • A process includes the user's address space,
    stack, and process control block
  • User-level threads (threads library)
  • invisible to the OS
  • are the interface for application parallelism
  • Kernel threads
  • the unit that can be dispatched on a processor
  • Lightweight processes (LWP)
  • each LWP supports one or more ULTs and maps to
    exactly one KLT
  • LWPs are visible to applications
  • therefore LWPs are how the user sees the KLTs
  • they constitute a sort of virtual CPU

26
Process 2 is equivalent to a pure ULT approach
(Unix). Process 4 is equivalent to a pure KLT
approach (Win-NT, OS/2). We can specify a
different degree of parallelism (processes 3 and 5).
27
Solaris versatility
  • We can use ULTs when logical parallelism does not
    need to be supported by hardware parallelism (we
    save mode switching)
  • Ex: multiple windows, but only one is active at
    any one time
  • If ULT threads can block, then we can add two or
    more LWPs to avoid blocking the whole application
  • Note the versatility of Solaris, which can
    operate like Windows NT or like conventional UNIX

28
Solaris user-level thread execution (threads
library).
  • Transitions among states are under the control of
    the application
  • they are caused by calls to the thread library
  • It is only when a ULT is in the active state that
    it is attached to a LWP (so that it will run when
    the kernel-level thread runs)
  • A thread may transfer to the sleeping state by
    invoking a synchronization primitive (chap 5) and
    later transfer to the runnable state when the
    event waited for occurs
  • A thread may force another thread to go to the
    stop state

29
Solaris user-level thread states
(attached to a LWP)
30
Decomposition of user-level Active state
  • When a ULT is Active, it is associated with a LWP
    and thus with a KLT
  • Transitions among the LWP states are under the
    exclusive control of the kernel
  • A LWP can be in the following states
  • Running: assigned to a CPU and executing
  • Blocked: the KLT issued a blocking system call
    (but the ULT remains bound to that LWP and remains
    active)
  • Runnable: waiting to be dispatched to a CPU

31
(No Transcript)
32
Solaris Lightweight Process States
LWP states are independent of ULT states (except
for bound ULTs)
33
Multiprocessing Categories of Systems
  • Single Instruction Single Data (SISD)
  • single processor executes a single instruction
    stream to operate on data stored in a single
    memory
  • Single Instruction Multiple Data (SIMD)
  • each instruction is executed on a different set
    of data by the different processors
  • typical application: matrix calculations
  • Multiple Instruction Single Data (MISD)
  • several CPUs operate on a single set of data;
    never implemented, probably not practical
  • Multiple Instruction Multiple Data (MIMD)
  • a set of processors simultaneously execute
    different instruction sequences on different data
    sets

34
(No Transcript)
35
Symmetric Multiprocessing
  • Kernel can execute on any processor
  • Typically, each processor does self-scheduling
    from the pool of available processes or threads

36
(No Transcript)
37
Design Considerations
  • Managing kernel routines
  • they can be simultaneously in execution by
    several CPUs
  • Scheduling
  • can be performed by any CPU
  • Interprocess synchronization
  • becomes even more critical when several CPUs may
    attempt to access the same data at the same time
  • Memory management
  • becomes more difficult when several CPUs may be
    sharing the same memory
  • Reliability or fault tolerance
  • if one CPU fails, the others should be able to
    keep the system functioning

38
Microkernels: a trend
  • Small operating system core
  • Contains only essential operating system functions
  • Many services traditionally included in the
    operating system kernel are now external
    subsystems
  • device drivers
  • file systems
  • virtual memory manager
  • windowing system
  • security services

39
Layered Kernel vs Microkernel
40
Client-server architecture in microkernels
  • the user process is the client
  • the various subsystems are servers

41
Benefits of a Microkernel Organization
  • Uniform interfaces on requests made by a process
  • All services are provided by means of message
    passing
  • Extensibility and flexibility
  • Facilitates the addition and removal of services
    and functionalities
  • Portability
  • Changes needed to port the system to a new
    processor may be limited to the microkernel
  • Reliability
  • Modular design
  • Small microkernel can be rigorously tested

42
More Benefits of a Microkernel Organization
  • Distributed system support
  • Messages are sent without knowing what the target
    machine is
  • Object-oriented operating system
  • Components are objects with clearly defined
    interfaces

43
Microkernel Elements
  • Low-level memory management
  • elementary mechanisms for memory allocation
  • support for more complex strategies such as
    virtual memory
  • Inter-process communication
  • I/O and interrupt management

44
Layered Kernel vs Microkernel
45
Microkernel Performance
  • A microkernel architecture may perform worse than
    a traditional layered architecture
  • Communication between the various subsystems
    causes overhead
  • Message-passing mechanisms are slower than a
    simple system call

46
Important concepts of Chapter 4
  • Threads and how they differ from processes
  • User level threads, kernel level threads and
    combinations
  • Thread states and process states
  • Different types of multiprocessing
  • Microkernel architecture