1
Learning Objectives
  • Understanding the difference between processes
    and threads.
  • Understanding process migration and load
    distribution.
  • Understanding inter-process communication in
    distributed systems.

2
The Kernel
  • The kernel is an (often memory-resident) part of
    the operating system that has access rights to
    ALL system resources.
  • The kernel of a distributed system usually has
    access only to local resources.
  • ? Which OS components should be part of the
    kernel?

3
Kernel Types
  • Monolithic Kernel
    • Unix, VMS
  • Microkernel
    • Chorus (base technology for JavaOS)
    • Mach
    • more suitable for distributed computing

4
Processes
  • A process is a fundamental concept of computer
    system operation: it is a program in execution.
  • ? What are the properties of a process?
  • ? What constitutes a process?
  • ? What does a process need in order to accomplish
    its task?

5
Process Management
  • Process management in distributed systems
    involves
    • process creation
    • process type identification
    • process migration
    • process scheduling
    • process status control
    • process termination and cleanup

6
Types of processes in distributed and parallel
systems.
  • Indivisible processes (the entire process must
    be assigned to a single processor).
  • Divisible processes (a process may be subdivided
    into smaller sub-processes, tasks, or threads).
  • A TIG (Task Interaction Graph) represents the
    relationships between the tasks of a divisible
    process (see box 2.4 p.42).

7
Load Distribution and Process Migration
  • Process migration involves relocation of a
    process to a remote processor (computer).
  • Goals of load distribution
    • balance the job load among computers in the
      system
    • provide better response time for computations

8
Two Components of Load Balancing Algorithms
  • Information Gathering
    • gather information about the loads of other
      processors and select a suitable migration
      partner
    • e.g. identify idle processors, estimate the
      cost of migration to various sites
  • Process Selection
    • select a process to migrate
    • e.g. what is the communication delay for
      migrating a process, and how to accommodate
      differences in heterogeneous systems

9
External Data Representation
  • Heterogeneous systems imply different CPUs,
    different software configurations, and different
    data representations.
  • External Data Representation is a common
    representation of data used in heterogeneous
    systems. It greatly reduces the amount of time
    required to perform cross-platform process
    migration (a sketch follows below).
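
  Note: as an illustration of the idea (not the XDR routines from the
  textbook), Java's DataOutputStream always encodes multi-byte values in a
  fixed big-endian layout, so heterogeneous machines can exchange the bytes
  without agreeing on native formats. The values written are arbitrary.

    import java.io.*;

    public class ExternalFormatDemo {
        public static void main(String[] args) throws IOException {
            // Encode values in a machine-independent (big-endian) byte layout.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(42);
            out.writeDouble(3.14);
            out.writeUTF("process state");

            // Any receiver decodes the bytes the same way, regardless of
            // its native byte order or word size.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            System.out.println(in.readInt() + " " + in.readDouble()
                    + " " + in.readUTF());
        }
    }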

10
Benefits of External Data Representation
  • Study figures 2.6 and 2.7 and solve problem 2.9
    on page 51.

11
Load balancing decision examples
  • Consider the formula for Total_Weight on page 50.
    Use this formula with the Weighting 1 parameters
    (0.25, 0.25, 0.25, 0.25) to determine the best
    location for process migration in exercise 2.6
    (a generic sketch follows below).
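
  Note: the Total_Weight formula itself is in the textbook (p.50) and is not
  reproduced here. The sketch below only assumes the common form of a
  weighted sum of normalized load metrics, with all four weights set to 0.25
  as in Weighting 1; the metric names and site loads are hypothetical.

    public class TotalWeightDemo {
        // Assumed form: Total_Weight = w1*m1 + w2*m2 + w3*m3 + w4*m4,
        // where each metric mi is a normalized load in [0, 1].
        static double totalWeight(double[] w, double[] load) {
            double sum = 0.0;
            for (int i = 0; i < w.length; i++) sum += w[i] * load[i];
            return sum;
        }

        public static void main(String[] args) {
            double[] weights = {0.25, 0.25, 0.25, 0.25};   // Weighting 1
            double[][] sites = {{0.9, 0.4, 0.2, 0.1},      // hypothetical loads
                                {0.3, 0.3, 0.5, 0.2}};     // (CPU, memory, I/O, network)
            for (int s = 0; s < sites.length; s++)
                System.out.println("site " + s + ": "
                        + totalWeight(weights, sites[s]));
            // The site with the lowest total weight would be the preferred
            // migration target under this assumed interpretation.
        }
    }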

12
Threads
  • A thread is a basic unit of process execution.
  • Mini-process (lightweight process)
  • Operating systems allowing for multiple threads
    of execution are referred to as multithreaded
    (Windows NT, 2000, Unix).
  • Languages that support multithreading include C,
    C++, and Java.

13
Thread Properties
  • A process may have multiple threads of control.
  • Each thread has its own program counter, stack,
    register set, child threads, and state.
  • Threads share address space and global variables.

14
Threads versus Processes
  • Per Thread Items
    • PC
    • Stack
    • Register set
    • Child threads
    • State
  • Per Process Items
    • Address space
    • Global variables
    • Open files
    • Child processes
    • Signals/IPC

15
Multithreaded Support
  • POSIX.1c specifies a standard for threads and
    their implementation.
    • POSIX threads (Pthreads)
  • Java support for multithreaded programming
    • Threads are treated like other objects.
    • A Thread must be instantiated with a parameter
      that is a Runnable object (Runnable objects
      define a run() method).
    • See box 2.2 p. 37 for Thread constructor
      prototypes and sample methods; a minimal
      sketch follows below.
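
  Note: a minimal sketch of the Runnable pattern described above; the class
  name and printed text are illustrative, not taken from box 2.2.

    // A Runnable object defines its work in a run() method.
    class Worker implements Runnable {
        public void run() {
            System.out.println("hello from " + Thread.currentThread().getName());
        }
    }

    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            // The Thread constructor takes the Runnable as a parameter.
            Thread t = new Thread(new Worker());
            t.start();   // begin concurrent execution
            t.join();    // wait for the thread to terminate
        }
    }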

16
Multithreaded Code
  • Process versus threads example
  • POSIX threads example
  • Multithreaded programming paradigms
    • specialist paradigm
    • client/server paradigm
    • assembly line paradigm
  • ? Read paragraph 2.2.2 p.34 and answer question
    2.2 on page 49.

17
Processes and Threads in Windows98
  • Windows 95/98, NT, and 2000 utilize
    multithreading.
  • Check the following tool to view processes and
    threads in Windows
  • C:\Program Files\DevStudio\Vc\bin\Win95\Pview95

18
Inter-process Communication
  • Inter-process Communication is an essential
    component of distributed computing.
  • WHY?

19
Common IPC Primitives
  • Messages
  • Pipes
  • Sockets
  • RPCs (Remote Procedure Calls)

20
Message Passing
  • Message passing involves copying data from one
    process's address space to another process's
    address space.
  • Primitives used (a sketch follows below)
    • msgSend (dest, text)
    • msgReceive (source, text)
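
  Note: the primitive names mirror the slide, but the bodies are only a
  single-machine sketch that emulates message passing between two threads,
  with a BlockingQueue standing in for the transport layer.

    import java.util.concurrent.*;

    public class MessagePassingDemo {
        static void msgSend(BlockingQueue<String> dest, String text)
                throws InterruptedException {
            dest.put(text);              // copy the message into the channel
        }

        static String msgReceive(BlockingQueue<String> source)
                throws InterruptedException {
            return source.take();        // blocks until a message arrives
        }

        public static void main(String[] args) throws Exception {
            BlockingQueue<String> channel = new LinkedBlockingQueue<>();
            Thread receiver = new Thread(() -> {
                try {
                    System.out.println("received: " + msgReceive(channel));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            receiver.start();
            msgSend(channel, "hello");
            receiver.join();
        }
    }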

21
Blocking and Non-blocking Primitives
  • Blocking msgSend waits (blocks) for an
    acknowledgment.
  • Blocking msgReceive waits (blocks) for a
    message.
  • Sometimes called synchronous.
  • Non-blocking msgSend sends the message and does
    not wait for an acknowledgment.
  • Non-blocking msgReceive does not wait for a
    message.
  • Sometimes called asynchronous (illustrated
    below).
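
  Note: continuing the BlockingQueue sketch above, the difference shows up in
  the receive call: take() blocks until a message is available, while poll()
  returns immediately (null when nothing has arrived yet).

    import java.util.concurrent.*;

    public class BlockingReceiveDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> channel = new LinkedBlockingQueue<>();

            // Non-blocking receive: returns null, since no message is waiting.
            System.out.println("non-blocking: " + channel.poll());

            channel.put("ack");                  // a sender deposits a message
            // Blocking receive: would have waited here until a message arrived.
            System.out.println("blocking: " + channel.take());
        }
    }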

22
Message Addressing
  • One-to-many (multiple receivers, single sender)
  • Many-to-one (multiple senders, one receiver)
  • Many-to-many (multiple receivers, multiple
    senders)

23
Pipes
  • Communication takes place via a memory buffer.
  • One-to-one type.
  • Unnamed pipes
    • allow for communication between related
      processes (e.g. parent and child)
  • Named pipes
    • allow for communication between unrelated
      processes (see Box 3.3 p.64 for details; a
      sketch follows below)
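
  Note: Java has no portable named-pipe API, but PipedOutputStream and
  PipedInputStream illustrate communication through a memory buffer; here the
  two ends are threads inside one JVM rather than separate processes.

    import java.io.*;

    public class PipeDemo {
        public static void main(String[] args) throws Exception {
            PipedOutputStream writeEnd = new PipedOutputStream();
            PipedInputStream readEnd = new PipedInputStream(writeEnd); // connect ends

            Thread producer = new Thread(() -> {
                try (PrintWriter out = new PrintWriter(writeEnd, true)) {
                    out.println("data through the pipe");
                }
            });
            producer.start();

            try (BufferedReader in =
                         new BufferedReader(new InputStreamReader(readEnd))) {
                System.out.println("consumer read: " + in.readLine());
            }
            producer.join();
        }
    }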

24
Sockets
  • Used for communication across the network.
  • Low-level primitives.
  • Require 6-8 steps (create socket, bind, connect,
    listen, send, receive, shutdown).
  • Supported by Unix and Java (see boxes 3.4 and
    3.5, pp. 70 and 71); a minimal example follows
    below.
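
  Note: a minimal Java echo pair showing the usual step sequence (create,
  bind/listen, connect, send, receive, shutdown); the port number 5000 is
  arbitrary, and both ends run in one JVM only for brevity.

    import java.io.*;
    import java.net.*;

    public class SocketDemo {
        public static void main(String[] args) throws Exception {
            // Server side: create a socket, bind it to a port, and listen.
            try (ServerSocket server = new ServerSocket(5000)) {
                Thread client = new Thread(() -> {
                    // Client side: create a socket and connect, then send/receive.
                    try (Socket s = new Socket("localhost", 5000);
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()))) {
                        out.println("ping");
                        System.out.println("client got: " + in.readLine());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
                client.start();

                // Accept the connection, receive, reply, then shut down.
                try (Socket conn = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(conn.getInputStream()));
                     PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine());
                }
                client.join();
            }
        }
    }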

25
Remote Procedure Calls
  • RPC is a high-level, blocking primitive.
  • Allows communication between remote computers
    using techniques similar to traditional
    procedure calls.
  • Allows complex data structures to be passed as
    parameters (an RMI-based sketch follows below).
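
  Note: the textbook's RPC examples are not reproduced here; as a related
  illustration, Java's RMI lets a client invoke a remote method as if it were
  a local procedure call and pass a complex Serializable object as a
  parameter. All names below are hypothetical, and both ends run in one JVM.

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    public class RmiDemo {
        // A complex data structure passed by value as a parameter.
        static class JobRequest implements Serializable {
            String name = "sort";
            int[] data = {3, 1, 2};
        }

        // Remote interface: calls on it look like ordinary procedure calls.
        interface JobService extends Remote {
            String submit(JobRequest request) throws RemoteException;
        }

        static class JobServiceImpl extends UnicastRemoteObject implements JobService {
            JobServiceImpl() throws RemoteException {}
            public String submit(JobRequest request) {
                return "accepted " + request.name
                        + " (" + request.data.length + " items)";
            }
        }

        public static void main(String[] args) throws Exception {
            JobServiceImpl impl = new JobServiceImpl();
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("jobs", impl);

            // "Client" side: look up the service and invoke submit() as if it
            // were a local procedure (a remote client would receive a stub).
            JobService service = (JobService) registry.lookup("jobs");
            System.out.println(service.submit(new JobRequest()));

            // Unexport the objects so the demo JVM can exit.
            UnicastRemoteObject.unexportObject(impl, true);
            UnicastRemoteObject.unexportObject(registry, true);
        }
    }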

26
RPC features
  • Parameter Type
  • Data Type Support
  • Parameter Marshalling
  • RPC Binding
  • RPC Authentication
  • RPC Semantics

27
RPC Semantics
  • At-Most-Once
  • At-Least-Once
  • Last-of-Many-Call
  • Idempotent

28
Parameter Type
  • Input only
    • passed by value
  • Output only
    • received from the server
  • Input/output
    • call by value/result

29
Data Type Support
  • Allow/disallow certain data types (e.g. pointers
    or complex structures).
  • Limit the number of parameters passed.

30
Parameter Marshalling
  • Packing of parameters to minimize the amount of
    information sent across the network.
  • Use of stubs on the client and server sites
    (a sketch follows below).
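
  Note: a hand-written fragment of what a client stub might do (real stubs
  are generated automatically): it marshals a procedure name and its
  parameters into a compact byte stream that can be sent across the network.
  The procedure name and parameter values are made up for the illustration.

    import java.io.*;

    public class MarshalDemo {
        // Pack the call into a flat byte array; the server stub would
        // unpack it symmetrically and invoke the real procedure.
        static byte[] marshal(String procedure, int x, double y) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeUTF(procedure);
            out.writeInt(x);
            out.writeDouble(y);
            return buf.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            byte[] wire = marshal("computeLoad", 7, 0.5);
            System.out.println("marshalled call occupies " + wire.length + " bytes");
        }
    }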

31
RPC Binding
  • Binding involves
    • port mapping for the server
    • assigning a port handler for the client
    • see fig. 3.10 p.74
  • Binding can be accomplished at
    • compile time
    • link time
    • run time

32
RPC Authentication
  • Client/server authentication might be needed to
    assure the security of data transmission.