Introduction to Parallel Processing - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Introduction to Parallel Processing


1
Introduction to Parallel Processing
  • CS 147
  • November 12, 2004
  • Johnny Lai

2
Computing Elements
[Figure: layers of a computing system, from Applications down through Programming paradigms and the Operating System to the Hardware]
3
Two Eras of Computing
[Figure: the Sequential Era and the Parallel Era on a timeline from roughly 1940 to 2030, each era progressing through Architectures, System Software/Compilers, Applications, and Problem Solving Environments (P.S.Es)]
4
History of Parallel Processing
  • PP can be traced back to a tablet dated around 100 BC.
  • The tablet has 3 calculating positions.
  • We infer that multiple positions were used for reliability and/or speed.

5
Why Parallel Processing?
  • Computation requirements are ever increasing: visualization, distributed databases, simulations, scientific prediction (earthquakes), etc.
  • Sequential architectures are reaching physical limitations (speed of light, thermodynamics).

6
Human Architecture! Growth Performance
[Figure: growth vs. age (5 to 45 and beyond), with separate curves for vertical and horizontal growth]
7
Computational Power Improvement
[Figure: computational power improvement (C.P.I.) vs. number of processors, comparing multiprocessor and uniprocessor systems]
8
Why Parallel Processing?
  • The technology of PP is mature and can be exploited commercially; significant R&D work is going on in the development of tools and environments.
  • Significant developments in networking technology are paving the way for heterogeneous computing.

9
Why Parallel Processing?
  • Hardware improvements like pipelining, superscalar execution, etc., are non-scalable and require sophisticated compiler technology.
  • Vector processing works well only for certain kinds of problems.

10
Parallel Program has needs ...
  • Multiple processes active simultaneously solving a given problem, generally on multiple processors.
  • Communication and synchronization of its processes (this forms the core of parallel programming efforts); see the sketch below.
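
A minimal sketch of these two needs, assuming Python's standard multiprocessing module: several worker processes attack parts of a problem, communicate partial results over a queue, and synchronize at the end with join. The work-splitting scheme and names here are illustrative only.

from multiprocessing import Process, Queue

def worker(rank, chunk, results):
    # Each process solves part of the problem independently...
    partial = sum(chunk)
    # ...and communicates its partial result back over the queue.
    results.put((rank, partial))

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[0::2], data[1::2]]              # split the work
    results = Queue()                              # communication channel
    procs = [Process(target=worker, args=(r, c, results))
             for r, c in enumerate(chunks)]
    for p in procs:
        p.start()                                  # processes active simultaneously
    total = sum(results.get()[1] for _ in procs)   # gather partial results
    for p in procs:
        p.join()                                   # synchronization point
    print("sum =", total)                          # prints: sum = 499500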

11
Processing Elements Architecture
12
Processing Elements
  • Simple classification by Flynn (based on the number of instruction and data streams):
  • SISD - conventional
  • SIMD - data parallel, vector computing
  • MISD - systolic arrays
  • MIMD - very general, multiple approaches
  • Current focus is on the MIMD model, using general purpose processors (no shared memory).

13
SISD A Conventional Computer
  • Speed is limited by the rate at which the computer can transfer information internally.

Ex: PC, Macintosh, Workstations
14
The MISD Architecture
  • More of an intellectual exercise than a practical configuration. A few have been built, but none are commercially available.

15
SIMD Architecture
[Figure: every processor executes the same instruction, e.g. Ci = Ai * Bi, each on its own data elements]
  • Ex: Cray vector-processing machines, Thinking Machines CM
  • Intel MMX (multimedia support); see the sketch below.
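
A small data-parallel sketch of the SIMD idea, assuming NumPy is available: a single vectorized expression applies the same multiply to every element pair, which NumPy (and the hardware's SIMD units, where present) carries out across the whole arrays.

import numpy as np

A = np.arange(8, dtype=np.float64)       # A0..A7
B = np.arange(8, dtype=np.float64) * 2   # B0..B7
C = A * B                                # one operation, many data elements
print(C)                                 # [ 0.  2.  8. 18. 32. 50. 72. 98.]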

16
MIMD Architecture
[Figure: processors A, B and C, each driven by its own instruction stream and operating on its own data input/output streams]
  • Unlike SISD and MISD, an MIMD computer works asynchronously.
  • Shared memory (tightly coupled) MIMD
  • Distributed memory (loosely coupled) MIMD

17
Shared Memory MIMD machine
[Figure: processors A, B and C all connected to a global memory system]
  • Communication: the source PE writes data to global memory and the destination PE retrieves it (see the sketch below).
  • Easy to build; conventional SISD operating systems can easily be ported.
  • Limitations: reliability and expandability. A memory component or any processor failure affects the whole system.
  • Increasing the number of processors leads to memory contention.
  • Ex: Silicon Graphics supercomputers ...
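
A minimal sketch of the shared-memory style, using Python's multiprocessing.Value as a stand-in for a word in the global memory system (the counts and names are illustrative): every process updates the same shared location, and a lock provides the synchronization that keeps the updates from colliding.

from multiprocessing import Process, Value, Lock

def deposit(counter, lock, n):
    for _ in range(n):
        with lock:                   # synchronize access to the shared word
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)          # lives in shared memory, visible to every process
    lock = Lock()
    procs = [Process(target=deposit, args=(counter, lock, 10000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)             # 40000 -- every update made through shared memory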

18
Distributed Memory MIMD
[Figure: processors A, B and C connected to one another by IPC channels]
  • Communication: IPC over a high-speed network.
  • The network can be configured as a tree, mesh, cube, etc.
  • Unlike shared-memory MIMD:
  • easily/readily expandable
  • highly reliable (any CPU failure does not affect the whole system); see the sketch below.
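
A minimal sketch of the distributed-memory style: no variables are shared, each process computes on data it owns and cooperates purely by message passing, with a multiprocessing.Pipe standing in for the IPC channel / network (the data partitioning here is illustrative).

from multiprocessing import Process, Pipe

def node(conn, local_data):
    local_sum = sum(local_data)      # compute on locally owned data only
    conn.send(local_sum)             # communicate the result over the IPC channel
    conn.close()

if __name__ == "__main__":
    data = list(range(100))
    halves = [data[:50], data[50:]]
    parents, children = zip(*(Pipe() for _ in halves))
    procs = [Process(target=node, args=(c, h))
             for c, h in zip(children, halves)]
    for p in procs:
        p.start()
    total = sum(conn.recv() for conn in parents)   # gather over the channels
    for p in procs:
        p.join()
    print(total)                     # 4950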

19
Laws of caution.....
  • The speed of computers is proportional to the square of their cost, i.e. cost ∝ √speed.
  • The speedup achieved by a parallel computer increases as the logarithm of the number of processors, i.e. speedup ∝ log2(no. of processors); a worked example follows below.
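
A quick worked example of the second rule of thumb (speedup ∝ log2 p); the processor counts are arbitrary, and the point is only how pessimistically the conjectured speedup grows.

import math

for p in (2, 4, 16, 64, 256, 1024):
    print(f"{p:5d} processors -> conjectured speedup ~ {math.log2(p):.1f}x")
# Even 1024 processors would yield only about a 10x speedup under this rule.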

20
Caution....
  • Very fast developments in PP and related areas have blurred concept boundaries, causing a lot of terminological confusion: concurrent computing/programming, parallel computing/processing, multiprocessing, distributed computing, etc.

21
  • It's hard to imagine a field that changes as rapidly as computing.

22
Caution....
  • Even well-defined distinctions like shared memory and distributed memory are merging due to new advances in technology.
  • Good environments for development and debugging are yet to emerge.

23
Caution....
  • There are no strict delimiters for contributors to the area of parallel processing: computer architecture, OS, HLLs, databases and computer networks all have a role to play.
  • This makes it a hot topic of research.

24
Types of Parallel Systems
  • Shared Memory Parallel
  • Smallest extension to existing systems
  • Program conversion is incremental
  • Distributed Memory Parallel
  • Completely new systems
  • Programs must be reconstructed
  • Clusters
  • A slow-communication form of distributed memory parallel systems

25
Operating Systems for PP
  • MPP systems having thousands of processors require an OS radically different from current ones.
  • Every CPU needs an OS
  • to manage its resources
  • to hide its details
  • Traditional operating systems are heavy, complex and not suitable for MPP.

26
Operating System Models
  • A framework that unifies the features, services and tasks performed.
  • Three approaches to building an OS:
  • Monolithic OS
  • Layered OS
  • Microkernel based (client-server) OS
  • Suitable for MPP systems
  • Simplicity, flexibility and high performance are crucial for the OS.

27
Monolithic Operating System
  • Better application performance
  • Difficult to extend

Ex: MS-DOS
28
Layered OS
[Figure: application programs in user mode layered above kernel-mode system services (memory, I/O and device management, process scheduling), which sit above the hardware]
  • Easier to enhance
  • Each layer of code accesses the lower-level interface
  • Low application performance

Ex: UNIX
29
Traditional OS
[Figure: the traditional OS as a single kernel-mode layer between user mode and the hardware, built entirely by the OS designer]
30
New trend in OS design
[Figure: a small microkernel running on the hardware in kernel mode, with servers and application programs running above it in user mode]
31
Microkernel/Client Server OS (for MPP Systems)
[Figure: a client application (with a thread library), a file server, a network server and a display server exchanging send/reply messages through the microkernel, which runs directly on the hardware]
  • Tiny OS kernel providing basic primitives (process, memory, IPC)
  • Traditional services become subsystems
  • Monolithic: Application Perf. / Competence
  • OS = Microkernel + User Subsystems (a toy sketch follows below)

Ex: Mach, PARAS, Chorus, etc.
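
A toy sketch of the client-server structure in the figure above: the "file server" runs as an ordinary user-level process and a client reaches it by send/reply messages, with multiprocessing queues standing in for the microkernel's IPC primitive. The server and the request format are purely hypothetical.

from multiprocessing import Process, Queue

def file_server(requests, replies):
    # A user-level subsystem: receives request messages, sends reply messages.
    while True:
        op, name = requests.get()                 # receive a client's "send"
        if op == "shutdown":
            break
        replies.put("contents of " + name)        # reply to the client

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    server = Process(target=file_server, args=(requests, replies))
    server.start()
    requests.put(("read", "motd.txt"))            # client sends a request...
    print(replies.get())                          # ...and blocks for the reply
    requests.put(("shutdown", None))              # ask the server to exit
    server.join()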
32
Few Popular Microkernel Systems
  • MACH, CMU
  • PARAS, C-DAC
  • Chorus
  • QNX
  • (Windows)

33
Reference
  • http://www.cs.mu.oz.au
  • http://www.whatis.com
  • Computer Systems Organization & Architecture, John D. Carpinelli
  • http://www.google.com