1
CSE8380 Parallel and Distributed Processing
Presentation
  • Hong Yue
  • Department of Computer Science and Engineering
  • Southern Methodist University

2
Parallel Processing Multianalysis: Comparing
Parallel Processing with Sequential Processing
3
Why did I select this topic?
4
Outline
  • Definition
  • Characteristics of Parallel Processing and
    Sequential Processing
  • Implementation of Parallel Processing and
    Sequential Processing
  • Performance of Parallel Processing and Sequential
    Processing
  • Parallel Processing Evaluation
  • Major Applications of Parallel Processing

5
Definition
  • Parallel Processing Definition
  • Parallel processing refers to the
    simultaneous use of multiple processors to
    execute the same task in order to obtain faster
    results. These processors either communicate
    with each other to solve a problem or work
    completely independently under the control of
    another processor, which divides the problem
    into a number of parts, distributes the parts
    to the other processors, and collects the
    results from them.

6
Definition .2
  • Sequential Processing Definition
  • Sequential processing refers to a computer
    architecture in which a single processor carries
    out a single task through a series of operations
    performed in sequence. It is also called serial
    processing.

7
Characteristics of Parallel Processing and
Sequential Processing
  • Characteristics of Parallel Processing
  • - Each processor can perform tasks concurrently.
  • - Tasks may need to be synchronized.
  • - Processors usually share resources, such as
    data, disks, and other devices.

8
Characteristics of Parallel Processing and
Sequential Processing .2
  • Characteristics of Sequential Processing
  • - Only a single processor performs the task.
  • - The single processor performs a single task.
  • - The task is executed in sequence.

9
Implementation of parallel processing and
sequential processing
  • Executing a single task
  • In sequential processing, the task is
    executed as a single large task.
  • In parallel processing, the task is divided
    into multiple smaller tasks, and each component
    task is executed on a separate processor, as in
    the sketch below.
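    A minimal sketch of this idea, not taken from the original slides:
    one large task (summing an array) is split into chunks, and each
    chunk is handled by a separate thread so that it can run on its own
    processor. The thread count, array size, and names used here are
    illustrative assumptions.

        /* Illustrative only: divide one large task into smaller tasks. */
        #include <pthread.h>
        #include <stdio.h>

        #define N        1000000
        #define NWORKERS 4                     /* assume 4 processors */

        static double data[N];
        static double partial[NWORKERS];       /* one result slot per worker */

        static void *sum_chunk(void *arg) {
            long id = (long)arg;               /* which chunk this worker owns */
            long lo = id * (N / NWORKERS);
            long hi = (id == NWORKERS - 1) ? N : lo + N / NWORKERS;
            double s = 0.0;
            for (long i = lo; i < hi; i++)
                s += data[i];
            partial[id] = s;                   /* each worker writes only its own slot */
            return NULL;
        }

        int main(void) {
            for (long i = 0; i < N; i++)
                data[i] = 1.0;

            pthread_t tid[NWORKERS];
            for (long t = 0; t < NWORKERS; t++)
                pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

            double total = 0.0;
            for (long t = 0; t < NWORKERS; t++) {
                pthread_join(tid[t], NULL);    /* synchronize with each component task */
                total += partial[t];
            }
            printf("total = %f\n", total);     /* prints 1000000.000000 */
            return 0;
        }

    The same decomposition applies whether the component tasks run as
    threads on one multiprocessor or as processes on separate machines.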

10
Implementation of parallel processing and
sequential processing .2
11
Implementation of parallel processing and
sequential processing .3

12
Implementation of parallel processing and
sequential processing .4
  • Executing multiple independent tasks
  • - In sequential processing, independent tasks
    compete for a single resource. Only task 1 runs
    without having to wait; task 2 must wait until
    task 1 has completed; task 3 must wait until
    tasks 1 and 2 have completed, and so on.

13
Implementation of parallel processing and
sequential processing .5
  • Executing multiple independent tasks
  • - By contrast, in parallel processing, for
    example on a parallel server running on a
    symmetric multiprocessor, more CPU power is
    assigned to the tasks. Each independent task
    executes immediately on its own processor; no
    wait time is involved.

14
Implementation of parallel processing and
sequential processing .6

15
Implementation of parallel processing and
sequential processing .7

16
Performance of parallel processing and sequential
processing
  • Sequential Processing Performance
  • - It takes a long time to execute a task.
  • - It cannot handle very large tasks.
  • - It cannot handle large loads well.
  • - Returns are diminishing.
  • - It becomes increasingly expensive to make a
    single processor faster.

17
Performance of parallel processing and sequential
processing .2
  • Solution
  • Use parallel processing: employ many
    relatively fast, cheap processors in parallel.

18
Performance of parallel processing and sequential
processing .3
  • Parallel Processing Performance
  • - Cheaper, in terms of price and performance.
  • - Faster than equivalently expensive
    uniprocessor machines.
  • - Scalable: the performance of a particular
    program may be improved by running it on a
    larger machine.

19
Performance of parallel processing and sequential
processing .4
  • Parallel Processing Performance
  • - Reliable: in theory, if some processors fail
    we can simply use the others.
  • - Can handle bigger problems.
  • - Processors can communicate with each other
    readily, which is important in many calculations.

20
Parallel Processing Evaluation
  • Several ways to evaluate parallel processing
    performance:
  • - Scale-up
  • - Speedup
  • - Efficiency
  • - Overall solution time
  • - Price/performance

21
Parallel Processing Evaluation .2
  • Scale-up
  • Scale-up is enhanced throughput: it refers to
    the ability of a system that is n times larger
    to perform an n times larger job in the same
    time period as the original system. With added
    hardware, a scale-up formula holds the time
    constant and measures the increased size of the
    job that can be done.

22
Parallel Processing Evaluation .3


23
Parallel Processing Evaluation .4
  • Scale-up measurement formula
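    The formula image is not reproduced in this transcript; a standard
    form, consistent with the example on the next slide, is:

        \text{Scale-up} =
          \frac{\text{work done by the parallel system}}
               {\text{work done by the original system}}
        \quad \text{(measured over the same elapsed time)}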

24
Parallel Processing Evaluation .5
  • For example, if the uniprocessor system can
    process 100 transactions in a given amount of
    time, and the parallel system can process 200
    transactions in the same amount of time, then
    the value of scale-up is 200/100 = 2.
  • A value of 2 indicates ideal linear scale-up:
    twice as much hardware can process twice the
    data volume in the same amount of time.

25
Parallel Processing Evaluation .6
  • Speedup
  • Speedup, the improved response time, is
    defined as the time it takes a program to
    execute sequentially (with one processor)
    divided by the time it takes to execute in
    parallel (with many processors). It can be
    achieved in two ways: breaking up a large task
    into many small fragments, and reducing wait
    time.

26
Parallel Processing Evaluation .7

27
Parallel Processing Evaluation .8
  • Speedup measurement formula
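    The formula image is not reproduced in this transcript; written out
    from the definition on the previous slide:

        \text{Speedup} = \frac{T_{\text{sequential}}}{T_{\text{parallel}}}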

28
Parallel Processing Evaluation .9
  • For example, if the uniprocessor took 40 seconds
    to perform a task, and a parallel system with two
    processors took 20 seconds, then the value of
    speedup is 40 / 20 = 2.
  • A value of 2 indicates ideal linear speedup:
    twice as much hardware can perform the same
    task in half the time.

29
Parallel Processing Evaluation .10
Table 1: Scale-up and Speedup for Different Types
of Workload
30
Parallel Processing Evaluation .11
Figure 7: Linear and actual speedup
31
Parallel Processing Evaluation .12
  • Amdahl's Law
  • Amdahl's Law governs the speedup obtained by
    using parallel processors on a problem, versus
    using only one sequential processor. It attempts
    to give an upper bound on speedup based on the
    nature of the algorithm.

32
Parallel Processing Evaluation .13
  • Amdahl's Law
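    The formula image is not reproduced in this transcript; the standard
    statement of Amdahl's Law, where P is the parallelizable fraction of
    the work and N is the number of processors, is:

        S(N) = \frac{1}{(1 - P) + \frac{P}{N}}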

33
Parallel Processing Evaluation .14

Figure 8: Example speedup, Amdahl vs. Gustafson
34
Parallel Processing Evaluation .15
  • Gustafson's Law
  • If the size of a problem is scaled up as the
    number of processors increases, speedup very
    close to the ideal speedup is possible.
  • That is, the problem size is virtually never
    independent of the number of processors.

35
Parallel Processing Evaluation .16
  • Gustafson's Law
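    The formula image is not reproduced in this transcript; the standard
    statement of Gustafson's Law, where s is the serial fraction of the
    scaled workload and N is the number of processors, gives the scaled
    speedup:

        S(N) = N - s\,(N - 1)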

36
Parallel Processing Evaluation .17
  • Efficiency
  • Relative efficiency can be a useful measure
    of what percentage of a processor's time is
    being spent in useful computation; a standard
    formula is given below.
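    The slide shows no formula in this transcript; the usual definition,
    in terms of the speedup S(N) obtained on N processors, is:

        E(N) = \frac{S(N)}{N} = \frac{T_{\text{sequential}}}{N \cdot T_{\text{parallel}}}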

37
Parallel Processing Evaluation .18

Figure 9: Optimum efficiency vs. actual efficiency
38
Parallel Processing Evaluation .19

Figure 10: Optimum number of processors in actual
speedup
39
Parallel Processing Evaluation .20
  • Problems in Parallel Processing
  • "Parallel processing is like a dog's walking on
    its hind legs. It is not done well, but you are
    surprised to find it done at all."
  • ----Steve Fiddes (University of Bristol)

40
Parallel Processing Evaluation .21
  • Problems in Parallel Processing
  • - Its software is heavily platform-dependent
    and has to be written for a specific machine.
  • - It also requires a different, more difficult
    style of programming, since the software must,
    through its algorithms, divide the work
    appropriately across the processors.

41
Parallel Processing Evaluation .22
  • Problems in Parallel Processing
  • - There isn't a wide array of shrink-wrapped
    software ready for use with parallel machines.
  • - Parallelization is problem-dependent and
    cannot be automated.
  • - Speedup is not guaranteed.

42
Parallel Processing Evaluation .23
  • Solution 1
  • - Decide which architecture is most appropriate
    for a given application.
  • The characteristics of the application should
    drive the decision about how it should be
    parallelized; the form of the parallelization
    should then determine what kind of underlying
    system, both hardware and software, is best
    suited to running the parallelized application.

43
Parallel Processing Evaluation .24
  • Solution 2
  • - Clustering

44
Major Applications of parallel processing
  • Clustering
  • - Clustering is a form of parallel processing
    that takes a group of workstations connected
    together in a local-area network and applies
    middleware to make them act like a parallel
    machine.

45
Major Applications of parallel processing .2
  • Clustering

46
Major Applications of parallel processing .3
  • Clustering
  • - Parallel processing using Linux clusters can
    yield supercomputer performance for some
    programs that perform complex computations or
    operate on large data sets, and it can do so
    using cheap hardware.
  • - Because clustering can use workstations at
    night, when networks are idle, it is an
    inexpensive alternative to dedicated
    parallel-processing machines.

47
Major Applications of parallel processing .4
  • Clustering can work with two separate but
    similar implementations:
  • - Parallel Virtual Machine (PVM) is an
    environment that allows messages to pass between
    computers as they would in an actual parallel
    machine.
  • - The Message Passing Interface (MPI) allows
    programmers to create message-passing parallel
    applications, using parallel input/output
    functions and dynamic process management; a
    short sketch follows below.
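    As an illustration of the message-passing model that both PVM and
    MPI support, here is a minimal MPI sketch in C. It is not part of
    the original slides, and the per-worker computation is a hypothetical
    stand-in: each worker simply sends one integer to the controlling
    process (rank 0).

        /* Minimal MPI message-passing sketch (illustrative only). */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

            if (rank == 0) {
                /* The controlling process collects one value per worker. */
                for (int src = 1; src < size; src++) {
                    int value;
                    MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    printf("rank 0 received %d from rank %d\n", value, src);
                }
            } else {
                /* Each worker does its share of work and reports back. */
                int value = rank * rank;  /* stand-in for real computation */
                MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }

            MPI_Finalize();
            return 0;
        }

    Compiled with an MPI wrapper compiler (for example mpicc) and launched
    with mpirun, every process runs this same program and takes its role
    from its rank.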

48
Reference
  • Andrew Boucher, "Parallel Machines"
  • Stephane Vialle, "Past and Future Parallelism
    Challenges to Encompass Sequential Processor
    Evolution"

49
The end
  • Thank you!