1
Ch. 3: MPI and PVM Programming
  • MPI and PVM
  • Provide message passing libraries
  • MPI
  • Emphasizes performance
  • Specification for a standard
  • PVM
  • Emphasizes flexibility and portability
  • Dynamic task spawning, cluster environment
    changes, etc.

2
3.3 All Pairs Shortest Path Prob.
  • Problem Description
  • Given: A directed graph G = (V, E) and a
    'distance' matrix W
  • Find: For every pair of vertices, find the
    shortest path using edges in G
  • 2-D Array Representation of Graph
  • Indices (i or j) are the vertex IDs
  • Entry (i, j) of the array is the distance from i to j
  • No connection is represented by an infinity entry
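One way to picture the 2-D array representation is the C sketch below, using a hypothetical 4-vertex graph and a large sentinel value named INF to stand in for infinity (no edge).

    #include <stdio.h>

    #define N   4
    #define INF 9999                 /* sentinel standing in for infinity */

    /* w[i][j] = distance from vertex i to vertex j; i and j are vertex IDs */
    int w[N][N] = {
        {   0,   5, INF,  10 },
        { INF,   0,   3, INF },
        { INF, INF,   0,   1 },
        { INF, INF, INF,   0 }
    };

    int main(void)
    {
        for (int i = 0; i < N; i++) {        /* print the distance matrix */
            for (int j = 0; j < N; j++)
                printf("%6d", w[i][j]);
            printf("\n");
        }
        return 0;
    }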

3
Sequential Algorithms
  • Dijkstra's Algorithm
  • Shown in Algorithm 3.1 (p. 52)
  • Each source vertex considered in turn
  • One row processed at a time
  • Floyd's Algorithm
  • Shown in Algorithm 3.2 (p. 53)
  • More of a breadth-first search algorithm
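For reference, the core of sequential Floyd's algorithm is the familiar triple loop. This is a minimal sketch (not the textbook's Algorithm 3.2), assuming a distance matrix w of size N as in the sketch on the previous slide.

    #define N 4                          /* matrix size, as in the sketch above */

    /* Sequential Floyd: on return, w[i][j] holds the length of the shortest
     * path from i to j that may pass through any intermediate vertices. */
    void floyd(int w[N][N])
    {
        for (int k = 0; k < N; k++)                  /* allow k as intermediate */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    if (w[i][k] + w[k][j] < w[i][j])
                        w[i][j] = w[i][k] + w[k][j];
    }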

4
Parallel Algorithms
  • Master-Slave Paradigm
  • Shown in Fig. 3.2 (p. 54)
  • All-Peers Paradigm
  • Shown in Fig. 3.3 (p. 54)
  • Similar Approaches Used for Both Parallel
    Dijkstra and Parallel Floyd
  • Alg. 3.3 and Alg. 3.4 (pp. 55-56)
  • Striping used to partition work
  • Other partitioning methods?
  • Data passing requirements?
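The striping mentioned above can be pictured as each worker taking every P-th row. A minimal sketch, assuming P workers numbered 0..P-1 and a hypothetical per-row routine process_row() (e.g. one Dijkstra source vertex, or one Floyd row update):

    void process_row(int i);                 /* hypothetical; defined elsewhere */

    /* Row striping: worker p handles rows p, p+P, p+2P, ... of the N rows.
     * (Contrast with the adjacent-rows partitioning used in Homework 2.) */
    void do_my_rows(int p, int P, int N)
    {
        for (int i = p; i < N; i += P)
            process_row(i);
    }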

5
Communication Models
  • Point-to-Point Communication
  • Synchronous communication
  • Blocking send or blocking receive
  • Sender waits for an ACK from the receiver before
    continuing to the next instruction
  • Asynchronous communication
  • Nonblocking send or nonblocking receive
  • A buffer holds the data to be sent while the
    sending process continues without waiting for an
    ACK from the receiver
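In MPI terms, MPI_Send is the blocking call (it returns only once the send buffer can safely be reused), while MPI_Isend returns immediately and must be completed later with MPI_Wait. A minimal sketch assuming exactly two processes (rank 0 sends, rank 1 receives):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, data = 42, recvd;
        MPI_Request req;
        MPI_Status  status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Blocking send: returns once the buffer may be reused */
            MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

            /* Nonblocking send: returns immediately; complete it later */
            MPI_Isend(&data, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
            /* ... overlap other work here ... */
            MPI_Wait(&req, &status);
        } else if (rank == 1) {
            MPI_Recv(&recvd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Recv(&recvd, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        }

        MPI_Finalize();
        return 0;
    }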

6
  • Collective Communication
  • Broadcast
  • A sender node sends the same data item to all
    other nodes in the participating group
  • Reduce (and allreduce)
  • A single node collects data (and combines them in
    some manner) from all other nodes
  • Barrier
  • A group of nodes wait until all nodes in the
    group reach a certain point in their programs
  • Used for synchronizing the operations of multiple
    nodes
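The three operations map directly onto MPI calls; a minimal sketch with rank 0 as the root, where every process contributes one int to the reduction:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, value = 0, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Broadcast: rank 0 sends the same value to all other processes */
        if (rank == 0) value = 100;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Reduce: rank 0 collects one int from every process and sums them */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Barrier: nobody proceeds until all processes reach this point */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("value = %d, sum of ranks = %d\n", value, sum);

        MPI_Finalize();
        return 0;
    }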

7
MPI Program for Parallel Dijkstras Algorithm
  • Code shown in pp. 61-64
  • Master program st_dijk_mpi
  • Slave program pdijk_mpi
  • Sequence of Operations in Master and Slave
  • MPI_Init to initialize the MPI environment
  • MPI_Comm_rank to find my process ID
  • MPI_Comm_size to find number of processes
  • MPI_Pack to serialize data to be sent
  • MPI_Send to send the data
  • MPI_Recv, MPI_Unpack, MPI_Finalize
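The textbook code is not reproduced here, but the sequence above can be sketched as a minimal master/slave fragment; NROW, the 1024-byte pack buffer, and the message tags are placeholders, not the textbook's identifiers.

    #include <mpi.h>

    #define NROW 4                                    /* placeholder size  */

    int main(int argc, char *argv[])
    {
        int  rank, nprocs, position;
        int  row[NROW], result[NROW];
        char buf[1024];
        MPI_Status status;

        MPI_Init(&argc, &argv);                       /* initialize MPI    */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* my process ID     */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);       /* # of processes    */

        if (rank == 0) {                              /* master            */
            for (int i = 0; i < NROW; i++) row[i] = i;
            position = 0;
            MPI_Pack(row, NROW, MPI_INT, buf, sizeof(buf),
                     &position, MPI_COMM_WORLD);      /* serialize data    */
            MPI_Send(buf, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(result, NROW, MPI_INT, 1, 1, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {                       /* one slave         */
            MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0,
                     MPI_COMM_WORLD, &status);
            position = 0;
            MPI_Unpack(buf, sizeof(buf), &position, row, NROW,
                       MPI_INT, MPI_COMM_WORLD);      /* deserialize       */
            /* ... run Dijkstra on the received row(s) to fill result ... */
            for (int i = 0; i < NROW; i++) result[i] = row[i];  /* placeholder */
            MPI_Send(result, NROW, MPI_INT, 0, 1, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }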

8
MPI Program for Parallel Floyds Algorithm
  • Code shown in pp. 64-68
  • Implemented as an SPMD program
  • Single program for the master and for all peer
    workers
  • Differentiate code with:
  • if (myTid == MASTERTID) { /* master code */ }
    else { /* worker code */ }
  • Data sent/received in smaller chunks
  • Barrier used to synchronize workers
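A minimal SPMD skeleton of this structure, with MASTERTID as a placeholder for the master's rank (the textbook's actual code differs):

    #include <mpi.h>

    #define MASTERTID 0            /* placeholder: the rank acting as master */

    int main(int argc, char *argv[])
    {
        int myTid, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myTid);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (myTid == MASTERTID) {
            /* master code: distribute chunks of the matrix, collect results */
        } else {
            /* worker code: receive chunks, perform Floyd updates */
        }

        MPI_Barrier(MPI_COMM_WORLD);   /* synchronize workers between phases */

        MPI_Finalize();
        return 0;
    }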

9
PVM Program for Parallel Dijkstras Algorithm
  • Code shown in pp. 74-76
  • Sequence of Operations for master
  • Enroll in PVM using pvm_mytid()
  • Create slave processes using pvm_spawn
  • Send work and receive answers
  • pvm_initsend() /* prepare to send data */
  • pvm_pkint(), ..., pvm_pkint() /* used to serialize
    the data */
  • pvm_send()
  • Un-enroll from PVM using pvm_exit()
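A minimal PVM sketch of the master sequence, sending one chunk of work to the first of NSLAVE slaves; the slave executable name "pdijk_pvm", the sizes, and the message tags are placeholders.

    #include <pvm3.h>

    #define NSLAVE 4
    #define NROW   4

    int main(void)
    {
        int mytid, tids[NSLAVE], data[NROW], answer[NROW];

        mytid = pvm_mytid();                       /* enroll in PVM          */

        /* create the slave processes */
        pvm_spawn("pdijk_pvm", (char **)0, PvmTaskDefault, "", NSLAVE, tids);

        for (int i = 0; i < NROW; i++) data[i] = i;

        pvm_initsend(PvmDataDefault);              /* prepare to send data   */
        pvm_pkint(data, NROW, 1);                  /* serialize the data     */
        pvm_send(tids[0], 1);                      /* send work, tag 1       */

        pvm_recv(tids[0], 2);                      /* receive answer, tag 2  */
        pvm_upkint(answer, NROW, 1);

        pvm_exit();                                /* un-enroll from PVM     */
        return 0;
    }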

10
  • Sequence of Operations for worker
  • pvm_mytid() /* get my process ID */
  • pvm_recv() /* receive data */
  • pvm_bufinfo() /* get sender ID */
  • pvm_upkint() /* deserialize received data */
  • Perform work
  • pvm_initsend() /* prepare to send data */
  • pvm_pkint() /* serialize result data */
  • pvm_send()
  • pvm_exit()
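The matching worker-side sketch, with the same placeholder tags as the master sketch above and the actual computation elided:

    #include <pvm3.h>

    #define NROW 4

    int main(void)
    {
        int mytid, bufid, bytes, tag, master_tid;
        int data[NROW], result[NROW];

        mytid = pvm_mytid();                       /* get my process ID        */

        bufid = pvm_recv(-1, 1);                   /* receive work, any sender */
        pvm_bufinfo(bufid, &bytes, &tag, &master_tid);   /* get sender ID      */
        pvm_upkint(data, NROW, 1);                 /* deserialize the data     */

        for (int i = 0; i < NROW; i++)             /* perform work (elided;    */
            result[i] = data[i];                   /*  placeholder copy only)  */

        pvm_initsend(PvmDataDefault);              /* prepare to send result   */
        pvm_pkint(result, NROW, 1);                /* serialize result data    */
        pvm_send(master_tid, 2);                   /* reply to the master      */

        pvm_exit();
        return 0;
    }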

11
PVM Program for Parallel Floyds Algorithm
  • The PVM framework supports the master-slave
    paradigm most naturally
  • Code shown in pp. 77-82
  • Additional functions
  • pvm_joingroup() /* enroll in a group for
    broadcasting */
  • pvm_barrier() /* synchronize with the other slaves */
  • pvm_bcast() /* broadcast data to the group */
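A minimal sketch of the group calls, assuming a group named "floyd" with NPROC members, where instance 0 broadcasts one row to the rest of the group (names and tags are placeholders):

    #include <pvm3.h>

    #define NPROC 4                         /* placeholder group size */
    #define NROW  4

    int main(void)
    {
        int inum, row[NROW];

        pvm_mytid();                        /* enroll in PVM                 */
        inum = pvm_joingroup("floyd");      /* join the broadcast group      */

        pvm_barrier("floyd", NPROC);        /* wait for all group members    */

        if (inum == 0) {                    /* instance 0 broadcasts a row   */
            for (int i = 0; i < NROW; i++) row[i] = i;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(row, NROW, 1);
            pvm_bcast("floyd", 3);          /* broadcast data to the group   */
        } else {
            pvm_recv(-1, 3);                /* others receive the broadcast  */
            pvm_upkint(row, NROW, 1);
        }

        pvm_lvgroup("floyd");               /* leave the group               */
        pvm_exit();
        return 0;
    }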

12
Homework 1
  • Due Sep. 10, Tuesday, at 2:45 pm
  • Short report, including all code and results
  • Perform Assignment 1 on the UNCC web page
  • Execute using 14 machines in the cluster
  • Obtain a cluster account from Min Gu Lee (email
    bluehope, 2-404, 279-5936)
  • Students with odd IDs should use MPI
  • Students with even IDs should use PVM

13
Homework 2
  • Due Sep. 24, Tuesday, at 2:45 pm
  • Solve the All Pairs Shortest Path Problem using
    adjacent rows partitioning
  • Modify (slightly) code in textbook
  • Solve for randomly generated connected graphs of
    various sizes (include very large graphs)
  • Use 18 (or more) computers in the PC cluster
  • Students with odd IDs should use PVM
  • Students with even IDs should use MPI