Parallel Programming - PowerPoint PPT Presentation

1
Parallel Programming
  • Jonathan Hauenstein
  • CAM mini-course
  • University of Notre Dame
  • May 8, 2008

2
MPI
  • MPI_Abort
  • immediately terminates an MPI program
  • int MPI_Abort(MPI_Comm comm, int errorCode)
  • Fortran
  • MPI_ABORT(COMM, ERRORCODE, IERROR)
    INTEGER COMM, ERRORCODE, IERROR
  • Note: after any unusual termination of your
    program, you should always make sure that all of
    the processes have been successfully stopped
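A minimal usage sketch (this is not the course's abort.c, which is not preserved in this transcript; the missing-input-file condition is hypothetical):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* hypothetical failure condition: a required input file is missing */
  FILE *fp = fopen("input.dat", "r");
  if (fp == NULL) {
    fprintf(stderr, "process %d: cannot open input.dat, aborting\n", rank);
    /* terminates ALL processes in the communicator, not just this one */
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  fclose(fp);
  MPI_Finalize();
  return 0;
}
```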

3
MPI
  • abort.c
  • compile and run

4
MPI
  • sendRecvError.c
  • An error occurs if the count for receiving is
    not large enough to hold the message
  • compile and run

5
Monte Carlo Approximation
  • Approximate π using the Monte Carlo method
  • Randomly select test points in the square [-1,1]
    x [-1,1] and determine if they lie in the unit
    circle
  • π is approximated as
  • (area of [-1,1] x [-1,1]) x (proportion of test
    points that lie in the unit circle)
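The serial core of the method can be sketched as follows (this is an illustration, not the course's pi.c, whose contents are not preserved in this transcript):

```c
#include <stdlib.h>

/* Estimate pi: the area of [-1,1] x [-1,1] is 4, so
   pi ~ 4 * (points inside unit circle) / (total points). */
double estimate_pi(long trials, unsigned int seed) {
  long inside = 0;
  srand(seed);
  for (long i = 0; i < trials; i++) {
    double x = 2.0 * rand() / RAND_MAX - 1.0;  /* uniform in [-1,1] */
    double y = 2.0 * rand() / RAND_MAX - 1.0;
    if (x * x + y * y <= 1.0)
      inside++;
  }
  return 4.0 * inside / trials;
}
```

In the parallel version, each process would run this loop on its own share of the test points and the counts would be combined at the end.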

6
Monte Carlo Approximation
7
  • pi.c

Monte Carlo Approximation
  • samples.c

8
Monte Carlo Approximation
  • compile
  • run
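The slide's actual commands are not preserved in this transcript; with MPICH-style tools a typical sequence would be (file and program names taken from the slides, flags assumed):

```
mpicc pi.c samples.c -o pi
mpirun -np 4 pi
```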

9
Packing and Unpacking
  • MPI can pack data into a single contiguous
    buffer
  • input data can be of any datatype
  • output buffer should be contiguous memory
  • int MPI_Pack(void *in, int inCount, MPI_Datatype
    datatype, void *out, int outSize, int *position,
    MPI_Comm comm)
  • MPI_PACK(INBUF, INCOUNT, DATATYPE, OUTBUF,
    OUTSIZE, POSITION, COMM, IERROR)
    <type> INBUF(*), OUTBUF(*)
    INTEGER INCOUNT, DATATYPE, OUTSIZE, POSITION,
    COMM, IERROR

10
Packing and Unpacking
  • Using the MPI_PACKED datatype, the packed data
    can be transmitted
  • MPI can unpack data from a single contiguous
    buffer
  • int MPI_Unpack(void *in, int inSize, int
    *position, void *out, int outCount, MPI_Datatype
    datatype, MPI_Comm comm)
  • MPI_UNPACK(INBUF, INSIZE, POSITION, OUTBUF,
    OUTCOUNT, DATATYPE, COMM, IERROR)
    <type> INBUF(*), OUTBUF(*)
    INTEGER INSIZE, POSITION, OUTCOUNT, DATATYPE,
    COMM, IERROR
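Conceptually, MPI_Pack copies data into the buffer at the current position and advances position by the packed size, and MPI_Unpack reverses this. That position-tracked copying can be pictured in plain C (an illustration of the idea, not the MPI implementation, which also handles datatype conversion):

```c
#include <assert.h>
#include <string.h>

/* Pack-style copy: append src to buf at *position, advance *position. */
void pack_bytes(const void *src, int bytes, char *buf,
                int bufSize, int *position) {
  assert(*position + bytes <= bufSize);  /* outSize must be large enough */
  memcpy(buf + *position, src, bytes);
  *position += bytes;
}

/* Unpack-style copy: read the next item from buf, advancing *position. */
void unpack_bytes(const char *buf, int bufSize, int *position,
                  void *dst, int bytes) {
  assert(*position + bytes <= bufSize);
  memcpy(dst, buf + *position, bytes);
  *position += bytes;
}
```

Items must be unpacked in the same order they were packed, with position reset to 0 first, mirroring how MPI_Unpack walks a received MPI_PACKED buffer.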

11
  • pack.c

Packing and Unpacking
  • compile and run

12
Packing and Unpacking
  • MPI_Pack_size can be used to calculate how much
    memory to allocate for the output buffer
  • provides an upper bound on how much memory is
    needed
  • int MPI_Pack_size(int count, MPI_Datatype
    datatype, MPI_Comm comm, int *size)
  • MPI_PACK_SIZE(INCOUNT, DATATYPE, COMM, SIZE,
    IERROR)
    INTEGER INCOUNT, DATATYPE, COMM, SIZE, IERROR

13
  • packSize.c

Packing and Unpacking
  • compile and run

14
Create new datatypes
  • Users can define their own datatypes
  • using a user-defined datatype is more efficient
    than repeated packing and unpacking of data
  • creating an MPI_Datatype is a 3-step process
  • Setup the structures that define the datatype
  • Build the MPI struct
  • Commit the datatype
  • user-defined datatypes should be freed after
    they are no longer needed

15
Create new datatypes
  • Step 1: Setup the structures that define the
    datatype
  • The 3 structures that define a datatype are
  • blockLen: number of elements in each block (int)
  • indices: byte displacement of each block
    (MPI_Aint)
  • pointer arithmetic
  • MPI_Address(void *loc, MPI_Aint *address) can be
    used to find the address of loc
  • types: datatype of each block (MPI_Datatype)
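The byte displacements in step 1 are just differences of addresses within the structure. A plain-C sketch of computing them, using pointer subtraction directly so the example needs no MPI calls (the particle struct is hypothetical):

```c
#include <stddef.h>

/* Hypothetical structure to be described as an MPI datatype. */
struct particle {
  int    id;
  double pos[3];
  double mass;
};

/* Fill indices[] with the byte displacement of each block: the same
   quantity that differencing two MPI_Address results would give. */
void particle_displacements(long indices[3]) {
  struct particle p;
  indices[0] = (char *)&p.id   - (char *)&p;
  indices[1] = (char *)&p.pos  - (char *)&p;
  indices[2] = (char *)&p.mass - (char *)&p;
}
```

Note that padding can make these displacements larger than the sum of the preceding members' sizes, which is why they are computed from addresses rather than assumed.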

16
Create new datatypes
Step 2: Build the MPI struct
int MPI_Type_struct(int count, int *blockLen,
MPI_Aint *indices, MPI_Datatype *types,
MPI_Datatype *new_type)
MPI_TYPE_STRUCT(COUNT, BLOCKLEN, INDICES, TYPES,
NEWTYPE, IERROR)
INTEGER COUNT, NEWTYPE, IERROR
INTEGER BLOCKLEN(*), INDICES(*), TYPES(*)
17
Create new datatypes
Step 3: Commit the datatype
int MPI_Type_commit(MPI_Datatype *new_type)
MPI_TYPE_COMMIT(DATATYPE, IERROR)
INTEGER DATATYPE, IERROR
18
Create new datatypes
Clearing a user-defined datatype after it is no
longer needed:
int MPI_Type_free(MPI_Datatype *new_type)
MPI_TYPE_FREE(DATATYPE, IERROR)
INTEGER DATATYPE, IERROR
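Putting the three steps together for a hypothetical structure (a sketch only; the course's definedDatatype.c is not preserved in this transcript and presumably differs):

```c
#include <mpi.h>

struct particle { int id; double pos[3]; double mass; };

void build_particle_type(MPI_Datatype *new_type) {
  struct particle p;

  /* Step 1: the structures that define the datatype */
  int          blockLen[3] = { 1, 3, 1 };
  MPI_Aint     indices[3];
  MPI_Datatype types[3]    = { MPI_INT, MPI_DOUBLE, MPI_DOUBLE };

  MPI_Aint base, addr;
  MPI_Address(&p, &base);
  MPI_Address(&p.id, &addr);    indices[0] = addr - base;
  MPI_Address(&p.pos, &addr);   indices[1] = addr - base;
  MPI_Address(&p.mass, &addr);  indices[2] = addr - base;

  /* Step 2: build the MPI struct */
  MPI_Type_struct(3, blockLen, indices, types, new_type);

  /* Step 3: commit the datatype */
  MPI_Type_commit(new_type);
}

/* ...use the type in sends/receives, then: MPI_Type_free(&type); */
```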
19
Create new datatypes
  • test.h
  • definedTest.c
  • definedDatatype.c

20
Create new datatypes
  • compile and run

21
Trapezoid Rule
Approximate a definite integral using the trapezoid
rule for integration
Trapezoid rule
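The rule in serial form, as a sketch (the course's trapezoid.c is not preproduced in this transcript; the helper names here are illustrative):

```c
/* Trapezoid rule: integral of f over [a,b] with n trapezoids of
   width h = (b-a)/n is approximately
   h * ( f(a)/2 + f(a+h) + ... + f(a+(n-1)h) + f(b)/2 ). */
double trapezoid(double (*f)(double), double a, double b, long n) {
  double h = (b - a) / n;
  double sum = (f(a) + f(b)) / 2.0;
  for (long i = 1; i < n; i++)
    sum += f(a + i * h);
  return h * sum;
}

/* example integrand */
static double square(double x) { return x * x; }
```

In the MPI version, each process would apply this to its own subinterval of [a,b] and the partial sums would be combined.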
22
Trapezoid Rule
  • trapezoid.h
  • trapezoid.c

23
Trapezoid Rule
  • trapezoidMPI.c

24
Trapezoid Rule
  • trapezoidMain.c

25
Trapezoid Rule
  • compile and run

26
mpirun
  • Command line arguments for mpirun can be used to
  • specify the number of processes to use
  • specify which machines to use and how many
    processes should be put on each of the machines
  • specify whether to run a process locally
  • testing option to see where the processes will
    be located
  • more advanced options allow for different
    programs to be run on different architectures if
    you have a heterogeneous system

27
mpirun
  • A machine file
  • lists the machines you want to utilize
  • contains the number of processes to put onto
    each machine (optional)
  • Processes are placed on the machines in the order
    they appear in the file.
  • If more processes are requested than listed in
    the machine file, it cycles back to the top of
    the file.

28
mpirun
An example of a machine file
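The slide's example is not preserved in this transcript; a typical MPICH-era machine file looks like the following, where the hostnames are hypothetical and the optional `:n` suffix gives the number of processes to put on that machine:

```
node01.cluster.nd.edu
node02.cluster.nd.edu:2
node03.cluster.nd.edu:4
```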
29
mpirun
To utilize a machine file:
mpirun -machinefile <name_of_file>
To see the location of the processes, without
running the actual program, utilize the -t
option: mpirun -machinefile <name_of_file> -t
30
mpirun
Example
31
mpirun
To not put a process on the local machine, use
the -nolocal option: mpirun -nolocal
Example
32
Jumpshot
Jumpshot is a Java-based visualization program
used to create a picture so that you can analyze
your parallel code. It uses log files to create
its graphical representation.
To have MPI create the log files: mpicc -mpilog
33
Jumpshot
  • input

Creating log file
34
Jumpshot
Converting clog to slog2
35
Jumpshot
Running jumpshot
36
Jumpshot
  • Instead of using a static distribution for
    trapezoid rule, consider using a dynamic
    distribution of the work
  • 1 head and n - 1 compute processes
  • trapezoids are distributed 100,000 at a time
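A sketch of the head process's control loop under this scheme (hypothetical message tags and layout; trapezoidMPI_dyn.c itself is not preserved in this transcript):

```c
#include <mpi.h>

#define CHUNK 100000L   /* trapezoids handed out per request */

/* Head process: deal out work in CHUNK-sized pieces until none remains,
   then send each compute process a zero count meaning "stop". */
void head(long totalTrap, int numProcs) {
  long next = 0, count;
  double partial, total = 0.0;
  MPI_Status status;

  for (int busy = numProcs - 1; busy > 0; ) {
    /* a compute process reports in with its partial sum (0.0 at first) */
    MPI_Recv(&partial, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
             MPI_COMM_WORLD, &status);
    total += partial;

    count = (next < totalTrap) ? CHUNK : 0;        /* 0 means "stop" */
    if (next + count > totalTrap) count = totalTrap - next;
    MPI_Send(&next, 1, MPI_LONG, status.MPI_SOURCE, 1, MPI_COMM_WORLD);
    MPI_Send(&count, 1, MPI_LONG, status.MPI_SOURCE, 1, MPI_COMM_WORLD);
    next += count;
    if (count == 0) busy--;
  }
  /* total now holds the assembled approximation */
}
```

Jumpshot's timeline makes it easy to compare this against the static distribution: with dynamic chunks, faster machines simply request work more often.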

37
Jumpshot
Compute portion of trapezoidMPI_dyn.c
38
Jumpshot
Control portion of trapezoidMPI_dyn.c
39
Jumpshot
Running jumpshot
40
Thank You!!!
All slides and example files are available
at http://www.nd.edu/jhauenst/parallelcourse