Transcript and Presenter's Notes

Title: An Introduction to MPI (message passing interface)


1
An Introduction to MPI (message passing interface)
2
Organization
  • In general, grid apps can be organized as:
  • Peer-to-peer
  • Manager-worker (one manager-many workers)
  • We will focus on manager-worker.

3
Concepts
  • MPI size: # of processes in grid app
  • MPI rank
  • Individual process number in executing grid app
  • 0..size-1
  • In manager-worker framework,
  • let manager rank = 0
  • and worker ranks be 1..size-1
  • Each individual process can determine its rank.

4
More concepts
  • Blocking vs. nonblocking
  • Blocking: the calling process waits (blocks) until
    the operation completes.
  • Nonblocking: the calling process does not wait (block).
    It initiates the operation but does not wait for
    completion.
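
  • As an illustration, consider MPI's blocking send, MPI_Send, versus its
    nonblocking counterpart, MPI_Isend (MPI_Isend and MPI_Wait are standard
    MPI calls not otherwise covered in these slides; the buffer and
    destination here are illustrative). A minimal sketch:

      #include <mpi.h>

      void sendExamples ( int dest )
      {
          int data = 42;

          //blocking: returns only when the buffer may safely be reused
          MPI_Send( &data, 1, MPI_INT, dest, 0, MPI_COMM_WORLD );

          //nonblocking: starts the send and returns immediately; the buffer
          //must not be modified until the request completes
          MPI_Request req;
          MPI_Isend( &data, 1, MPI_INT, dest, 0, MPI_COMM_WORLD, &req );
          //... useful computation can overlap the communication here ...
          MPI_Wait( &req, MPI_STATUS_IGNORE );  //block until the send completes
      }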

5
Compiling MPI grid apps (on scott)
  • Don't use g++ directly!
  • Use ~ggrevera/lammpi/bin/mpic++
  • Ex.
  • mpic++ -g  -o mpiExample2.exe mpiExample2.cpp
  • mpic++ -O3 -o mpiExample2.exe mpiExample2.cpp

6
Starting, running, and stopping grid apps
  • Before we can run our grid apps, we must first
    start LAM/MPI. Enter the command:
  • lamboot -v
  • An optional lamhosts file may be specified to
    indicate the host computers (along with CPU
    configurations) that participate in the grid
    (see the sample file below).
  • To run our grid app (called mpiExample1.exe), use:
  • mpirun -np 4 ./mpiExample1.exe
  • This creates and runs a 4-process grid app.
  • When you are finished, stop LAM/MPI via:
  • lamhalt
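
  • A lamhosts boot schema is simply a list of host names, one per line.
    The host names below are hypothetical, and the optional cpu=N setting
    (LAM's per-host CPU count) is shown only as an illustration:

      # sample lamhosts file (hypothetical hosts)
      scott.sju.edu cpu=2
      node1.sju.edu
      node2.sju.edu cpu=4

    Then start LAM with the file: lamboot -v lamhosts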

7
Getting started
  • #include <mpi.h>  //do this once for mpi definitions
  • int MPI_Init ( int *pargc, char ***pargv )
  • INPUT PARAMETERS
  • pargc - Pointer to the number of arguments
  • pargv - Pointer to the argument vector

8
Finish up
  • int MPI_Finalize ( void )

9
Other useful MPI functions
  • int MPI_Comm_rank ( MPI_Comm comm,
  •                     int *rank )
  • INPUT PARAMETERS
  • comm - communicator (handle)
  • OUTPUT PARAMETER
  • rank - rank of the calling process in
    group of comm (integer)

10
Other useful MPI functions
  • int MPI_Comm_size ( MPI_Comm comm,
  •                     int *psize )
  • INPUT PARAMETER
  • comm - communicator (handle - must be
    an intracommunicator)
  • OUTPUT PARAMETER
  • psize - number of processes in the group
    of comm (integer)
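
  • Putting MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize
    together, a minimal sketch (not one of the course examples) in which
    each process prints its rank:

      #include <mpi.h>
      #include <stdio.h>

      int main ( int argc, char* argv[] )
      {
          MPI_Init( &argc, &argv );  //must precede all other MPI calls

          int rank = 0, size = 0;
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );  //this process' number
          MPI_Comm_size( MPI_COMM_WORLD, &size );  //total number of processes
          printf( "process %d of %d \n", rank, size );

          MPI_Finalize();  //must follow all other MPI calls
          return 0;
      }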

11
Other useful non-MPI functions
  • #include <unistd.h>
  • int gethostname ( char *name, size_t len )

12
Other useful non-MPI functions
  • #include <sys/types.h>
  • #include <unistd.h>
  • pid_t getpid ( void )
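
  • A small sketch combining gethostname and getpid, handy for tagging a
    process' output with its host and process id:

      #include <stdio.h>
      #include <sys/types.h>
      #include <unistd.h>

      int main ( void )
      {
          char name[ 256 ];
          gethostname( name, sizeof( name ) );  //name of this computer
          printf( "host %s, pid %d \n", name, (int)getpid() );
          return 0;
      }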

13
Example 1
  • This program is a skeleton of a parallel MPI
    application using the one manager/many workers
    framework.
  • http://www.sju.edu/~ggrevera/software/csc4035/mpiExample1.cpp

14
Example 1
  /**
      \file    mpiExample1.cpp
      \brief   MPI programming example 1.
      \author  george j. grevera, ph.d.

      This program is a skeleton of a parallel MPI application using the one
      manager/many workers framework.
      <pre>
      compile:  mpic++ -g  -o mpiExample1.exe mpiExample1.cpp   (debug version)
                mpic++ -O3 -o mpiExample1.exe mpiExample1.cpp   (optimized version)
      run:      lamboot -v                      (to start lam mpi)
                mpirun -np 4 ./mpiExample1.exe  (run in parallel w/ 4 processes)
                lamhalt                         (to stop lam mpi)
      </pre>
  */
  #include <assert.h>
  #include <mpi.h>
  #include <stdio.h>

15
Example 1
  static char mpiName[ 1024 ];  ///< host computer name
  static int  mpiRank;          ///< number of this process (0..n-1)
  static int  mpiSize;          ///< total number of processes (n)
  static int  myPID;            ///< process id
  //----------------------------------------------------------------------

16
Example 1
  //----------------------------------------------------------------------
  /** \brief    main program entry point for example 1. execution begins here.
      \param    argc  count of command line arguments.
      \param    argv  array of command line arguments.
      \returns  0 is always returned.
  */
  int main ( int argc, char* argv[] ) {  //not const because MPI_Init may change
      if (MPI_Init( &argc, &argv ) != MPI_SUCCESS) {
          //actually, we'll never get here but it is a good idea to check.
          // if MPI_Init fails, mpi will exit with an error message.
          puts( "mpi init failed." );
          return 0;
      }
      //get the name of this computer
      gethostname( mpiName, sizeof( mpiName ) );
      //determine rank
      MPI_Comm_rank( MPI_COMM_WORLD, &mpiRank );
      //determine the total number of processes
      MPI_Comm_size( MPI_COMM_WORLD, &mpiSize );
      myPID = getpid();  //and this process' id

17
Example 1
  • printf( "mpi initialized. my rankd,
    sized, pidd. \n",
  • mpiRank, mpiSize, myPID )
  • if (mpiSizelt2)
  • puts("this example requires at least 1
    manager and 1 worker process.")
  • MPI_Finalize()
  • return 0
  • if (mpiRank0) manager()
  • else worker()
  • MPI_Finalize()
  • return 0
  • //------------------------------------------------
    ----------------------

18
Example 1
  //----------------------------------------------------------------------
  /** \brief  manager code for example 1 */
  static void manager ( void ) {
      printf( "manager: my rank=%d, size=%d, pid=%d. \n",
              mpiRank, mpiSize, myPID );
      /** \todo insert manager code here. */
  }
  //----------------------------------------------------------------------

19
Example 1
  //----------------------------------------------------------------------
  /** \brief  worker code for example 1 */
  static void worker ( void ) {
      printf( "worker: my rank=%d, size=%d, pid=%d. \n",
              mpiRank, mpiSize, myPID );
      /** \todo insert worker code here. */
  }
  //----------------------------------------------------------------------

20
More useful MPI functions
  • int MPI_Send ( void *buf, int count, MPI_Datatype dtype,
  •                int dest, int tag, MPI_Comm comm )
  • INPUT PARAMETERS
  • buf   - initial address of send buffer (choice)
  • count - number of elements in send buffer
            (nonnegative integer)
  • dtype - datatype of each send buffer element (handle)
  • dest  - rank of destination (integer)
  • tag   - message tag (integer)
  • comm  - communicator (handle)

21
More useful MPI functions
  • int MPI_Recv ( void *buf, int count, MPI_Datatype dtype,
  •                int src, int tag, MPI_Comm comm, MPI_Status *stat )
  • INPUT PARAMETERS
  • count - maximum number of elements in receive buffer (integer)
  • dtype - datatype of each receive buffer element (handle)
  • src   - rank of source (integer)
  • tag   - message tag (integer)
  • comm  - communicator (handle)
  • OUTPUT PARAMETERS
  • buf   - initial address of receive buffer (choice)
  • stat  - status object (Status), which can be the MPI constant
            MPI_STATUS_IGNORE if the return status is not desired
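
  • A minimal sketch of a matched pair: rank 0 sends one int to rank 1
    (the tag value 0 and the variable names are illustrative):

      #include <mpi.h>

      void exchange ( int mpiRank )
      {
          int value = 0;
          if (mpiRank == 0) {
              value = 123;
              MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );  //to rank 1
          } else if (mpiRank == 1) {
              MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                        MPI_STATUS_IGNORE );  //from rank 0
          }
      }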

22
Defining messages
  struct Message {
      enum {
          OP_WORK,   ///< manager to worker - here's your work assignment
          OP_EXIT,   ///< manager to worker - time to exit
          OP_RESULT  ///< worker to manager - here's the result
      };
      int operation;  ///< one of the above
      /** \todo define operation specific parameters here. */
  };

  • C enums assign successive integers to the given
    constants/symbols.
  • C structs are like Java or C++ objects with only
    the data members and without the
    methods/functions.
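
  • So, in the Message struct above, the enum assigns OP_WORK=0, OP_EXIT=1,
    and OP_RESULT=2:

      struct Message  m;
      m.operation = m.OP_WORK;    //0
      m.operation = m.OP_EXIT;    //1
      m.operation = m.OP_RESULT;  //2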

23
Example 2
  • This program is a skeleton of a parallel MPI
    application using the one manager/many workers
    framework. The process with an MPI rank of 0 is
    considered to be the manager; processes with MPI
    ranks of 1..mpiSize-1 are workers. Messages are
    defined and are sent from the manager to the
    workers.
  • http://www.sju.edu/~ggrevera/software/csc4035/mpiExample2.cpp

24
Example 2
  //----------------------------------------------------------------------
  /** \brief  manager code for example 2. */
  static void manager ( void ) {
      printf( "manager: my rank=%d, size=%d, pid=%d. \n",
              mpiRank, mpiSize, myPID );
      /** \todo insert manager code here. */

      //as an example, send an empty work message to each worker
      struct Message  m;
      m.operation = m.OP_WORK;
      assert( mpiSize > 3 );
      MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 1,
                m.operation, MPI_COMM_WORLD );
      MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 2,
                m.operation, MPI_COMM_WORLD );
      MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 3,
                m.operation, MPI_COMM_WORLD );
  }
  //----------------------------------------------------------------------

25
Example 2
  //----------------------------------------------------------------------
  /** \brief  worker code for example 2. */
  static void worker ( void ) {
      printf( "worker: my rank=%d, size=%d, pid=%d. \n",
              mpiRank, mpiSize, myPID );
      /** \todo insert worker code here. */

      //as an example, receive a message
      MPI_Status  status;
      struct Message  m;
      MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR,
                MPI_ANY_SOURCE, MPI_ANY_TAG,
                MPI_COMM_WORLD, &status );
      printf( "worker %d (%d) received message. \n", mpiRank, myPID );
  }
  //----------------------------------------------------------------------
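
  • One hedged way to fill in the \todo placeholders: a worker loop that
    services messages until the manager sends OP_EXIT. This is a sketch
    built on the Message struct above, not the presenter's code:

      //hypothetical worker loop: handle messages until told to exit
      static void workerLoop ( void ) {
          for ( ; ; ) {
              MPI_Status  status;
              struct Message  m;
              MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 0, MPI_ANY_TAG,
                        MPI_COMM_WORLD, &status );
              if (m.operation == m.OP_EXIT)    break;  //manager says to stop
              //otherwise do the assigned work, then report the result back
              m.operation = m.OP_RESULT;
              MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 0,
                        m.operation, MPI_COMM_WORLD );
          }
      }

    The manager would pair this with one final OP_EXIT send to each worker
    rank 1..mpiSize-1.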

26
More useful MPI functions
  • MPI_Barrier - Blocks until all processes have
    reached this routine.
  • int MPI_Barrier ( MPI_Comm comm )
  • INPUT PARAMETERS
  • comm - communicator (handle)
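
  • For example, a barrier can ensure that all processes start a timed
    phase together. A minimal sketch (MPI_Wtime is the standard MPI
    wall-clock timer; the phase itself is left hypothetical):

      #include <mpi.h>
      #include <stdio.h>

      void timedPhase ( void )
      {
          MPI_Barrier( MPI_COMM_WORLD );  //wait until everyone arrives
          double start = MPI_Wtime();     //all processes start timing together
          //... do the work being measured here ...
          MPI_Barrier( MPI_COMM_WORLD );  //wait for the slowest process
          double elapsed = MPI_Wtime() - start;
          printf( "phase took %f seconds \n", elapsed );
      }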