1
Introduction to MPI
  • Nischint Rajmohan
  • nischint@gatech.edu
  • 5 November 2007

2
  • What can you expect?
  • Overview of MPI
  • Basic MPI commands
  • How to parallelize and execute a program using
    MPI / MPICH2
  • What is outside the scope?
  • Technical details of MPI
  • MPI implementations other than MPICH
  • Hardware-specific optimization techniques

3
Overview of MPI
  • MPI stands for Message Passing Interface
  • What is Message Passing Interface?
  • It is not a programming language or compiler
    specification
  • It is not a specific implementation or product
  • MPI is a specification for the developers and
    users of message passing libraries. By itself, it
    is NOT a library - but rather the specification
    of what such a library should be.
  • The specification lets you create libraries that
    allow you to solve problems in parallel, using
    message passing to communicate between processes
  • It provides bindings for widely used programming
    languages like Fortran and C/C++

4
Background on MPI
  • Early vendor systems (Intel's NX, IBM's EUI,
    TMC's CMMD) were not portable (or very capable)
  • Early portable systems (PVM, p4, TCGMSG,
    Chameleon) were mainly research efforts
  • Did not address the full spectrum of issues
  • Lacked vendor support
  • Were not implemented at the most efficient level
  • The MPI Forum, organized in 1992 with broad
    participation by:
  • vendors: IBM, Intel, TMC, SGI, Convex, Meiko
  • portability library writers: PVM, p4
  • users: application scientists and library
    writers
  • finished in 18 months
  • Library standard defined by a committee of
    vendors, implementers, and parallel programmers

5
Reasons for using an MPI standard
  • Standardization - MPI is the only message passing
    library which can be considered a standard. It is
    supported on virtually all HPC platforms.
    Practically, it has replaced all previous message
    passing libraries.
  • Portability - There is no need to modify your
    source code when you port your application to a
    different platform that supports (and is
    compliant with) the MPI standard.
  • Performance Opportunities - Vendor
    implementations should be able to exploit native
    hardware features to optimize performance.
  • Functionality - Over 115 routines are defined in
    MPI-1 alone.
  • Availability - A variety of implementations are
    available, both vendor and public domain.

6
MPI Operation
(Diagram: a communicator and the processes it contains)
7
MPI Programming Model
8
MPI Library
9
Environment Management Routines
  • MPI_Init
  • Initializes the MPI execution environment. This
    function must be called in every MPI program,
    must be called before any other MPI functions and
    must be called only once in an MPI program. For C
    programs, MPI_Init may be used to pass the
    command line arguments to all processes, although
    this is not required by the standard and is
    implementation dependent.
  • C: MPI_Init(&argc, &argv)
  • Fortran: MPI_INIT(ierr)

10
Environment Management Routines contd.
  • MPI_Comm_Rank
  • Determines the rank of the calling process within
    the communicator. Initially, each process will be
    assigned a unique integer rank between 0 and
    number of processors - 1 within the communicator
    MPI_COMM_WORLD. This rank is often referred to as
    a task ID. If a process becomes associated with
    other communicators, it will have a unique rank
    within each of these as well.
  • C: MPI_Comm_rank(comm, &rank)
    Fortran: MPI_COMM_RANK(comm, rank, ierr)

11
Environment Management Routines contd.
  • MPI_Comm_size
  • Determines the number of processes in the group
    associated with a communicator. Generally used
    within the communicator MPI_COMM_WORLD to
    determine the number of processes being used by
    your application.
  • C: MPI_Comm_size(comm, &size)
    Fortran: MPI_COMM_SIZE(comm, size, ierr)
  • MPI_Finalize
  • Terminates the MPI execution environment. This
    function should be the last MPI routine called in
    every MPI program - no other MPI routines may be
    called after it.
  • C: MPI_Finalize()
    Fortran: MPI_FINALIZE(ierr)

12
MPI Sample Program: Environment Management
Routines
/* In C */
/* the mpi include file */
#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, size;
    /* Initialize MPI */
    MPI_Init( &argc, &argv );
    /* How many processors are there? */
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    /* What processor am I (what is my rank)? */
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    printf( "I am %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}
! In Fortran
      program main
! the mpi include file
      include 'mpif.h'
      integer ierr, rank, size
! Initialize MPI
      call MPI_INIT( ierr )
! How many processors are there?
      call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
! What processor am I (what is my rank)?
      call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
      print *, 'I am ', rank, ' of ', size
      call MPI_FINALIZE( ierr )
      end
13
Point to Point Communication Routines
  • MPI_Send
  • Basic blocking send operation. Routine returns
    only after the application buffer in the sending
    task is free for reuse. Note that this routine
    may be implemented differently on different
    systems. The MPI standard permits the use of a
    system buffer but does not require it.
  • C: MPI_Send(buf, count, datatype, dest, tag, comm)
  • Fortran: MPI_SEND(buf, count, datatype, dest, tag, comm, ierr)

14
Point to Point Communication Routines contd.
  • MPI_Recv
  • Receive a message and block until the requested
    data is available in the application buffer in
    the receiving task.
  • C: MPI_Recv(buf, count, datatype, source, tag, comm, status)
  • Fortran: MPI_RECV(buf, count, datatype, source, tag, comm, status, ierr)

15
MPI Sample Program: Send and Receive
/* In C */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int my_PE_num, numbertoreceive, numbertosend = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);

    if (my_PE_num == 0) {
        MPI_Recv(&numbertoreceive, 1, MPI_INT, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("Number received is %d\n", numbertoreceive);
    }
    else
        MPI_Send(&numbertosend, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

! In Fortran
      program shifter
      implicit none
      include 'mpif.h'
      integer my_PE_num, errcode, numbertoreceive, numbertosend
      integer status(MPI_STATUS_SIZE)
      call MPI_INIT(errcode)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_PE_num, errcode)
      numbertosend = 42
      if (my_PE_num .EQ. 0) then
         call MPI_RECV(numbertoreceive, 1, MPI_INTEGER, MPI_ANY_SOURCE,
     &        MPI_ANY_TAG, MPI_COMM_WORLD, status, errcode)
         print *, 'Number received is ', numbertoreceive
      endif
      if (my_PE_num .EQ. 1) then
         call MPI_SEND(numbertosend, 1, MPI_INTEGER, 0, 10,
     &        MPI_COMM_WORLD, errcode)
      endif
      call MPI_FINALIZE(errcode)
      end
16
Collective Communication Routines
  • MPI_Barrier
  • Creates a barrier synchronization in a group.
    Each task, when reaching the MPI_Barrier call,
    blocks until all tasks in the group reach the
    same MPI_Barrier call.
  • C: MPI_Barrier(comm)
    Fortran: MPI_BARRIER(comm, ierr)
  • MPI_Bcast
  • Broadcasts (sends) a message from the process
    with rank "root" to all other processes in the
    group (see the sketch after this list)
  • C: MPI_Bcast(buffer, count, datatype, root, comm)
    Fortran: MPI_BCAST(buffer, count, datatype, root, comm, ierr)
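
As an illustration only (this sketch is not part of the original slides), a
minimal C program combining the two routines: rank 0 broadcasts an integer to
every process, and all processes synchronize at a barrier before printing.

/* Minimal MPI_Bcast / MPI_Barrier sketch (illustrative, not from the slides) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 100;                      /* only the root sets the value */

    /* after the broadcast, every rank holds root's value */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* wait until all ranks have received the value, then print */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Rank %d sees value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}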

17
Sources of Deadlocks
  • Send a large message from process 0 to process 1
  • If there is insufficient storage at the
    destination, the send must wait for the user to
    provide the memory space (through a receive)
  • What happens when both processes do their send
    first and their receive second? (See the sketch
    below.)
  • This is called unsafe because it depends on the
    availability of system buffers
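
The code shown on this slide is not reproduced in the transcript; the
following C sketch is a reconstruction of the unsafe pattern being described:
each process calls MPI_Send before MPI_Recv, so if the message is too large
for the system buffers, both sends block and neither process reaches its
receive.

/* Reconstructed unsafe exchange between ranks 0 and 1 (run with -n 2) */
#include "mpi.h"

#define N 1000000             /* large enough to exceed system buffering */

int main(int argc, char **argv)
{
    static double sendbuf[N], recvbuf[N];
    int rank, other;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;         /* assumes exactly two processes */

    /* both ranks send first: each send may block waiting for a receive
       that is never posted, producing a deadlock */
    MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &status);

    MPI_Finalize();
    return 0;
}

One common fix is to reverse the ordering on one of the ranks, so that every
send is matched by a receive that has already been posted.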

18
MPICH MPI Implementation
  • MPICH is a freely available, portable
    implementation of MPI
  • MPICH acts as the middleware between the MPI
    parallel library API and the hardware environment
  • MPICH builds are available for Unix-based systems
    and also as an installer for Windows.
  • MPICH2 is the latest version of the implementation
  • http://www-unix.mcs.anl.gov/mpi/mpich/

19
MPI Program Compilation (Unix)
  • Fortran
  • mpif90 -c hello_world.f
  • mpif90 -o hello_world hello_world.o
  • C
  • mpicc -c hello_world.cc
  • mpicc -o hello_world hello_world.o

20
MPI Program Execution
  • Fortran/C
  • mpiexec -n 4 ./hello_world

mpiexec is the command for execution in a parallel
environment
-n is used for specifying the number of processors
mpiexec -help
This command lists all the options
available for running MPI programs
If you don't have mpiexec installed on your
system, use mpirun with -np instead of -n
21
MPI Program Execution contd.
  • mpiexec -machinefile hosts -n 7 ./hello_world

The -machinefile flag allows you to specify a file
containing the host names of the processors you
want to use
Sample hosts file:
master
master
node2
node3
node3
node5
node6
22
MPICH on Windows
  • Installing MPICH2
  • Download the Win32-IA32 version of MPICH2 from
  • http://www-unix.mcs.anl.gov/mpi/mpich2/
  • Run the executable, mpich2-1.0.3-1-win32-ia32.msi
    (or a more recent version). Most likely it will
    fail with an error because the required .NET
    Framework is not installed

3. To download version 1.1 of the .NET Framework, use this link:
http://www.microsoft.com/downloads/details.aspx?FamilyId=262D25E3-F589-4842-8157-034D1E7CF3A3&displaylang=en
23
MPICH on Windows contd.
  • Install the .NET Framework program
  • Install the MPICH2 executable. Write down the
    passphrase for future reference. The passphrase
    must be consistent across a network.
  • Add the MPICH2 path to Windows
  • Right click My Computer and pick properties
  • Select the Advanced Tab
  • Select the Environment Variables button
  • Highlight the Path variable under System
    Variables and click Edit. Add C:\MPICH2\bin to
    the end of the list; make sure to separate this
    from the prior path with a semicolon.
  • Run the example executable to ensure correct
    installation.
  • mpiexec -n 2 cpi.exe
  • If installed on a dual processor machine, verify
    that both processors are being utilized by
    examining CPU Usage History in the Windows Task
    Manager.
  • The first time mpiexec is run in each session, it
    will ask for a username and password. To prevent
    being asked for this in the future, this
    information can be encrypted into the Windows
    registry by running
  • mpiexec -register
  • The username and password are your Windows XP
    logon information.

24
MPICH on Windows contd.
  • Compilation (Fortran)
  • ifort /fpp /include:C:/MPICH2/INCLUDE
    /names:uppercase /iface:cref /libs:static
    /threads /c hello_world.f
  • The above command will compile the parallel
    program and create a .obj file.
  • ifort -o hello_world.exe hello_world.obj
    C:/MPICH2/LIB/cxx.lib C:/MPICH2/LIB/mpi.lib
    C:/MPICH2/LIB/fmpich2.lib C:/MPICH2/LIB/fmpich2s.lib
    C:/MPICH2/LIB/fmpich2g.lib
  • The above command links the object file and
    creates the executable. The executable is run in
    the same way as specified before, using the
    mpiexec command

25
THE END
  • Useful Sources
  • http://www.llnl.gov/computing/tutorials/mpi/ (LLNL)
  • http://www-unix.mcs.anl.gov/mpi/
  • CS 6290 - High Performance Computing
    Architecture
  • For more assistance, you can contact:

Nischint Rajmohan, MK402, 404-894-6301, nischint@gatech.edu