Simple Interface for Polite Computing (SIPC)
Travis Finch
St. Edward's University, Department of Computer Science, School of Natural Sciences, Austin, TX
Abstract
As computing labs rely increasingly on shared commodity workstations to perform parallel computations, load balancing cannot be ignored. Parallel applications are by nature resource intensive, and load balancing techniques often do not take into account events external to the application. This can disrupt other users sharing the same computer. High-Performance Computing (HPC) also presents a steep learning curve for novice programmers, often causing load balancing to be ignored entirely. This paper presents the Simple Interface for Polite Computing (SIPC), a mechanism that allows external load balancing to be easily integrated into programs where polite resource sharing is necessary. While SIPC can be used with any program, the focus here is its integration with embarrassingly parallel applications that follow a dynamic scheduling paradigm.
  • Background
  • Polite computing allows resource-intensive applications to run on a shared workstation
  • The HPC application does not excessively consume resources in the presence of other users
  • Other users remain productive, and starvation of other processes is reduced
  • In his paper "Polite Parallel Computing", Cameron Rivers integrated a simple approach to external load balancing into mpiBLAST
  • The algorithm allowed an application to become aware of its surroundings and scale back when needed to distribute computing power
  • The method is effective, but it introduces unnecessary overhead and is difficult for novice HPC programmers to use
  • SIPC was built around three goals
  • A self-contained library easily utilized by novice programmers
  • Results and Future Work
  • The inclusion of SIPC into a host application proved to have very little overhead
  • On average, SIPC increased execution time by 1%
  • This work represents a beginning in the development of tools designed to improve the efficiency of code written by beginning HPC programmers
  • Solution
  • In the HPC community, the speed of program execution is often the only measure of success.
  • Other relevant factors that are ignored include
    - Time to develop the solution
    - Additional lines of code compared to the serial implementation
    - The cost per line of code
  • Studies show that HPC development is significantly more expensive than serial development
  • HPC applications that use MPI often contain twice as many lines of code as their serial counterparts
  • Tools are needed that allow advanced features such as external load balancing to be injected into a parallel application with minimal effort from the novice programmer

Implementation
A goal of SIPC was to obtain the CPU utilization and the number of users currently logged onto the system without creating a large processing footprint.
  • Obtaining the CPU load is accomplished by opening the /proc/loadavg file and retrieving the first value in it with the fopen and fscanf functions
  • Counting the number of users is achieved by executing the users shell command and capturing the output stream via the popen and fgets functions (see the sketch below)
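The poster names only the C calls involved; the listing below is a minimal sketch, not the actual SIPC source, of how the two measurements could be gathered with fopen/fscanf and popen/fgets. The function names read_load_average and count_users are illustrative, not part of the SIPC API.

  #include <stdio.h>
  #include <string.h>

  /* Read the 1-minute load average, the first field of /proc/loadavg. */
  double read_load_average(void)
  {
      double load = 0.0;
      FILE *fp = fopen("/proc/loadavg", "r");
      if (fp != NULL) {
          fscanf(fp, "%lf", &load);  /* first value is the 1-minute average */
          fclose(fp);
      }
      return load;
  }

  /* Count logged-in users by running the `users` command and counting
     the space-separated login names on its single output line. */
  int count_users(void)
  {
      char line[1024];
      int count = 0;
      FILE *pipe = popen("users", "r");
      if (pipe == NULL)
          return 0;
      if (fgets(line, sizeof(line), pipe) != NULL) {
          for (char *tok = strtok(line, " \t\n"); tok != NULL; tok = strtok(NULL, " \t\n"))
              count++;
      }
      pclose(pipe);
      return count;
  }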
  • For the application to consider backing off, the system must be under high load and at least two users must be logged into the system
  • A condition check prevents the application from sleeping on a semi-high load when few users are logged in
  • A final condition check determines whether the system load is high enough to warrant sleeping; if it is, the application sleeps for a predetermined amount of time
  • If the target application sleeps, the timing mechanism used to schedule load checks is reset so that another check occurs soon
  • If the target application does not sleep, the duration between load checks is doubled (a sketch of this back-off logic follows)
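The poster describes this back-off behavior but not its code; the following sketch, building on the two helpers above, shows one way the described logic could fit together. The threshold and timing constants (HIGH_LOAD, MIN_USERS, SLEEP_SECONDS, MIN_INTERVAL, MAX_INTERVAL) and the function name sipc_check are assumptions for illustration only.

  #include <unistd.h>

  /* Helpers sketched above. */
  double read_load_average(void);
  int count_users(void);

  /* Illustrative thresholds and timings; the poster does not publish the actual values. */
  #define HIGH_LOAD     1.5   /* load average treated as "high"              */
  #define MIN_USERS     2     /* at least one other user must be present     */
  #define SLEEP_SECONDS 5     /* predetermined back-off sleep                */
  #define MIN_INTERVAL  1     /* seconds until the next check after sleeping */
  #define MAX_INTERVAL  64    /* cap on the doubled check interval           */

  /* Called at each scheduled load check; returns the number of seconds
     until the next check should run. */
  int sipc_check(int current_interval)
  {
      double load  = read_load_average();
      int    users = count_users();

      /* Sleep only when the load is high AND enough users are logged in;
         a semi-high load with few users does not put the application to sleep. */
      if (load >= HIGH_LOAD && users >= MIN_USERS) {
          sleep(SLEEP_SECONDS);   /* yield the CPU to other users for a while */
          return MIN_INTERVAL;    /* check again soon after sleeping          */
      }

      /* No sleep was needed, so double the time until the next load check. */
      int next = current_interval * 2;
      return (next > MAX_INTERVAL) ? MAX_INTERVAL : next;
  }

Resetting the interval after a sleep keeps a loaded machine under frequent observation, while doubling it otherwise keeps the monitoring footprint small on an idle machine, consistent with the low-overhead result reported above.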
References
Rivers, Cameron. "Polite Parallel Computing." Journal of Computing Sciences in Colleges 21 (2006): 190-195.
Hochstein, Lorin, Jeff Carver, Forrest Shull, Sima Asgari, Victor Basili, Jeffrey K. Hollingsworth, and Marvin V. Zelkowitz. "Parallel Programmer Productivity: A Case Study of Novice Parallel Programmers." Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (2005).
Darling, A., L. Carey, and W. Feng. "The Design, Implementation, and Evaluation of mpiBLAST." 4th International Conference on Linux Clusters (2003). <http://www.mpiblast.org/downloads/pubs/cwce03.pdf>.
"MPI-POVRay." 14 Nov. 2006 <http://www.verrall.demon.co.uk/mpipov/>.
"The Mandelbrot Set." 30 Mar. 2007 <http://www.cs.mu.oz.au/sarana/mandelbrot_webpage/mandelbrot/mandelbrot.html>.
Acknowledgements
Faculty Advisor: Dr. Sharon Weber
This work was conducted on a computing cluster funded by a grant from the Department of Defense.