Supercomputing the Next Century

Transcript and Presenter's Notes

1
Supercomputing the Next Century
  • Talk to the Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Potsdam, Germany
  • June 15, 1998

2
NCSA is the Leading Edge Site for the National
Computational Science Alliance
www.ncsa.uiuc.edu
3
The Alliance Team Structure
  • Leading Edge Center
  • Enabling Technology
    • Parallel Computing
    • Distributed Computing
    • Data and Collab. Computing
  • Partners for Advanced Computational Services
    • Communities
    • Training
    • Technology Deployment
    • Comp. Resources Services
  • Strategic Industrial and Technology Partners
  • Application Technologies
    • Cosmology
    • Environmental Hydrology
    • Chemical Engineering
    • Nanomaterials
    • Bioinformatics
    • Scientific Instruments
  • EOT
    • Education
    • Evaluation
    • Universal Access
    • Government

4
Alliance'98 Hosts 1000 Attendees, With Hundreds
On-Line!
5
The Experts Are Not Always Right! Seek a New
Vision and Stick to It
"We do not believe that workstation-class systems
(even if they offer more processing power than
the old Cray Y-MPs) can become the mainstay of a
National center. At $500K purchase prices,
departments should be able to afford the SGI
Power Challenge systems on their own. For these
reasons, we wonder whether the role of the
proposed SGI systems in NCSA's plan might be
different from that of a production machine."
-- Program Plan Review Panel, Feb. 1994
6
The SGI Power Challenge Array as NCSA's
Production Facility for Four Years
7
The NCSA Origin Array Doubles Again This Month
8
The Growth Rate of the National Capacity is
Slowing Down Again
Source: Quantum Research; Lex Lane, NCSA
9
Major Gap Developing in National Usage at NSF
Supercomputer Centers
70% Annual Growth Rate is the Historical Rate of
National Usage Growth. It is Also Slightly
Greater Than the Rate of Moore's Law, So Lesser
Growth Means Desktops Gain on Supers
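(For scale: 70% annual growth doubles capacity roughly every ln 2 / ln 1.7 ≈ 1.3 years, about 16 months, slightly faster than the 18-month doubling commonly quoted for Moore's Law.)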
10
Monthly National Usage at NSF Supercomputer
Centers
Capacity Level NSF Proposed at 3/97 NSB Meeting
11
Accelerated Strategic Computing Initiative is
Coupling DOE DP Labs to Universities
  • Access to ASCI Leading Edge Supercomputers
  • Academic Strategic Alliances Program
  • Data and Visualization Corridors

http://www.llnl.gov/asci-alliances/centers.html
12
Comparison of the DoE ASCI and the NSF PACI
Origin Array Scale Through FY99
Los Alamos Origin System FY99: 5,000-6,000 processors
NCSA Proposed System FY99: 6x128 and 4x64 = 1,024 processors
www.lanl.gov/projects/asci/bluemtn/Hardware/schedule.html
13
High-End Architecture 2000: Scalable Clusters of
Shared Memory Modules
Each is 4 Teraflops Peak
  • NEC SX-5
    • 32 x 16 vector processor SMP
    • 512 Processors
    • 8 Gigaflop Peak Processor
  • IBM SP
    • 256 x 16 RISC Processor SMP
    • 4096 Processors
    • 1 Gigaflop Peak Processor
  • SGI Origin Follow-on
    • 32 x 128 RISC Processor DSM
    • 4096 Processors
    • 1 Gigaflop Peak Processor
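(Peak arithmetic: 32 x 16 x 8 GF = 256 x 16 x 1 GF = 32 x 128 x 1 GF = 4096 GF, i.e. roughly 4 Teraflops in each configuration.)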

14
Emerging Portable Computing Standards
  • HPF
  • MPI
  • OpenMP
  • Hybrids of MPI and OpenMP (see the sketch below)
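A minimal hybrid MPI + OpenMP sketch in C (an illustration only, not an Alliance code): one MPI process per SMP node exchanges results across the cluster while OpenMP threads share the work inside each node. The problem size N and the summation loop are placeholders.

/* Hybrid MPI + OpenMP sketch (illustrative): MPI processes span the
   cluster, OpenMP threads share memory within each SMP node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000  /* illustrative problem size */

int main(int argc, char **argv)
{
    int rank, nprocs;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each MPI process sums its share of the terms using OpenMP threads. */
    #pragma omp parallel for reduction(+:local)
    for (int i = rank; i < N; i += nprocs)
        local += 1.0 / (double)(i + 1);

    /* Combine the per-node partial sums across the cluster. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (MPI processes = %d, threads per process <= %d)\n",
               global, nprocs, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

Compile with an MPI wrapper and OpenMP enabled (e.g. mpicc -fopenmp) and launch one process per SMP node.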

15
Top500 Shared Memory Systems
Vector Processors vs. Microprocessors
TOP500 Reports: http://www.netlib.org/benchmark/top500.html
16
The Exponential Growth of NCSA's SGI Shared
Memory Supercomputers
Doubling Every Nine Months!
Chart series: SN1, Origin, Power Challenge, Challenge
17
Extreme and Large PIs' Dominant Usage of the NCSA
Origin, January through April 1998
18
Disciplines Using the NCSA Origin 2000: CPU-Hours
in March 1998
19
Users, NCSA, SGI, and Alliance Parallel Team
Working to Make Better Scaling Routine
Source: Mitas, Hayes, Tafti, Saied, Balsara,
NCSA; Wilkins, OSU; Woodward, U Minn; Freeman,
Northwestern. NCSA 128-processor Origin, IRIX 6.5

20
Solving 2D Navier-Stokes Kernel - Performance
of Scalable Systems
Preconditioned Conjugate Gradient Method With
Multi-level Additive Schwarz Richardson
Pre-conditioner (2D 1024x1024)
Source: Danesh Tafti, NCSA
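For context, a serial C skeleton of the preconditioned conjugate gradient iteration this kernel is built on. apply_A and apply_M are assumed placeholders for the discrete operator and the multi-level additive Schwarz / Richardson preconditioner; the toy identity operators and main exist only to make the sketch self-contained. This is not the NCSA code.

/* Preconditioned conjugate gradient skeleton (illustrative, serial).
   apply_A: matrix-vector product for the discretized operator.
   apply_M: preconditioner solve (e.g. additive Schwarz / Richardson). */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*op_fn)(const double *in, double *out, int n);

static double dot(const double *x, const double *y, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

/* Solve A x = b to tolerance tol; returns the iteration count. */
static int pcg(op_fn apply_A, op_fn apply_M, const double *b, double *x,
               int n, int max_it, double tol)
{
    double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
    double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);
    int it;

    apply_A(x, q, n);                               /* r = b - A x */
    for (int i = 0; i < n; i++) r[i] = b[i] - q[i];
    apply_M(r, z, n);                               /* z = M^{-1} r */
    for (int i = 0; i < n; i++) p[i] = z[i];
    double rz = dot(r, z, n);

    for (it = 0; it < max_it && sqrt(dot(r, r, n)) > tol; it++) {
        apply_A(p, q, n);
        double alpha = rz / dot(p, q, n);
        for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        apply_M(r, z, n);                           /* precondition new residual */
        double rz_new = dot(r, z, n);
        double beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
    }
    free(r); free(z); free(p); free(q);
    return it;
}

/* Toy operators (A = I, M = I) so the sketch compiles and runs. */
static void ident(const double *in, double *out, int n)
{
    for (int i = 0; i < n; i++) out[i] = in[i];
}

int main(void)
{
    double b[4] = { 1, 2, 3, 4 }, x[4] = { 0, 0, 0, 0 };
    int its = pcg(ident, ident, b, x, 4, 100, 1e-12);
    printf("converged in %d iteration(s), x[0] = %f\n", its, x[0]);
    return 0;
}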
21
A Variety of Discipline Codes - Single Processor
Performance, Origin vs. T3E
22
Alliance PACS Origin2000 Repository
Kadin Tseng, BU; Gary Jensen, NCSA; Chuck Swanson, SGI
John Connolly, U Kentucky, is Developing a
Repository for the HP Exemplar
http://scv.bu.edu/SCV/Origin2000/
23
Simulation of the Evolution of the Universe on a
Massively Parallel Supercomputer
4 Billion Light Years
12 Billion Light Years
Virgo Project - Evolving a Billion Pieces of Cold
Dark Matter in a Hubble Volume - 688-processor
CRAY T3E at Garching Computing Centre of the
Max-Planck-Society
http://www.mpg.de/universe.htm
24
Limitations of Uniform Grids for Complex
Scientific and Engineering Problems
Gravitation Causes Continuous Increase in Density
Until There is a Large Mass in a Single Grid Zone
512x512x512 Run on 512-node CM-5
Source: Greg Bryan, Mike Norman, NCSA
25
Use of Shared Memory Adaptive Grids To Achieve
Dynamic Load Balancing
64x64x64 Run with Seven Levels of Adaption on SGI
Power Challenge, Locally Equivalent to
8192x8192x8192 Resolution
Source: Greg Bryan, Mike Norman, John Shalf, NCSA
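A minimal sketch, in C with OpenMP, of the shared-memory load-balancing idea described above: refined patches of very different cost sit in one shared array, and a dynamic schedule lets whichever thread is free pick up the next patch. Patch, advance_patch, and the sizes are illustrative assumptions, not the Bryan/Norman AMR code.

/* Dynamic load balancing over adaptively refined patches in shared
   memory (illustrative). Refinement makes patch costs very uneven, so
   a dynamic schedule hands the next patch to whichever thread is idle. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int level;      /* refinement level of this patch          */
    int ncells;     /* cells in the patch (proxy for its cost) */
    double *data;   /* field values on the patch               */
} Patch;

/* Stand-in for the real per-patch update (hydro, gravity, ...). */
static void advance_patch(Patch *p, double dt)
{
    for (int c = 0; c < p->ncells; c++)
        p->data[c] += dt;  /* placeholder update */
}

/* Advance every patch on one level for one time step. */
static void advance_level(Patch *patches, int npatches, double dt)
{
    #pragma omp parallel for schedule(dynamic, 1)
    for (int i = 0; i < npatches; i++)
        advance_patch(&patches[i], dt);
}

int main(void)
{
    enum { NP = 8 };
    Patch patches[NP];

    /* Patches of very different sizes, as adaptive refinement produces. */
    for (int i = 0; i < NP; i++) {
        patches[i].level  = i % 3;
        patches[i].ncells = 1000 << (2 * (i % 3));
        patches[i].data   = calloc((size_t)patches[i].ncells, sizeof(double));
    }
    advance_level(patches, NP, 0.01);
    printf("advanced %d patches using up to %d threads\n",
           NP, omp_get_max_threads());
    for (int i = 0; i < NP; i++) free(patches[i].data);
    return 0;
}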
26
NCSA Visualization -- VRML Viewers
http://infinite-entropy.ncsa.uiuc.edu/Projects/AmrWireframe/
John Shalf on Greg Bryan's Cosmology AMR Data
27
NT Workstation Shipments Rapidly Surpassing UNIX
Source: IDC; Wall Street Journal, 3/6/98
28
Current Alliance LES NT Cluster Testbed -
Compaq Computer and Hewlett-Packard
  • Schedule of NT Supercluster Goals
  • 1998 Deploy First Production Clusters
  • Scientific and Engineering Tuned Cluster
  • Andrew Chien, Alliance Parallel Computing Team
  • Rob Pennington, NCSA CC
  • Currently 256 Processors of HP and Compaq Pentium
    II SMPs
  • Data Intensive Tuned Cluster
  • 1999 Enlarge to 512-Processors in Cluster
  • 2000 Move to Merced
  • 2002-2005 Achieve Teraflop Performance
  • UNIX/RISC and NT/Intel will Co-exist for 5 Years
  • 1998-2000 Move Applications to NT/Intel
  • 2000-2005 Convergence toward NT/Merced

29
First Scaling Testing of ZEUS-MP on CRAY T3E and
Origin vs. NT Supercluster
"Supercomputer performance at mail-order
prices" -- Jim Gray, Microsoft
access.ncsa.uiuc.edu/CoverStories/SuperCluster/super.html
  • Alliance Cosmology Team
  • Andrew Chien, UIUC
  • Rob Pennington, NCSA

Zeus-MP Hydro Code Running Under MPI
30
NCSA NT Supercluster Solving Navier-Stokes
Kernel
Single Processor Performance: MIPS R10k
117 MFLOPS; Intel Pentium II 80 MFLOPS
Preconditioned Conjugate Gradient Method With
Multi-level Additive Schwarz Richardson
Pre-conditioner (2D 1024x1024)
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA
31
Near Perfect Scaling of Cactus - 3D Dynamic
Solver for the Einstein GR Equations
Cactus was Developed by Paul Walker,
MPI-Potsdam / UIUC / NCSA
Ratio of GFLOPs: Origin 2.5x NT SC
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA
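(Here "near perfect scaling" means the parallel efficiency E(p) = T(1) / (p x T(p)) stays close to 1 as the processor count p grows, i.e. the measured GFLOPS rise almost linearly with p on both the Origin and the NT Supercluster.)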
32
NCSA Symbio - A Distributed Object Framework
Bringing Scalable Computing to NT Desktops
  • Parallel Computing on NT Clusters
  • Briand Sanderson, NCSA
  • Microsoft Co-Funds Development
  • Features
  • Based on Microsoft DCOM
  • Batch or Interactive Modes
  • Application Development Wizards
  • Current Status and Future Plans
  • Symbio Developer Preview 2 Released
  • Princeton University Testbed

http://access.ncsa.uiuc.edu/Features/Symbio/Symbio.html
33
The Road to Merced
http://developer.intel.com/solutions/archive/issue5/focus.htm#FOUR
34
FY98: Assembling the Links in the Grid with NSF's
vBNS Connections Program
NCSA Distributed Applications Support Team for
vBNS
StarTAP
NCSA
1999 Expansion via Abilene; vBNS and Abilene at
2.4 Gbit/s
vBNS Backbone Node
vBNS Connected Alliance Site
vBNS Alliance Site Scheduled for Connection
Source: Charlie Catlett, Randy Butler, NCSA
35
Globus Ubiquitous Supercomputing Testbed (GUSTO)
  • Alliance Middleware for the Grid - Distributed
    Computing Team
  • GII Next Generation Winner
  • SF Express -- NPACI / Alliance DoD Mod.
    Demonstration
  • Largest Distributed Interactive Simulation Ever
    Performed
  • The Grid: Blueprint for a New Computing
    Infrastructure
  • Edited by Ian Foster and Carl Kesselman, July
    1998
  • IEEE Symposium on High Performance Distributed
    Computing
  • July 29-31, 1998, Chicago, Illinois
  • NASA IPG Most Recent Funding Addition

36
Alliance National Technology Grid Workshop and
Training Facilities
Powered by Silicon Graphics, Linked by the NSF vBNS
Jason Leigh and Tom DeFanti, EVL; Rick Stevens, ANL
37
Using NCSA Virtual Director to Explore the Structure
of Density Isosurfaces of a 256³ MHD Star Formation
Simulation by Dinshaw Balsara, NCSA, Alliance
Cosmology Team; Visualization by Bob Patterson, NCSA
38
Linking the CAVE to Vis5D (CAVE5D), Then Using Virtual
Director to Analyze Simulations
Donna Cox, Robert Patterson, Stuart Levy,
NCSA Virtual Director Team
39
Environmental Hydrology Collaboration From CAVE
to Desktop
NASA IPG is Adding Funding To Collaborative Java3D
40
Caterpillar's Collaborative Virtual Prototyping
Environment
Real Time Linked VR and Audio-Video Between NCSA
and Germany Using SGI Indy/Onyx and HP
Workstations
Data courtesy of Valerie Lehner, NCSA