Hercules: A System for Massively Parallel End-to-End Finite Element Ground Motion Simulations

1
Hercules: A System for Massively Parallel
End-to-End Finite Element Ground Motion
Simulations
  • David R. O'Hallaron
  • Associate Professor of CS and ECE
  • Carnegie Mellon Quake Group
  • www.cs.cmu.edu/~quake
  • Joint work with (alphabetical order): Jacobo
    Bielak (CMU CEE), Leonardo Ramirez-Guzman (CMU
    CEE), Julio Lopez (CMU ECE), Kwan-Liu Ma
    (UC Davis Visualization), Ricardo Taborda-Rios,
    Tiankai Tu (CMU CS), Hongfeng Yu (UC Davis
    Visualization)
  • Funded by SCEC and NSF

2
Hercules
  • An end-to-end approach to parallel simulation
  • Why end-to-end?
  • Eliminates intermediate files
  • Eliminates heroic efforts to run simulations
  • Fast turnaround on the simulation/analysis cycle
  • The latest result of a long-term (since 1993)
    collaboration between Carnegie Mellon computer
    scientists and domain scientists
Tiankai Tu, Hongfeng Yu, Leonardo Ramirez-Guzman,
Jacobo Bielak, Omar Ghattas, Kwan-Liu Ma, and
David R. O'Hallaron, "From Mesh Generation to
Scientific Visualization: An End-to-End Approach
to Parallel Supercomputing," Proceedings of SC06,
Tampa, FL, November 2006.
3
Finite Element Ground Motion Modeling
[Diagram: the conventional simulation pipeline. A velocity model
(10s of MB) feeds mesh generation on a powerful workstation,
producing an unstructured mesh (10-100s of GB); solving on a
parallel supercomputer produces a 4D vel/disp wavefield (1-10 TB);
analysis/viz on a powerful workstation reduces that to 10s of MB
of results.]
4
Key Hercules Idea 1
[Diagram: the same pipeline, with octrees used for the velocity
model, the unstructured mesh, and the output model.]
Key Idea 1: Use octrees for the input datasets.
5
Why Octrees?
  • Octrees can represent both velocity fields and
    unstructured FEM meshes
  • Octrees have an efficient etree database
    representation (Tu et al.)
  • Draws on decades of spatial database research
  • Fast approximate query times allow for fast mesh
    generation
  • Octree meshes are a nice tradeoff between
    structured meshes and unstructured tetrahedral
    meshes
  • + Wavelength adaptive
  • + Fast parallel mesh generation (e.g., 8 -> 10
    pts)
  • + No need to store mesh topology (see the sketch
    below)
  • - Unable to resolve complex geometries
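
Why no stored topology? A linear octree identifies each octant by
a locational (Morton) code plus its level: interleaving the
octant's coordinate bits recovers its position and ancestry with
no pointers or connectivity tables. A minimal C sketch of the idea
(illustrative only, not the etree library's actual encoding):

#include <stdint.h>
#include <stdio.h>

/* Interleave one bit of x, y, z per level into a Morton code.
 * The pair (code, level) fully identifies an octant, which is why
 * an octree mesh needs no explicit topology storage. */
static uint64_t morton3d(uint32_t x, uint32_t y, uint32_t z, int level)
{
    uint64_t code = 0;
    for (int i = 0; i < level; i++) {
        code |= (uint64_t)((x >> i) & 1) << (3 * i);
        code |= (uint64_t)((y >> i) & 1) << (3 * i + 1);
        code |= (uint64_t)((z >> i) & 1) << (3 * i + 2);
    }
    return code;
}

int main(void)
{
    /* Octant at grid coordinates (5, 3, 6) on level 4 of the tree. */
    printf("locational code: 0x%llx\n",
           (unsigned long long)morton3d(5, 3, 6, 4));
    return 0;
}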

6
Key Hercules Idea 2
[Diagram: the pipeline with the velocity model, unstructured
mesh, and output model each stored and accessed through the
etree library.]
  • Key Idea 2: Each octree is an etree database
  • Portable across different machine types
  • Fast sampling of the velocity model (see the
    sketch below)
  • Code and documentation available at
    http://www.cs.cmu.edu/~euclid
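Mesh generation exercises the velocity model almost entirely
through point queries. The sketch below shows that access pattern;
vm_open, vm_query, and material_t are hypothetical stand-ins for
illustration, not the actual etree API:

#include <stdio.h>

/* Material record returned per query: P velocity, S velocity, density. */
typedef struct { float vp, vs, rho; } material_t;

/* Hypothetical handle standing in for an etree-backed database. */
typedef struct { const char *path; } vm_t;

static vm_t the_db;
static vm_t *vm_open(const char *path) { the_db.path = path; return &the_db; }
static void  vm_close(vm_t *db)        { (void)db; }

/* Stub query: a real implementation would descend the octree to
 * the leaf octant containing (x, y, z) and return its record. */
static int vm_query(vm_t *db, double x, double y, double z,
                    material_t *out)
{
    (void)db; (void)x; (void)y; (void)z;
    out->vp = 1700.0f; out->vs = 500.0f; out->rho = 2100.0f;
    return 0;
}

int main(void)
{
    material_t m;
    vm_t *db = vm_open("la_basin.e");  /* octree velocity database */

    /* Mesh generation issues millions of queries like this one. */
    if (vm_query(db, 5000.0, 5000.0, 1000.0, &m) == 0)
        printf("Vs at query point: %.1f m/s\n", m.vs);

    vm_close(db);
    return 0;
}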

7
Key Hercules Idea 3
[Diagram: meshgen/partition, solving, and analysis/viz all run
inside one parallel supercomputer system on a partitioned
in-situ distributed mesh data structure, built on the etree and
herc libraries; the velocity model is the input, the output
model is the result, and the 4D velocity wavefield stays on the
machine.]
Key Idea 3: Run the entire finite element simulation (mesh
generation, solving, and visualization) in place and in
parallel (end-to-end simulation).
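
In control-flow terms, the three stages collapse into one program
operating on a single in-memory mesh. A stub sketch of that
structure, with illustrative function names rather than Hercules'
actual interfaces:

#include <stdio.h>

typedef struct { int nelems; } mesh_t;

/* Stubs for the three pipeline stages, fused into one executable. */
static mesh_t meshgen(void)          { mesh_t m = { 1000 }; return m; }
static void   solver_step(mesh_t *m) { (void)m; /* advance one dt */ }
static void   viz_render(const mesh_t *m, int step)
{
    (void)m;
    printf("rendered frame at step %d from live solver state\n", step);
}

int main(void)
{
    mesh_t mesh = meshgen();            /* built in place, in parallel */
    for (int step = 0; step < 2500; step++) {
        solver_step(&mesh);
        if (step % 500 == 0)
            viz_render(&mesh, step);    /* in situ: no wavefield dump */
    }
    return 0;
}

Because visualization reads the solver's live state, the 1-10 TB
wavefield of the conventional pipeline never has to be written to
disk.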
8
Performance (Tu et al., SC06)
Scenario: LA Basin (100 km x 100 km x 37.5 km), SCEC CVM
Version 2.0, minimum shear wave velocity 100 m/s, run on
Lemieux at PSC.

  Processors           1        16       52       184      748      2000
  Frequency (Hz)       0.23     0.5      0.75     1        1.5      2
  Elements             6.61E5   9.92E6   3.13E7   1.14E8   4.62E8   1.22E9
  Nodes                8.11E5   1.13E7   3.57E7   1.34E8   5.34E8   1.37E9
  Anchored nodes       6.48E5   9.87E6   3.12E7   1.14E8   4.61E8   1.22E9
  Hanging nodes        1.63E5   1.44E6   4.57E6   2.03E7   7.32E7   1.48E8
  Max leaf level       11       13       13       14       14       15
  Min leaf level       6        7        8        8        9        9
  Elements/PE          6.61E5   6.20E5   6.02E5   6.20E5   6.18E5   6.12E5
  Time steps           2000     4000     10000    8000     2500     2500
  E2E time (s)         12911    19804    38165    48668    13033    16709
  Replication (s)      22       71       85       94       187      251
  Meshing (s)          20       75       128      150      303      333
  Solver (s)           8381     16060    31781    42892    11960    16097
  Vis (s)              4488     3596     6169     5528     558      -
  E2E t/ts/e/PE (µs)   9.77     7.98     7.93     7.86     8.44     10.92
  Sol t/ts/e/PE (µs)   6.34     6.48     6.60     6.92     7.74     10.52
  Mflops/sec/PE        569      638      653      655      -        -

9
Future CS Directions 1. Standard Velocity Model
Representations
  • Proposal: Adopt the etree database as the
    standard queryable representation for CVMs and
    other fields
  • A queryable dataset is required for any
    unstructured FE code
  • Models currently based on etree databases:
  • USGS Bay Area Velocity Model
  • Southern California Velocity Model
  • Other candidates: Shaw model, Parkfield model
  • Proposal: Develop a tool for automatic generation
    of sampled etree fields
  • Quantize and aggregate given some global error
    constraint
  • Standalone C program
  • Arbitrary field function dynamically linked from
    a shared library (dll); see the sketch below
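
A minimal sketch of the dynamic-linking piece of that proposal
using POSIX dlopen; the plugin file name field.so and the symbol
field_value are assumptions for illustration (build with
cc sample.c -ldl):

#include <dlfcn.h>
#include <stdio.h>

/* Assumed plugin signature: field value at a point (x, y, z). */
typedef double (*field_fn)(double x, double y, double z);

int main(void)
{
    void *h = dlopen("./field.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    field_fn field = (field_fn)dlsym(h, "field_value");
    if (!field) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* The real tool would sweep octants, sampling the field and
     * quantizing/aggregating under a global error constraint;
     * here we just sample one point. */
    printf("field(0, 0, -100) = %g\n", field(0.0, 0.0, -100.0));

    dlclose(h);
    return 0;
}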

10
Future CS Directions 2. Wavefield Database
Systems
  • At present, we examine only a small fraction of
    the volume data produced by our models
  • Proposal: Develop compressed and queryable
    database representations of synthetic 4D
    wavefields
  • WaveDB: Wavefield Database System (Julio Lopez,
    CMU)
  • Lossless compression through:
  • Frequency-domain filtering
  • Delta encoding of waveforms
  • Micro-solver queries based on Green's functions
  • Fast interpolated access methods using the mesh
    as an index
  • Example (sketched below): "Give me the surface
    values on a regular grid at 100-meter spacing
    for all time steps"
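
A sketch of how that example query might be expressed as a data
structure; surface_query_t and its fields are hypothetical, not
an existing WaveDB interface:

#include <stdio.h>

/* Hypothetical declarative query: surface values on a regular
 * grid over a range of time steps. */
typedef struct {
    double x0, y0, x1, y1;  /* surface region of interest (m)  */
    double spacing;         /* regular grid spacing (m)        */
    int    t_first, t_last; /* inclusive time-step range       */
} surface_query_t;

int main(void)
{
    /* "Give me the surface values on a regular grid at 100-meter
     * spacing for all time steps" over a 100 km x 100 km domain: */
    surface_query_t q = {
        .x0 = 0.0, .y0 = 0.0, .x1 = 100e3, .y1 = 100e3,
        .spacing = 100.0, .t_first = 0, .t_last = 2499,
    };
    printf("query: %.0f m grid, steps %d..%d\n",
           q.spacing, q.t_first, q.t_last);
    return 0;
}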