VORTONICS: Vortex Dynamics on Transatlantic Federated Grids
1
VORTONICS: Vortex Dynamics on Transatlantic Federated Grids
US-UK TG-NGS Joint Projects Supported by NSF,
EPSRC, and TeraGrid
2
Vortex Cores
  • Evident coherent structures in Navier-Stokes flow
  • Intuitively useful: tornado or smoke ring
  • Theoretically useful: helicity and linking number
  • No single agreed-upon mathematical definition
  • Difficulties with visualization
  • Vortex interactions poorly understood

3
Scientific Computational Challenges
  • Physical challenges: reconnection and dynamos
  • Vortical reconnection governs establishment of
    steady-state in Navier-Stokes turbulence
  • Magnetic reconnection governs heating of solar
    corona
  • The astrophysical dynamo problem
  • Exact mechanisms and space/time scales are unknown and represent important theoretical challenges
  • Mathematical challenges
  • Identification of vortex cores, and discovery of
    new topological invariants associated with them
  • Discovery of new and improved analytic solutions
    of Navier-Stokes equations for interacting
    vortices
  • Computational challenges: enormous problem sizes, memory requirements, and long run times
  • Algorithmic complexity scales as the cube of the Reynolds number (Re)
  • Substantial postprocessing for vortex core
    identification
  • Largest present runs and most future runs will
    require geographically distributed domain
    decomposition (GD3)
  • Is GD3 on Grids a sensible approach?

Homogeneous turbulence driven by force of
Arnold-Beltrami-Childress (ABC) form
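For reference, the standard Arnold-Beltrami-Childress forcing has the form below; the amplitudes A, B, C used in these runs are not stated in this transcript.

\[
\mathbf{f}(x,y,z) \;\propto\; \bigl(A\sin z + C\cos y,\;\; B\sin x + A\cos z,\;\; C\sin y + B\cos x\bigr)
\]

This field is an eigenfunction of the curl, so the forcing injects helicity along with energy.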
4
Simulations to Study Reconnection
  • Aref and Zawadzki (1992) presented numerical evidence that two nearby elliptical vortex rings will partially link
  • Benchmark problem in vortex dynamics
  • Used vortex-in-cell method for 3D Euler flow
  • Some numerical diffusion associated with VIC
    method, but very small

5
Example: Hopf Link
  • Two linked circular vortex tubes as initial
    condition
  • Lattice Boltzmann algorithm for Navier-Stokes with very low viscosity (0.002 in lattice units)
  • ELI variational result in dark blue and red
  • Vorticity thresholding in light blue
  • The dark blue and red curves do not unlink in the
    time scale of this simulation!

6
Example: Aref and Zawadzki's Ellipses, Front View
  • Parameters obtained by correspondence with Aref and Zawadzki
  • Lattice Boltzmann simulation with very low
    viscosity
  • They do not link in the time scale of this
    simulation!

7
Same Ellipses, Side View
  • Note that not all minima are shown in the late
    stages of this evolution - only the time
    continuation of the original pair of ellipses
  • Again, they do not link in the time scale of this simulation!

8
Lattice Remapping, Fourier Resizing, and Computational Steering
  • At its lowest level, VORTONICS contains a general
    remapping library for dynamically changing the
    layout of the computational lattice across the
    processors (pencils, blocks, slabs) using MPI
  • All data on the computational lattice can be Fourier resized (FFT, augmentation or truncation in k-space, inverse FFT) as it is remapped (see the sketch after this list)
  • All data layout features are dynamically
    steerable
  • VTK used for visualization (each rank computes
    polygons locally)
  • Grid-enabled with MPICH-G2 so that simulation,
    visualization, and steering can be run anywhere,
    or even across sites
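
The Fourier-resizing step mentioned in the list above can be summarized by a minimal serial sketch: forward FFT, augmentation or truncation of the centered spectrum, inverse FFT. The real library is a parallel MPI remapping layer; the function below only illustrates the spectral-resize idea.

```python
import numpy as np

def fourier_resize(field, new_shape):
    """Resize a periodic lattice field by zero-padding or truncating its
    spectrum (spectral interpolation). Serial and unoptimized."""
    old_shape = field.shape
    spec = np.fft.fftshift(np.fft.fftn(field))        # put k = 0 at the centre
    out = np.zeros(new_shape, dtype=complex)

    # Copy the overlapping, centred block of Fourier modes.
    src, dst = [], []
    for n_old, n_new in zip(old_shape, new_shape):
        n = min(n_old, n_new)
        src.append(slice((n_old - n) // 2, (n_old - n) // 2 + n))
        dst.append(slice((n_new - n) // 2, (n_new - n) // 2 + n))
    out[tuple(dst)] = spec[tuple(src)]

    out = np.fft.ifftn(np.fft.ifftshift(out))
    out *= np.prod(new_shape) / np.prod(old_shape)     # preserve field values
    return out.real

# Example: spectrally upsample a 32^3 lattice to 48^3.
large = fourier_resize(np.random.rand(32, 32, 32), (48, 48, 48))
```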

9
Vortex Generator Component
  • Given a parametrization of a knot or link
  • Future: draw a vortex knot
  • Superpose contributions from each
  • Each site on the 3D grid performs a line integral (see the sketch after this list)
  • Divergenceless, parameter-independent
  • Periodic boundary conditions require an Ewald-like sum over image knots
  • Poisson solve (FFT) to get velocity field
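
A minimal serial sketch of the generator idea referenced in the list above: each grid site accumulates a line integral of a smeared tangent vector along the parametrized loop, giving a divergence-free vorticity field, and an FFT Poisson solve then recovers the velocity. The Gaussian core width and the omission of the Ewald-like image sum are simplifications made here, not the project's actual choices.

```python
import numpy as np

N, L, Gamma, eps = 64, 2 * np.pi, 1.0, 0.3       # lattice size, box size, circulation, core width
x = np.linspace(0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# Parametrized knot/link: here a single circular ring in the z = L/2 plane.
s = np.linspace(0, 2 * np.pi, 200, endpoint=False)
curve = np.stack([L/2 + np.cos(s), L/2 + np.sin(s), np.full_like(s, L/2)], axis=1)
dr = np.roll(curve, -1, axis=0) - curve           # tangent segments along the loop

# Each site accumulates Gamma * sum_j delta_eps(x - r_j) dr_j  ->  vorticity field.
omega = np.zeros((3, N, N, N))
for r, t in zip(curve, dr):
    w = Gamma * np.exp(-((X - r[0])**2 + (Y - r[1])**2 + (Z - r[2])**2) / (2 * eps**2)) \
        / ((2 * np.pi)**1.5 * eps**3)
    for c in range(3):
        omega[c] += w * t[c]

# Poisson solve in k-space: psi_hat = omega_hat / k^2, then u = curl(psi).
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                                 # avoid division by zero at k = 0
psi = np.fft.fftn(omega, axes=(1, 2, 3)) / K2
u_hat = np.stack([1j * (KY * psi[2] - KZ * psi[1]),
                  1j * (KZ * psi[0] - KX * psi[2]),
                  1j * (KX * psi[1] - KY * psi[0])])
u = np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))  # velocity field on the lattice
```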

10
Components for Fluid Dynamics
  • Navier-Stokes codes
  • Multiple-relaxation-time lattice Boltzmann
  • Entropic lattice Boltzmann
  • Pseudospectral Navier-Stokes solver
  • All codes parallelized with MPI (MPICH-G2)
  • Domain decomposition
  • Halo swapping (see the sketch below)
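
The halo swapping mentioned above follows the usual ghost-layer pattern. A minimal mpi4py sketch for a 1D slab decomposition with periodic neighbours is shown below; the actual codes use MPICH-G2 and more general block/pencil/slab layouts.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_local, ny, nz, halo = 64, 128, 128, 1
f = np.zeros((nx_local + 2 * halo, ny, nz))      # local slab plus halo layers in x
f[halo:-halo] = rank                             # dummy interior data

up, down = (rank + 1) % size, (rank - 1) % size  # periodic neighbours in x

# Exchange halos: send my top interior plane up while receiving my bottom halo
# from below, then the reverse. Sendrecv avoids send/recv ordering deadlocks.
comm.Sendrecv(sendbuf=f[-2 * halo:-halo], dest=up,   recvbuf=f[:halo],  source=down)
comm.Sendrecv(sendbuf=f[halo:2 * halo],   dest=down, recvbuf=f[-halo:], source=up)
```

Run with, e.g., mpiexec -n 4 python halo_demo.py; after the exchange each rank's halo planes hold its neighbours' boundary data.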

11
Components for Visualization: Extremal Line Integral (ELI) Method
  • Intuition: the line integral of vorticity along a vortex core is large
  • Definition: a vortex core is the curve along which the line integral of vorticity is a local maximum in the space of all curves in the fluid domain, with appropriate boundary conditions (formalized below)
  • For smoke ring, periodic BCs
  • For tornado or trailing vortex on airplane wing,
    one end is attached to zero-velocity boundary,
    other at infinity
  • For hairpin vortex, two ends attached to
    boundary
  • Result is one-dimensional curve along vortex core
  • Two references available (Phil. Trans., Physica A)
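
In symbols (notation introduced here, matching the wording above), the ELI definition extremizes a circulation-like functional over curves C in the fluid domain:

\[
F[C] \;=\; \int_{C} \boldsymbol{\omega}(\mathbf{x}) \cdot \mathrm{d}\mathbf{x},
\qquad
C^{*} \ \text{is a vortex core if } F \text{ has a local maximum at } C^{*},
\]

subject to the boundary conditions listed above (a closed curve for a smoke ring; one or both endpoints pinned to a boundary for a tornado, trailing vortex, or hairpin vortex).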

12
ELI Algorithm
  • Ginzburg-Landau equation for which the line integral is a Lyapunov functional
  • Evolve the curve in fictitious time t (schematic form below)
  • The equilibrium of the GL equation is a vortex core
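
Schematically (the exact Ginzburg-Landau regularization terms are not given in this transcript), the relaxation is a gradient ascent in the fictitious time t for which the functional F[C] of the previous slide is a Lyapunov functional:

\[
\frac{\partial \mathbf{x}(s,t)}{\partial t} \;=\; \frac{\delta F}{\delta \mathbf{x}(s,t)}
\quad\Longrightarrow\quad
\frac{\mathrm{d}F}{\mathrm{d}t} \;=\; \int \Bigl\lVert \frac{\delta F}{\delta \mathbf{x}} \Bigr\rVert^{2}\,\mathrm{d}s \;\ge\; 0,
\]

so F increases monotonically and the curve stops moving exactly at an equilibrium, i.e. at a vortex core.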

13
Computational Steering
  • All components use computational steering
  • Almost all parameters are steerable, including (see the sketch after this list):
  • time step
  • frequency of checkpoints
  • outputs, logs, graphics
  • stop and restart
  • read from checkpoint
  • even spatial lattice dimensions (dynamic lattice
    resizing)
  • halo thickness
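
A minimal sketch of how steerable parameters such as those listed above might be polled between time steps; the file name, parameter names, and polling mechanism here are hypothetical illustrations, not the project's actual steering interface.

```python
import json, os

STEER_FILE = "steer.json"        # hypothetical file written by the user or a steering GUI
params = {"dt": 1e-3, "checkpoint_every": 100, "halo": 1, "max_steps": 1000, "stop": False}

def poll_steering(params):
    """Merge any user-edited values into the live parameter set between steps."""
    if os.path.exists(STEER_FILE):
        with open(STEER_FILE) as fh:
            params.update(json.load(fh))
    return params

step = 0
while not params["stop"] and step < params["max_steps"]:
    # advance_one_step(params["dt"], params["halo"])    # solver call, omitted here
    step += 1
    if step % params["checkpoint_every"] == 0:
        pass                                            # write_checkpoint(step), omitted
    params = poll_steering(params)    # time step, halo, even lattice size may change mid-run
```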

14
Scenarios for Using TFD Toolkit
  • Run with periodic checkpointing until a topological change is noticed
  • Rewind to the last checkpoint before the topological change; refine the spatial and temporal discretization and the viscosity (see the sketch after this list)
  • Postprocessing of vorticity field and search for
    vortex cores can be migrated
  • All components portable and may run locally or on
    geographically separated hardware
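
The first two bullets above describe a rewind-and-refine loop; a hedged sketch of that control flow follows, with every function name a hypothetical stand-in for the corresponding VORTONICS component.

```python
def advance(state):             return state   # stand-in for one solver time step
def topology_changed(state):    return False   # stand-in for ELI/linking-number postprocessing
def save_checkpoint(state, n):  pass           # stand-in for checkpoint I/O

def run_until_topology_change(state, steps, checkpoint_every=100):
    """Advance the flow with periodic checkpoints until the postprocessing
    notices a topological change, then report the checkpoint to rewind to."""
    last_ckpt = 0
    for step in range(1, steps + 1):
        state = advance(state)
        if step % checkpoint_every == 0:
            save_checkpoint(state, step)
            last_ckpt = step
        if topology_changed(state):
            return last_ckpt    # rewind here, refine dx/dt/viscosity, rerun
    return None
```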

15
Cross-Site Runs: Before, During, and After SC05
  • Federated Grids
  • US TeraGrid
  • NCSA
  • San Diego Supercomputer Center
  • Argonne National Laboratory
  • Texas Advanced Computing Center
  • Pittsburgh Supercomputing Center
  • UK National Grid Service
  • CSAR
  • Task distribution
  • GD3 - is it sensible for large computational
    lattices?

16
Run Sizes to Date / Performance
  • Multiple Relaxation Time Lattice Boltzmann
    (MRTLB) model
  • 600,000 site updates per second (SUPS) per processor when run on one multiprocessor
  • Performance scales linearly with np when run on
    one multiprocessor
  • 3D lattice sizes up to 645³ run prior to SC05 across six sites
  • NCSA, SDSC, ANL, TACC, PSC, CSAR
  • 528 CPUs to date, and larger runs in progress as
    we speak!
  • Amount of data injected into the network (see table below); cross-site runs are strongly bandwidth limited
  • Effective SUPS/processor is reduced by a factor approximately equal to the number of sites
  • Therefore the total SUPS stays approximately constant as the problem grows in size

Data injected into the network (nx = lattice edge size, np = number of processors):

  nx     np     GB
  512    512     1.5
  645    512     2.4
  1024   1024    7.6
  1536   1024   17.1
  2048   2048   38.4

Effective per-processor rate versus number of sites:

  sites   kSUPS/processor
  1       600
  2       300
  4       149
  6        75
17
Discussion / Performance Metric
  • We are aiming for lattice sizes that cannot reside at any one supercomputing center, but
  • Bell, Gray, and Szalay, "Petascale Computational Systems: Balanced Cyberinfrastructure in a Data-Centric World" (September 2005)
  • If data can be regenerated locally, don't send it over the grid (10^5 ops/byte)
  • Higher disk-to-processing ratios, i.e., large disk farms
  • Thought experiment
  • Simulate an enormous lattice locally at one SC center by swapping n sublattices to a disk farm
  • If we cannot exceed this performance, it is not worth using the Grid for GD3
  • Make the very optimistic assumption that disk access time is not limiting
  • Clearly the total SUPS is constant, since it is one single multiprocessor
  • Therefore SUPS/processor degrades by 1/n
  • We can do that now: that is precisely the scaling we see across sites (table above). GD3 is a win!
  • And things are only going to improve
  • Improvements in store
  • UDP with added reliability (UDT) in MPICH-G2 will
    improve bandwidth
  • Multithreading in MPICH-G2 will overlap
    communication and computation to hide latency and
    bulk data transfers
  • Disk swapping moves subdomain volumes, while interprocessor communication moves only their surfaces; keep the data in the processors!

18
Conclusions
  • GD3 is already a win on today's TeraGrid/NGS, with today's middleware
  • With improvements to MPICH-G2, TeraGrid
    infrastructure, and middleware, GD3 will become
    still more desirable
  • The TeraGrid will enable scientific computation
    with larger lattice sizes than have ever been
    possible
  • It is worthwhile to consider algorithms that push the envelope in this regard, including relaxation of PDEs in 3+1 dimensions