Transcript and Presenter's Notes

Title: Center for Simulation of Wave Interactions with MHD (SWIM)


1
Center for Simulation of Wave Interactions with MHD (SWIM)
PSACI PAC Meeting, June 5-6, 2008
D. B. Batchelor, L. A. Berry, E. F. Jaeger, D. A. Spong (ORNL Fusion Energy)
D. E. Bernholdt, E. D'Azevedo, W. Elwasif, V. Lynch (NCCS) (ORNL Computer Science)
S. C. Jardin, E. Feibush, D. McCune, J. Chen, L. P. Ku, M. Chance, J. Breslau (PPPL)
G. Abla, M. Choi, D. P. Schissel (General Atomics)
R. W. Harvey (CompX)
R. Bramley, S. Foley (Indiana University)
D. Keyes (Columbia University)
D. Schnack, T. Jenkins (U. Wisconsin)
P. T. Bonoli, J. Ramos, J. Wright (MIT)
S. Kruger (Tech-X)
G. Bateman (Lehigh University)
Unfunded participants: L. Sugiyama (MIT); J. D. Callen, C. C. Hegna, C. Sovinec (University of Wisconsin); E. Held (Utah State); H. St. John (General Atomics); A. Kritz (Lehigh Univ.)
  • Old business: frameworks
  • Progress on scientific goals
  • Role of collaborations
  • Role of leadership-class computing resources
  • Final 5-year project goals

See our fun website at www.cswim.org
2
Old business: computational frameworks; it seems both reasonable and valuable to explore various paths
  • In 2004-2005, with ORNL LDRD funding, we developed a model fusion simulation using CCA, combining components in Fortran, C, and Python
  • In June 2005 we held an international Workshop on Computational Frameworks in Fusion
  • Attendees included fusion physicists, computer
    scientists and mathematicians from US labs and
    universities, fusion SciDAC and CSET projects,
    and from Europe
  • Presentations and discussions on frameworks included CCA, Kepler, PaCo and Padico, Pyre, Cactus, CORBA, SCIRun, and MCT
  • In addition there were more general discussions
    of fusion simulation requirements, scientific
    interface definition languages, and interface
    issues for high performance numerical components
  • During the IPS design phase (2006 to early 2007) we analyzed the framework requirements for the (3-year) SWIM project, the extensions that would be required for the Slow MHD campaign, and those needed to permit tight coupling of IPS components for the 5-year completion
  • Some SWIM team members have considerable
    experience with computational frameworks,
    particularly those associated with CCA

3
Old business (2): computational frameworks; it seems both reasonable and valuable to explore various paths
We decided to develop a lightweight, extensible, SWIM-specific framework (in Python) rather than adopt any of the existing framework systems:
  • Minimize intrusion to component physics codes
    (and physicists)
  • Component codes under active development
  • Minimize re-training requirements for physicists; CCA is more complex than needed
  • Goal: zero modifications (initially), the biggest driver for the current IPS design!
  • Targeted MPP computers from the beginning, with their restrictive operating environments:
  • No support for dynamic linking, Java
  • Restricted external communication
  • Python is everywhere
  • SWIM scientific goals do not require many framework features, e.g. language interoperability (almost everything is Fortran 90 or link compatible)
  • A Python framework, under our control, provides the flexibility to gradually increase the tightness of component coupling as simulation sophistication increases

So far this approach has proven successful. We are extending the framework to do some innovative things in parallel component execution (MCMD); a minimal component-wrapper sketch follows below.
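To illustrate the "zero modifications" goal, here is a minimal sketch (not the actual IPS source; the class, method, and file names are illustrative assumptions) of how a lightweight Python wrapper can drive an unmodified physics executable through staged files:

    import shutil
    import subprocess

    class ToyComponent:
        """Illustrative wrapper: stage input, run an unmodified binary, return output."""

        def __init__(self, exe, workdir):
            self.exe = exe          # path to the unmodified physics executable (assumed)
            self.workdir = workdir  # per-component working directory

        def step(self, state_file):
            # Stage the shared state file into the component's working directory
            shutil.copy(state_file, f"{self.workdir}/plasma_state.nc")
            # Launch the executable exactly as a physicist would run it by hand
            subprocess.check_call([self.exe], cwd=self.workdir)
            # Hand the updated state back to the framework to merge and commit
            return f"{self.workdir}/plasma_state.nc"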
4
Where we are in the project
  • The framework and associated utilities have been designed and built, and work on the PPPL cluster and Jaguar (they should also work on Franklin at NERSC)
  • The web-based portal is operational
  • There has been a major redesign of the Plasma
    State with more general content and increased
    functionality
  • We have a complete set of physics components (i.e. at least one of each functionality) running in the framework
  • We have made improvements to component codes and
    introduced improved algorithms and libraries
  • We are doing physics runs
  • We have made significant progress in the theoretical formulation of the Slow MHD campaign and are carrying out initial studies with non-linear MHD (NIMROD) coupled with a reduced RF model

5
Directed deliverables (from last year's PSACI meeting): we expect to complete all by the end of the 3rd year
  • Public release of the Integrated Plasma Simulator: on track; studies by non-project participants (supported by project members)
  • Transition of the IPS to Jaguar: done
  • Slow MHD (effect of RF on the classical tearing mode studied with NIMROD): on track; initial numerical studies under way
  • Initial sawtooth simulation with the IPS: done

6
Summary of Recent Progress: IPS Environment
  • IPS in production in both the Cray XT (ORNL Jaguar) and SGI Altix (PPPL Viz/MHD) environments
  • Various internal improvements to support evolving physics requirements and user experience with the IPS
  • Web portal for job monitoring in production
  • Monitoring job progress, key results
  • Plasma State improvements
  • Additional data elements
  • Back-end netCDF storage restructured
  • Design and initial implementation work on MCMD
    execution model for IPS framework
  • Needed for more complex and efficient use of
    parallel computing resources
  • Detailed explanation follows
  • Development of additional physics components

7
SWIM Portal: Real-Time Job Monitoring
  • http://swim.gat.com:8000/monitor
  • Server hosted at GA
  • Portal usage is completely optional; IPS jobs will run without it
  • Real-time monitoring of job progress
  • IPS framework instrumented to automatically
    provide portal with status information based on
    execution flow
  • Component method invocations
  • Data management operations
  • Task failures
  • Messages are sent via a simple HTTP protocol (see the sketch below)
  • Components can provide additional information to
    portal

The run page provides a history of all messages received by the portal. The main page lists all recent IPS runs with the latest status information.
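As a hedged illustration of the "simple HTTP protocol" mentioned above (the endpoint path and field names are assumptions, not taken from the IPS source), a component-side status message might be posted like this:

    import urllib.parse
    import urllib.request

    def post_status(event, comment, portal_url="http://swim.gat.com:8000/monitor"):
        """Send one status event to the monitoring portal (field names are illustrative)."""
        payload = urllib.parse.urlencode({"eventtype": event, "comment": comment}).encode("ascii")
        try:
            # Fire-and-forget: monitoring is optional, so a portal failure must not stop the run
            urllib.request.urlopen(portal_url, data=payload, timeout=5)
        except OSError:
            pass

    post_status("component_call", "RF solve started")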
8
SWIM Portal: Real-Time Results Monitoring
  • IPS framework and portal are integrated with
    ElVis web-based visualization capability
  • The monitoring component (in the IPS) exports data of interest to a netCDF file
  • Separate from the Plasma State, and smaller
  • Placed in a web-accessible location at the execution site (most centers provide one)
  • Portal displays link based on info received from
    job
  • Link invokes ElVis graphical display client on
    exported data
  • ElVis provides configurable, constantly updating
    display of exported data
  • Allows physicists to monitor key diagnostics and progress (a minimal export sketch follows below)
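A minimal sketch of the kind of export the monitoring component performs, assuming the netCDF4 Python package; the file name, variable names, and units are illustrative, not the component's actual schema:

    import os
    from netCDF4 import Dataset

    MONITOR_FILE = "monitor.nc"  # in practice placed in a web-accessible directory

    def append_monitor_point(time_s, te0_kev):
        """Append one (time, Te0) sample to a small netCDF time series for ElVis to plot."""
        new = not os.path.exists(MONITOR_FILE)
        with Dataset(MONITOR_FILE, "w" if new else "a") as nc:
            if new:
                nc.createDimension("time", None)           # unlimited record dimension
                nc.createVariable("time", "f8", ("time",))
                nc.createVariable("Te0", "f8", ("time",))  # central electron temperature
            i = len(nc.dimensions["time"])
            nc.variables["time"][i] = time_s
            nc.variables["Te0"][i] = te0_kev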

9
What is MCMD?
  • The multiple-component multiple-data execution
    model is the component equivalent of
    multiple-program multiple-data (MPMD)
  • Most individual parallel programs are
    single-program multiple-data (SPMD) or SCMD if
    they are component-based
  • A term used in the Common Component Architecture
    community
  • An execution model that allows multiple parallel
    tasks (components) to execute concurrently
  • Simple pictorial example (see the sketch below)
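As a framework-agnostic sketch of the MCMD idea (not the IPS implementation; the "mpirun" launcher and executable names are assumptions), two parallel tasks with different processor counts are launched concurrently and then awaited:

    import subprocess

    def launch(nprocs, exe):
        """Start one parallel (SPMD/SCMD) task without waiting for it to finish."""
        return subprocess.Popen(["mpirun", "-np", str(nprocs), exe])

    # Two components with very different parallelism running side by side (MCMD)
    rf_task = launch(1024, "./rf_solve")      # highly parallel task (illustrative)
    fp_task = launch(32, "./fokker_planck")   # modestly parallel task (illustrative)

    # Wait for both before starting the next coupled step
    for task in (rf_task, fp_task):
        task.wait()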

10
Why is MCMD Important?
  • In general
  • Larger parallel computer systems require software
    to expose more and more parallelism. MCMD
    provides a way to do that with multiple parallel
    tasks instead of scaling up a single parallel
    task (often scientifically not reasonable)
  • Timely topic also outside of fusion simulation
  • In integrated fusion modeling
  • Mathematical operations in the physics codes vary
    widely in the degree of parallelism they support.
    Some highly scalable, others sequential or
    nearly so
  • Coupling of such codes leads to inefficiencies
  • Must allocate processors for most parallel
    component
  • Processors idle while less parallel components
    run
  • There are cases where tasks have no data dependencies, e.g. multiple independent sources (heat, particles) and multiple analyses
  • Tighter coupling will require tasks to run
    concurrently for efficiency
  • MCMD provides a general infrastructure to help
    address all of these challenges

11
MCMD Progress to Date
  • Designed framework services required to support
    MCMD execution while maximizing backward
    compatibility with original implementation
  • Task Manager (changed)
  • Resource Manager (new)
  • Data Manager (unchanged)
  • Configuration Manager (slightly changed)
  • Event Service (new; see the sketch after this list)
  • Refactored and extended current IPS
    implementation to provide the new services
  • Carried out with almost no impact on existing
    components or scientific runs
  • Implementation of actual concurrency capabilities
    now underway
  • Service interfaces will remain stable, but
    underlying implementations will require
    significant modification
  • Initial usage scenarios do not require dealing
    with concurrent access to Plasma State
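As a hedged sketch of what the new Event Service provides (class and method names are assumptions, not the IPS API), a minimal publish/subscribe service looks like this:

    from collections import defaultdict

    class EventService:
        """Minimal publish/subscribe sketch of the kind of service listed above."""

        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, event):
            for callback in self._subscribers[topic]:
                callback(event)

    # Example: a portal bridge listens for component status events
    events = EventService()
    events.subscribe("component_status", lambda e: print("status:", e))
    events.publish("component_status", {"component": "RF", "state": "running"})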

12
Plans for Coming Year: IPS Environment
  • Complete MCMD implementation
  • Generalized monitoring component and web
    interface
  • More flexibility, easier configuration of what is
    exported for monitoring
  • Add authentication and personalization
    capabilities to portal
  • Bring prototype metadata management capabilities
    into production
  • Preliminary implementation of tight coupling
    capability
  • Support components coupling more frequently,
    exchanging more data
  • Task launch overheads become too high
  • File-based data exchange may be insufficient
  • Conceptual model: a CCA-like environment with tightly coupled components in a single parallel job, exchanging data through subroutine calls (see the sketch after this list)
  • Implementation will be more intrusive into the physics codes than current components
  • The pay-off is generality and reusability of new components in other IPS contexts
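To make "exchange data through subroutine calls" concrete, a tightly coupled component might expose an in-memory interface along these lines; this is a sketch under assumed names and signatures, not the planned IPS interface:

    import numpy as np

    class TightComponent:
        """Illustrative in-memory component: no files, data passed as arrays."""

        def init(self, plasma_state):
            self.state = plasma_state  # shared in-memory state (assumed layout)

        def step(self, dt):
            # Advance this component's piece of the physics and update shared fields in place
            self.state["P_rf"] = np.zeros_like(self.state["rho"])  # placeholder physics
            return self.state

    # The framework would drive tightly coupled components inside one parallel job
    comp = TightComponent()
    comp.init({"rho": np.linspace(0.0, 1.0, 101)})
    comp.step(1.0e-3)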

13
All simulation data exchanged between components goes through the Plasma State, the lingua franca of the fusion simulation
  • A Fortran 90 module supports in-memory or file-based data exchange (netCDF)
  • Very simple user interface: functions get, store, commit (a sketch follows after this list)
  • Other powerful functions are available but not required, e.g. grid interpolation
  • Supports multiple state instances, e.g. current/prior/next state
  • Code is automatically generated from a state specification text file, for ease and accuracy of update
  • Some types of data we haven't dealt with yet: distribution functions are just code-dependent filenames
  • Being shared with other projects
  • Component-to-component data exchange in TRANSP
    and PTRANSP
  • Coupling of neutral beam and fusion product sources to the FACETS C/C++ transport driver
  • Discussion with CPES for coupling RF fields to
    XGC codes and XGC distribution functions to IPS
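Because the Plasma State also supports file-based (netCDF) exchange, a Python-side utility can read and update it with standard tools. This is a hedged sketch using the netCDF4 package; the file name and the variable name "Te" are assumptions about the state's contents, and the real get/store/commit interface is the Fortran 90 module described above:

    from netCDF4 import Dataset

    STATE_FILE = "cur_state.nc"  # current Plasma State instance (illustrative name)

    # "get": read a profile from the shared state
    with Dataset(STATE_FILE) as state:
        te = state.variables["Te"][:]        # electron temperature profile (assumed name)

    # "store"/"commit": write the updated profile back for the next component
    with Dataset(STATE_FILE, "a") as state:
        state.variables["Te"][:] = te * 1.0  # placeholder update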

14
Significant Plasma State redesign since last year
15
Our arsenal of developed components includes
  • RF Solve (ion cyclotron): two implementations
  • AORSA2D
  • TORIC
  • Fokker Planck Solve
  • CQL3D
  • NUBEAM: not yet a wrapped component, but can run via TSC/TRANSP coupling
  • Equilibrium and profile advance (EPA)
  • TSC implementation
  • Linear MHD component: multiple implementations (BALLOON, DCON, PEST-I and PEST-II); nonlinear MHD: M3D/NIMROD
  • MONITOR: accumulates state data into time series for plotting/monitoring with ElVis
  • Numerous utility components: reduced models or insertion of specified data for testing and operation of stand-alone components
  • Coming soon
  • RF EC and LH components implemented by GENRAY
  • Fokker Planck solves implemented by ORBIT-RF,
    NUBEAM (Quasi-linear)

16
Progress on Scientific Goals: physics studies
  • IPS simulations directed to validation and
    verification
  • RF power propagation and absorption benchmarking
  • Time dependent energetic minority tail formation
    in Alcator C-Mod
  • Sawtooth simulation benchmarking M3D/NIMROD (Fast
    MHD campaign)
  • ITER scenario modeling
  • Progress on analysis of RF terms in fluid
    equations, RF effects in kinetic closures.
    Initial simulations of RF effects on classical
    tearing mode evolution with NIMROD (Slow MHD
    Campaign)
  • Improvements to mathematical algorithms and libraries
  • Tri-diagonal Newton solver for EPA/GLF23
  • Electric field iteration between EPA and Fokker
    Planck components
  • Introduction of HYPRE and HPL solvers, with AORSA-specific modifications to HPL

17
The largest physics analysis task for SWIM is developing an MHD closure that includes RF effects (closures working group)
  • Modeling of RF modifications to the distribution
    function uses the quasilinear operator in the
    kinetic equation.
  • Taking moments to derive fluid equations
  • It looks simple, but many details are required, especially the effects of the distorted distribution function on the closures.
  • Currently, we assume a form of kinetic distortion similar to previously derived expressions (Held)
  • There are also many subtleties in consistency of assumptions and in the gyroviscous terms.

Additional terms due to RF
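The equations on the slide are not reproduced in this transcript. As a hedged schematic of the standard quasilinear formalism (not necessarily the project's exact notation), the kinetic equation carries an RF quasilinear operator Q_RF, and its velocity moments appear as additional force and heating terms in the fluid equations:

    \frac{\partial f_s}{\partial t} + \mathbf{v}\cdot\nabla f_s
      + \frac{q_s}{m_s}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot
        \frac{\partial f_s}{\partial \mathbf{v}}
      = C(f_s) + Q_{\mathrm{RF}}(f_s)

    \mathbf{F}_{\mathrm{RF}} = \int m_s \mathbf{v}\, Q_{\mathrm{RF}}(f_s)\, d^3v ,
    \qquad
    S_{\mathrm{RF}} = \int \tfrac{1}{2} m_s v^2\, Q_{\mathrm{RF}}(f_s)\, d^3v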
18
With this formalism, a phased approach to the complete problem has been developed:
  • Phase 0: Use an axisymmetric, phenomenological model for the RF interaction, i.e., F_RF = F_RF(R,Z) is specified as an analytic function.
  • Phase 1: Use a non-symmetric phenomenological model for the RF interaction (F_RF = F_RF(R,Z,φ)) and include equilibrium toroidal flow.
  • Phase 2: Pass the NIMROD equilibrium data to RF ray tracing codes, and fit the ray data generated by these codes to the parameters of the phenomenological model (e.g. Gaussian half-width, amplitude, spatial location, etc.).
  • Phase 3: Use the quasilinear diffusion tensor from GENRAY and calculate F_RF. Update GENRAY sources in time while the code is running, using the IPS.
  • Phase 4: Fully couple the RF and MHD codes such that F_RF is calculated at every time step, using an MPMD approach.
  • Phase 5: Incorporate more advanced closures and neoclassical effects.

Status markers on the slide: Done; In progress; In progress.
19
RF-induced perturbed current equilibrates over
the flux surfaces due to force balance
  • In cylindrical geometry, equilibration occurs
    over the flux surfaces after only a few Alfven
    times
  • Flux surface averages of RF terms are unnecessary
    - force balance in NIMROD spreads the RF effects
    over the flux surfaces
  • In toroidal geometry, equilibration occurs on
    similar timescales

20
The magnetic island width is reduced in the
presence of RF current drive
  • In DIII-D geometry, begin with an equilibrium unstable to the (2,1) resistive tearing mode and grow the mode to saturation
  • Turn on an axisymmetric ring of RF current, centered on the (2,1) rational surface
  • Magnetic islands shrink in response to the RF current

21
The resistive tearing mode amplitude can be
modified by ECCD even in this simple model
  • In the absence of RF, the (2,1), (3,1), and (5,2) islands grow to saturation
  • In the presence of RF, all island widths are reduced, especially (2,1)
  • More detailed modeling is under way: toroidal variation, time-dependent phasing, coupling to GENRAY, etc.

22
Plans for the coming year: physics in the Slow MHD Campaign
(Phases 0 through 5 as listed on slide 18, with status markers as they appear on this slide: Done; Published; Done; Done; In progress.)
23
Detailed, quantitative, time-dependent modeling of ICRF energetic minority tail formation on Alcator C-Mod
  • Recent experiments with pulsed P_ICRF = 2.65 MW
  • Performed at our request (mutually beneficial
    experimental collaboration)

We are using measurements such as this to carry out validation studies of the RF, Fokker Planck, and transport models used in SWIM. The simulations, coupled with experiments, can improve understanding of RF, energetic particle formation, sawtooth behavior, and transport.
24
Simulation of the Alcator C-Mod shot with TSC using an internal ICRF model
(Figure: evolution of central electron and ion temperatures.)
The TSC calculation was carried out using the Porcelli sawtooth and Coppi-Tang transport models. Model source terms were used for the ICRF heating.
(Figure: calculated electron temperature profiles at various stages of the sawtooth oscillation.)
25
Comparison of simulated peak tail energy with
experimental Neutral Particle Analyzer count rate
  • Simulation using AORSA2D and CQL3D with fixed
    plasma profiles
  • CNPA count rate binned over several square-wave
    pulses to improve signal to noise ratio
  • Reasonable agreement on tail turn-on; not so good agreement on decay time (profile effect? indication of energetic particle loss?)

A. Bader C-Mod
This is directed toward fully coupled simulations with TSC, AORSA2D, CQL3D, and MHD stability, with a synthetic diagnostic of NPA measurements, on the Jaguar XT4. This is a new capability enabled by SWIM.
26
Comparison of simulated peak tail energy with experimental Neutral Particle Analyzer count rate (annotated version of the previous slide)
Can we predict the absolute count rate? Are the highlighted features significant?
27
IPS is supporting ITER simulations for
International Tokamak Physics Activity (ITPA)
tasks
A planned operational scenario of ITER is the hybrid mode: achieve high fusion yield for a long discharge time.
  • These raise a number of critical questions
  • Are these states achievable with the heating and
    current drive systems available?
  • Are such states controllable to maintain
    stationary current, density and temperature
    profiles?
  • Are such states stable and which sorts of
    instabilities are most dangerous?
  • What are the beta (plasma pressure) limits and
    how close to the limits is it possible to
    operate?
  • How sensitive is the performance to assumptions?
  • Energy transport
  • Pedestal properties
  • Plasma edge conditions

Integrated simulations, in conjunction with experiments on present-day devices, provide the physics basis for such planned scenarios.
28
Simulation of the ITER hybrid scenario startup with neutral beam heating and current drive, ICRF heating, and fusion alpha heating
(Figures: power balance showing total, alpha, NBI+ICRF, and radiation contributions; evolution of central electron and ion temperatures; evolution of driven currents showing plasma, NB, and BS components.)
Coupling of TSC (equilibrium and profile advance), TORIC (RF ion cyclotron), and NUBEAM (neutral beam injection). Data transfer is through the Plasma State; this has not yet been run in the IPS framework (the NUBEAM IPS component is not ready yet).
29
The MPI version of NUBEAM offers a smooth heating profile without incurring extra computation time.
(Figures: beam heating and alpha-particle heating profiles at t = 200 s, showing heating to electrons and to ions vs. r/a, comparing the 16x1000 and 1x1000 cases.)
30
Simulations such as ITER scenario investigations
can benefit from MCMD by minimizing time in
(near) serial operations
Highly parallel components alternate with many smaller components that perform analyses of the completed time step and initial phases of the next time step.
Equilibrium and profile advance for step t,
including parallel anomalous transport tasks for
each flux surface, all running concurrently with
the Fokker Planck component.
Multiple stability analysis components running on
multiple toroidal modes, all running concurrently
on t-1 results.
31
There are many possible strategies for using MCMD. It also allows flexibility in exploring new time-stepping algorithms.
(Diagrams: "Initial Slow MHD Scenario" and "Ensemble of Independent Coupled Simulations", showing NIMROD and GENRAY tasks alternating along a time axis.)
Two (or more) distinct simulations share the same
processor allocation, running out of phase to
maximize utilization.
32
Role of collaborations
  • SciDAC: SWIM is built on collaborations with other projects
  • Center for Extended MHD Modeling (CEMM): linear and non-linear MHD
  • Center for Simulation of Wave-Plasma Interaction (CSWPI): RF and Fokker Planck
  • Predictive TRANSP (PTRANSP): Plasma State, TRANSP
  • OASCR Centers for Enabling Technology (CET)
  • Toward Optimal Petascale Simulations (TOPS): advanced solvers, application of PETSc to NIMROD
  • Center for Technology for Advanced Scientific Component Software (TASCS): component architecture, CCA
  • Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS): mesh transfer
  • Visualization and Analytics Center for Enabling Technologies (VACET): visualization, analysis of magnetic islands
  • Coordinated Infrastructure for Fault Tolerant
    Systems (CIFTS)
  • Students and post-docs: Columbia, Princeton U., IU, U. Wisconsin
  • International collaborations
  • International Tokamak Physics Activity (ITPA): SWIM physics/CS at "Integrated Modeling: a Global Effort" (IMAGE, ITPA subgroup)
  • Participation in ITER planning for integrated
    modeling
  • SBIR: schema-based environment for configuring, analyzing, and documenting integrated fusion simulations (Tech-X)

33
Collaborations with other FSP pilots: FACETS, CPES
  • An open relationship exists between projects
  • Participate in each other's project meetings
  • Include each other on project email distribution
    lists
  • Overlap of project participants
  • Direct technical collaborations
  • FACETS
  • C/C++ interface for the Plasma State to the FACETS driver for NUBEAM data
  • Collaboration on Transport Common Interface
    Module (Lehigh U.)
  • CPES
  • Plasma State interface for RF electric field data
    and distribution functions
  • Collaboration on representation of particle-based distribution functions for application to continuum codes, e.g. RF, stability

34
Internal Collaborations
We spent a lot of time defining functionality and
specifying interfaces
  • We have 4 groups of developers: framework, component, Plasma State, and physics code
  • Allows people to focus on what they do best
  • Project user/advisory committee: M. Greenwald (MIT), C. Kessel (PPPL), M. Murakami (ORNL/DIII-D), A. Siegel (ANL), A. Sussman (U. MD)
  • Communication: project meetings, web site, conference calls, SVN repository, TRAC bug and milestone monitoring

35
Verification and Validation Strategy
The major projects providing SWIM components have V&V programs. We benefit from that (CEMM, CSWPI, TRANSP, PTRANSP).
  • SWIM Verification
  • Dummy physics components permit verification of
    IPS framework, Portal, and monitoring functions
  • Component benchmarking
  • The SWIM plug-and-play architecture plus Plasma State data exchange facilitate component V&V efforts: AORSA/TORIC, M3D/NIMROD, many more planned
  • Unsolicited testimonial from TRANSP
  • "A bug has been detected in TRANSP's legacy Fokker Planck code FPPMOD, in the collision operator, as described below. The credit for finding this bug goes to the SWIM and PTRANSP projects -- testing associated with code development work related to these efforts led to its discovery."
  • Comparison with reduced or analytic models
  • SWIM supports easy substitution of reduced models, which can be compared to more complete models in their overlapping range of validity, e.g. the ICRF Stix model compared to CQL3D, or ORBIT-RF in the collisional (isotropic) limit.
  • Validation: comparison with experiment
  • Example: C-Mod time-dependent tail formation comparisons
  • Synthetic diagnostic development is a priority

Validation of coupled simulation is a challenge.
But the ability to couple the most sophisticated
physics models is essential to approaching that
challenge
36
Role of leadership-class computing
  • Major codes ported to Jaguar
  • M3D, NIMROD, AORSA, TORIC, CQL3D, ORBIT-RF, TSC,
    Plasma State
  • Have had issues with availability, stability,
    utilities (like the compiler)
  • Porting to PPPL SGI cluster
  • A gateway for development and testing; SWIM invested in 32 SGI processors on the PPPL cluster
  • All major codes also ported to PPPL cluster
  • INCITE project: Simulation of Wave Interactions and MHD
  • Early years were mostly CEMM and CSWPI, shifting to SWIM in later years (i.e. now)
  • Renewed in 2007
  • Not renewed in 2008, despite uniformly favorable
    reviews
  • Presently completing C-Mod simulations and MCMD studies on Jaguar under the director's discretionary account
  • Will move everything to NERSC
  • Considering applying for INCITE next call

37
Targeted deliverables for project completion
  • By the end of the five-year project we will have
  • Developed the IPS so that it provides a computational environment satisfying the SWIM project's scientific needs for concurrency, performance, and data management, and is the tool of choice for those performing tokamak scenario simulations
  • Demonstrated the capability of the SWIM system to address important questions of sawtooth instability behavior and its control by RF (Fast MHD campaign)
  • Completed the coupling of ECCD, non-linear MHD
    and kinetic closure for study of RF stabilization
    of Neoclassical Tearing Modes (NTM), and
    performed numerical simulations comparing with
    NTM experiments (Slow MHD campaign)
  • Provided a base of experience with framework/component architecture applied to integrated fusion simulation that can be factored into the design of a larger-scale Fusion Simulation Project.