WRF Outline - PowerPoint PPT Presentation

About This Presentation
Title:

WRF Outline

Description:

Develop advanced community mesoscale and data-assimilation system: ... We enjoin against passing derived data types through the model-layer interface for this reason. ...

Provided by: JohnMic9

Transcript and Presenter's Notes

Title: WRF Outline


1
WRF Outline
  • Overview and Status
  • WRF QA

www.wrf-model.org John Michalakes Mesoscale and
Microscale Meteorology Division National Center
for Atmospheric Research michalak@ucar.edu
2
Weather Research and Forecast (WRF) Model
  • Principal Partners
    • NCAR Mesoscale and Microscale Meteorology
      Division
    • NOAA National Centers for Environmental
      Prediction
    • NOAA Forecast Systems Laboratory
    • OU Center for the Analysis and Prediction of
      Storms
    • U.S. Air Force Weather Agency
    • U.S. DoD HPCMO
    • NRL Marine Meteorology Division
    • Federal Aviation Administration
  • Additional Collaborators
    • NOAA Geophysical Fluid Dynamics Laboratory
    • NASA GSFC Atmospheric Sciences Division
    • NOAA National Severe Storms Laboratory
    • EPA Atmospheric Modeling Division
    • University Community
  • Large, collaborative effort; pool
    resources/talents
  • Develop advanced community mesoscale and
    data-assimilation system
  • Focus on 1-10 km; accurate, efficient, scalable
    over a broad range of scales
  • Advanced physics, data assimilation, nesting
  • Flexible, modular, performance-portable with
    single-source code

3
WRF Software Architecture
[Architecture diagram: Driver layer (Driver, Config Inquiry, I/O API,
DM comm; package independent); Mediation layer (Solve); Model layer
(WRF tile-callable subroutines); External packages (Config Module,
data formats/parallel I/O, message passing, threads/OMP; package
dependent)]
  • Driver: I/O, communication, multi-nests, state
    data
  • Model routines: computational, tile-callable,
    thread-safe
  • Mediation layer: interface between model and
    driver (also handles dereferencing of driver
    layer objects to simple data structures for model
    layer)
  • Interfaces to external packages
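The dereferencing role of the mediation layer can be sketched in C. This is a minimal illustration, not WRF code: all names (domain_t, solve, advance_t) are invented, and the "physics" is a trivial stand-in. The point is the pattern: the driver holds a package-dependent derived type, and the model-layer routine receives only simple arrays and scalars.

```c
/* Hypothetical names throughout; not actual WRF identifiers. */
#define NX 4
#define NY 3

/* driver-layer state object (package dependent) */
typedef struct {
    double *t;          /* temperature field, nx*ny, row-major */
    int nx, ny;
} domain_t;

/* model layer: computational routine that sees only simple data */
static void advance_t(double *t, int nx, int ny, double dt)
{
    for (int j = 0; j < ny; j++)
        for (int i = 0; i < nx; i++)
            t[j*nx + i] += dt;      /* stand-in for real physics */
}

/* mediation layer: dereferences the driver object for the model layer */
void solve(domain_t *dom, double dt)
{
    advance_t(dom->t, dom->nx, dom->ny, dt);
}
```

Keeping derived types out of the model-layer interface is what makes the model layer package-independent (and, as the Q&A below notes, programming-language neutral).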

4
(No Transcript)
5
WRF Irons in Fire
  • New Dynamical Cores
    • NCEP NMM core (June 2003)
    • NCEP semi-implicit semi-Lagrangian core (?)
    • NRL COAMPS integration (?)
    • China Met. Admin GRAPES integration (June
      2004)
  • Data Assimilation
    • WRF 3DVAR (late 2003)
    • WRF 4DVAR (2005-6)
  • Development Initiatives
    • WRF Developmental Testbed Center (Summer 2003
      and ongoing)
    • Hurricane WRF (2006)
    • NOAA air quality initiative (WRF-Chem) (2003-04)
    • NCSA WRF/ROMS coupling using MCT (MEAD) (2003
      and ongoing)
    • DoD WRF/HFSOLE coupling using MCEL (PET) (2003)
  • WRF Software Development
    • WRF nesting and research release (late 2003)
    • Vector performance: Earth Simulator, Cray X-1
      (?)
    • NASA ESMF integration: 2004 start with time
      manager, proof-of-concept dry dynamics
    • NCSA MEAD and HDF5 I/O (?)

6
WRF QA
  • 1. What system requirements do you have - e.g.
    Unix/Linux, CORBA, MPI, Windows,...
  • UNIX/Linux, MPI, OpenMP (optional)
  • Happiest with > 0.5 GB memory per
    distributed-memory process
  • 2. Can components spawn other components? What
    sort of relationships are allowed? Directory
    structure model, parent-child process model, flat
    model, peer-to-peer model, client-server, etc.
  • WRF can spawn nested domains within this
    component; no other spawning.
  • Applications are basically peer-to-peer, though
    the underlying coupling infrastructure may be
    implemented as client-server or other models.
  • 3. What programming language restrictions do you
    have currently?
  • Using Fortran90 and C but have no restrictions
    per se
  • 4. Are you designing to be programming language
    neutral?
  • Yes. We enjoin against passing derived data types
    through the model-layer interface for this
    reason. (This has been violated by 3DVAR as
    implemented in the WRF ASF, however)
  • 5. What sort of component abstraction do you
    present to an end user? Atmosphere component,
    Ocean component, an Analysis component, a generic
    component etc...
  • Generic. The person putting the components
    together is required to know what each needs from
    the other.
  • Models see coupling as I/O and read and write
    coupling data through the WRF I/O/coupling API.
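The "coupling as I/O" idea above can be sketched in C. This is an illustration under invented names (cpl_write, cpl_read, cpl_store); the real WRF I/O/coupling API differs. The exporting component writes a coupling field through a generic call, and the peer reads it back like any other input stream.

```c
#include <string.h>

#define CPL_MAX 64

/* stand-in for the underlying transport (file, MPI, coupler, ...) */
static double cpl_store[CPL_MAX];
static int    cpl_count = 0;

/* exporting side: write a coupling field */
void cpl_write(const double *field, int n)
{
    memcpy(cpl_store, field, n * sizeof *field);
    cpl_count = n;
}

/* importing side: read it back like ordinary input */
int cpl_read(double *field, int n)
{
    if (n > cpl_count) n = cpl_count;
    memcpy(field, cpl_store, n * sizeof *field);
    return n;
}
```

Because the model only sees write/read calls, the transport behind the API can be swapped (which is how the answer to question 9 below can defer implementation choices to the API layer).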

7
WRF QA
  • 9. Does data always have to be copied between
    components or are there facilities for sharing
    references to common structures across component
    boundaries? When, how and why is this needed?
  • All data is exchanged through the WRF
    I/O/Coupling API; how this is implemented is up
    to the API. However, the API doesn't presently
    have semantics for specifying structures that are
    common across component boundaries.
  • 13. What levels and "styles" of
    parallelism/concurrency or serialization are
    permitted/excluded. e.g. can components be
    internally parallel, can multiple components run
    concurrently, can components run in parallel
  • WRF runs distributed-memory, shared-memory, or
    hybrid. WRF I/O/Coupling API is assumed
    non-thread-safe. No restriction on
    concurrency/parallelism with other components.
  • 14. Do components always have certain specific
    functions/methods?
  • WRF always produces a time integration of the
    atmosphere; however, there are several dynamics
    options and numerous physics options.
  • Note: the time loop is in the driver layer, and
    it's straightforward to run over
    framework-specified intervals (the nested domains
    do this under the control of parent domains)
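A driver-layer time loop running over a specified interval can be sketched as follows. The types and names (sim_clock, integrate, solve_step) are invented for illustration; the real WRF driver differs.

```c
/* Hypothetical clock type; not a WRF identifier. */
typedef struct { double t, dt; } sim_clock;

/* advance the state one step (stand-in for the real solver) */
static void solve_step(sim_clock *ck) { ck->t += ck->dt; }

/* integrate from the current time up to t_stop; returns steps taken.
 * A nested domain would run the same loop under its parent's control. */
int integrate(sim_clock *ck, double t_stop)
{
    int n = 0;
    while (ck->t + 0.5 * ck->dt < t_stop) {  /* tolerant stop test */
        solve_step(ck);
        n++;
    }
    return n;
}
```

Keeping the loop in the driver is what makes it easy for a framework (or a parent domain) to hand WRF an interval to integrate over, rather than a single monolithic run.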

8
WRF QA
  • 15. What, if any, virtualization of process,
    thread and physical CPU do you use?
  • Unit of work is a tile; model-layer subroutines
    are designed to be "tile-callable".
  • Distributed memory: mapping of distributed-memory
    processes to OS-level processes and physical
    processors is up to the underlying comm layer.
  • Shared memory: up to the underlying thread
    package (we currently use OpenMP)
  • 16. Can components come and go arbitrarily
    throughout execution?
  • WRF nested domains (which are part of this
    component) can be dynamically
    instantiated/uninstantiated.
  • WRF as a component has a relatively high overhead
    for starting up; it wouldn't be ideal as a
    transient component.
  • 17. Can compute resources be acquired/released
    throughout execution?
  • Definitely not at the application layer at this
    time; decomposition is basically static, and the
    number of distributed-memory processes cannot
    change over the course of a run.
  • We intend to allow migration of work between
    processes for load balancing.
  • Whole processes might someday migrate, but WRF
    would have to include support for migration of
    state (which it does not currently have).
  • Using a smaller or larger number of shared-memory
    threads would be up to the thread package.
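The tile-as-unit-of-work idea from question 15 can be sketched in C. Sizes and names here are illustrative, not WRF's actual decomposition: the mediation layer loops over tiles, each call touches only its own tile, and an OpenMP pragma marks where the optional shared-memory layer applies.

```c
/* Illustrative sizes; not WRF's actual decomposition. */
#define NPTS   8    /* points in the decomposed dimension */
#define NTILES 4    /* tiles handed out as units of work */

/* model layer: thread-safe, touches only its own tile [is, ie) */
static void tile_callable(double *f, int is, int ie)
{
    for (int i = is; i < ie; i++)
        f[i] *= 2.0;            /* stand-in for real computation */
}

/* mediation layer: loop over tiles; the pragma is where the
 * shared-memory (OpenMP) layer applies, and is a no-op otherwise */
void solve_tiles(double *f)
{
    #pragma omp parallel for
    for (int t = 0; t < NTILES; t++) {
        int is = t * (NPTS / NTILES);
        int ie = is + NPTS / NTILES;
        tile_callable(f, is, ie);
    }
}
```

Because tiles don't overlap and the tile-callable routine holds no shared mutable state, the same code runs serially, threaded, or inside a distributed-memory patch.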

9
WRF QA
  • 18. Does your system have an initialization phase?
  • Yes, a big one. Initial I/O is relatively costly.
  • 19. Is the high-level control syntax the same in
    different circumstances? e.g. serial component to
    serial component, versus parallel M component to
    parallel N component.
  • Not strictly applicable, since we're talking
    about a single component, WRF.
  • However, WRF can be easily subroutine-ized.
  • 20. What standards must components adhere to -
    languages, standard functions/API's, standard
    semantics etc...
  • Standards in WRF apply internally between layers
    in software hierarchy and in API's to external
    packages. The API's are WRF-specific, allowing
    flexibility over a range of implementations.
  • Plan to merge the WRF I/O/Coupling API with the
    ESMF API specification, provided it gives similar
    functionality and interfaces with WRF in the same
    way.
  • 23. What is your approach to saving and restoring
    system state?
  • Write and read restart data sets at
    user-specified intervals.
  • 26. Who are your target component authors?
  • Physics developers, dynamical core developers,
    and the WRF development team
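The save/restore answer above amounts to periodic checkpointing. A minimal sketch, with invented names (state_t, write_restart, run) and an in-memory stand-in for what would really be a restart file:

```c
/* Hypothetical state and I/O; real restart data sets are files. */
typedef struct { int step; double t; } state_t;

static state_t restart_set;     /* stand-in for a restart data set */

static void write_restart(const state_t *s) { restart_set = *s; }
void        read_restart(state_t *s)        { *s = restart_set; }

/* run nsteps, writing restart data every `interval` steps */
void run(state_t *s, int nsteps, int interval)
{
    for (int k = 0; k < nsteps; k++) {
        s->step += 1;
        s->t    += 1.0;
        if (s->step % interval == 0)
            write_restart(s);
    }
}
```

On a crash or planned stop, a new run reads the most recent restart set and resumes from that step rather than from the (costly) initialization phase.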

10
(No Transcript)
11
(No Transcript)
12
(No Transcript)