1
Ensemble Prediction To Evaluate Flow Dependent
Variations In Predictability
  • Yuejian Zhu
  • Environmental Modeling Center
  • NCEP/NWS/NOAA
  • Presentation for COAA 2004, Beijing
  • June 28, 2004
  • http://wwwt.emc.ncep.noaa.gov/gmb/ens/
  • http://wwwt.emc.ncep.noaa.gov/gmb/yzhu/
  • Acknowledgements
  • Z. Toth, H-L Pan and S. Lord (EMC)
  • R. Buizza (ECMWF) and P. L. Houtekamer (MSC)

2
Contents
  • Introduction and useful references
  • Why do we need ensemble forecasts?
  • Methodologies of ensemble model forecasting
  • Review of statistical ensemble forecasting
  • The skill of ensemble model forecasts
  • Applications of ensemble forecasts
  • Discussion and conclusions

3
Introduction
  • Background and ensemble concept
  • The ensemble concept dates back a few decades
  • Early applications used lagged ensembles
  • Due to limited computing resources (for long runs)
  • Required for extended-range forecasts
  • Especially the week-2 forecast
  • Improved data analysis and model physics
  • Less room to improve skill?
  • Improved observing systems
  • Better understanding of forecast errors

4
References
  • Toth, Talagrand, Candille and Zhu, 2003: "Probability and ensemble forecasts", book chapter.
  • Zhu, 2004: "Probabilistic forecasts and evaluations based on a global ensemble prediction system", in Observation, Theory and Modeling of Atmospheric Variability.
  • Zhu, Iyengar, Toth, Tracton and Marchok, 1996: "Objective evaluation of the NCEP global ensemble forecasting system", AMS conference proceedings.
  • Toth, Zhu and Marchok, 2001: "The use of ensembles to identify forecasts with small and large uncertainty", Weather and Forecasting.
  • Zhu, Toth, Wobus, Richardson and Mylne, 2002: "The economic value of ensemble-based weather forecasts", BAMS.
  • Buizza, Houtekamer, Toth, Pellerin, Wei and Zhu, 2004: "Assessment of the status of global ensemble prediction", accepted by MWR.
  • ...and more related articles

8
Prob. Evaluation (cost-loss analysis)
  • Based on hit rate (HR) and false alarm (FA)
    analysis
  • Economic Value (EV) of forecasts (a sketch of the EV calculation follows below)

(Figure: EV curves for the ensemble and the deterministic forecast; the ensemble shows an average 2-day advantage over the deterministic forecast)
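A minimal sketch, not from the presentation, of the static cost-loss economic-value calculation that underlies figures like this one. The function name and the hit rate, false alarm rate and climatological frequency used in the example are illustrative assumptions; only the formula (value as the fraction of the perfect-forecast saving that is realized) is the standard one.

import numpy as np

def economic_value(hit_rate, false_alarm_rate, clim_freq, cost_loss_ratios):
    """Economic value in the static cost-loss model, given hit rate H,
    false alarm rate F, climatological event frequency o_bar, and an
    array of users' cost/loss ratios C/L (expenses are per unit loss L)."""
    H, F, o = hit_rate, false_alarm_rate, clim_freq
    cl = np.asarray(cost_loss_ratios, dtype=float)
    e_climate = np.minimum(cl, o)                 # always protect, or never protect
    e_perfect = o * cl                            # protect only when the event occurs
    e_forecast = F * (1.0 - o) * cl + H * o * cl + (1.0 - H) * o
    # value = fraction of the saving of a perfect forecast that is realized
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Illustrative example: EV curves for a hypothetical ensemble vs. deterministic forecast
cl = np.linspace(0.01, 0.99, 99)
ev_ens = economic_value(0.85, 0.10, 0.30, cl)     # H, F, o_bar chosen for illustration only
ev_det = economic_value(0.70, 0.10, 0.30, cl)
print(f"peak EV - ensemble: {ev_ens.max():.2f}, deterministic: {ev_det.max():.2f}")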
10
Prob. Evaluation (useful tools)
  • Identifying forecasts with small and large uncertainty
  • 1 day (large uncertainty), 4 days (control), 10-13 days (small uncertainty)

14
NCEP global ensemble configurations
New configuration, updated 04/29/2004
15
The skill of global ensemble
  • Simple measurements
  • Ensemble mean (used like a deterministic forecast)
  • PAC, RMS and spread
  • Distributions
  • Talagrand distribution
  • Outliers
  • Probabilistic evaluations
  • Brier skill scores (BSS)
  • In terms of resolution and reliability
  • Ranked probability skill scores (RPSS)
  • Relative operating characteristics (ROC)
  • In terms of hit rate and false alarm rate
  • Economic values (EV)

16
Simple measurements for the ensemble mean
Northern Hemisphere 500 hPa geopotential height: Pattern Anomaly Correlation (PAC) and Root Mean Square (RMS) error
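For reference, the standard definitions behind these two measures (not taken from the slides; area weighting over the grid is omitted), with forecast f, verifying analysis a and climatology c:

\mathrm{PAC} = \frac{\sum_i (f_i - c_i)(a_i - c_i)}{\sqrt{\sum_i (f_i - c_i)^2}\,\sqrt{\sum_i (a_i - c_i)^2}},
\qquad
\mathrm{RMSE} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} (f_i - a_i)^2}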
17
One day advantage
Due to model imperfection
18
Outliers from the Talagrand distribution
(Figure: an excess of outliers indicates that the spread is too small or that the forecast is biased; a negative excess indicates that the spread is too big)
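A minimal sketch, under stated assumptions, of how a Talagrand (rank) histogram and the outlier statistic can be computed; the array shapes, variable names and the synthetic under-dispersive example are illustrative, not the NCEP diagnostic code.

import numpy as np

def talagrand_histogram(ensemble, analysis):
    """Rank (Talagrand) histogram: where the verifying analysis falls among
    the ensemble members.  ensemble: (n_members, n_cases); analysis: (n_cases,).
    Returns counts for the n_members + 1 rank bins."""
    n_members = ensemble.shape[0]
    ranks = np.sum(ensemble < analysis[None, :], axis=0)   # members below the analysis
    return np.bincount(ranks, minlength=n_members + 1)

def outlier_fraction(hist):
    """Observed fraction of cases outside the ensemble range (the two extreme bins),
    and the 2/(n_members+1) expected for a statistically consistent ensemble."""
    observed = (hist[0] + hist[-1]) / hist.sum()
    expected = 2.0 / hist.size
    return observed, expected

# Illustrative synthetic example: a 10-member ensemble whose spread (0.6) is
# smaller than the typical forecast error (1.0), i.e. under-dispersive
rng = np.random.default_rng(0)
analyses = rng.normal(size=5000)
members = 0.6 * rng.normal(size=(10, 5000))
hist = talagrand_histogram(members, analyses)
print(outlier_fraction(hist))   # observed outlier fraction well above the expected 2/11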
19
(No Transcript)
20
ROC area
May-July 2002
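A minimal sketch, not the operational verification code, of how the hit rate / false alarm rate pairs and the ROC area behind figures like this can be computed for a probabilistic (ensemble) forecast; the threshold set and the synthetic example data are illustrative assumptions.

import numpy as np

def roc_points(prob_forecasts, event_occurred, thresholds):
    """Hit rate and false alarm rate of the probabilistic forecast for each
    probability threshold, as used to construct a ROC curve."""
    obs = event_occurred.astype(bool)
    hr, far = [], []
    for t in thresholds:
        warn = prob_forecasts >= t
        hr.append(np.sum(warn & obs) / max(np.sum(obs), 1))
        far.append(np.sum(warn & ~obs) / max(np.sum(~obs), 1))
    return np.array(hr), np.array(far)

def roc_area(hr, far):
    """Area under the ROC curve by the trapezoidal rule, with the (0,0) and (1,1) endpoints added."""
    f = np.concatenate(([1.0], far, [0.0]))
    h = np.concatenate(([1.0], hr, [0.0]))
    order = np.argsort(f)
    f, h = f[order], h[order]
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(f)))

# Illustrative example with synthetic probabilities and outcomes
rng = np.random.default_rng(1)
p = rng.random(2000)
occurred = rng.random(2000) < p                  # outcomes consistent with the probabilities
hr, far = roc_points(p, occurred, np.linspace(0.05, 0.95, 19))
print(f"ROC area: {roc_area(hr, far):.2f}")      # > 0.5 indicates skill relative to chance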
21
Brier Skill Scores and decomposition
(Figure: reliability and resolution components)
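For reference, the standard (Murphy) decomposition behind this slide, with K probability bins, N_k cases in bin k, bin probability p_k, conditional observed frequency \bar{o}_k and climatological frequency \bar{o} (standard definitions, not taken from the slides):

\mathrm{BS} = \underbrace{\frac{1}{N}\sum_{k=1}^{K} N_k\,(p_k - \bar{o}_k)^2}_{\text{reliability}}
 \;-\; \underbrace{\frac{1}{N}\sum_{k=1}^{K} N_k\,(\bar{o}_k - \bar{o})^2}_{\text{resolution}}
 \;+\; \underbrace{\bar{o}\,(1 - \bar{o})}_{\text{uncertainty}},
\qquad
\mathrm{BSS} = 1 - \frac{\mathrm{BS}}{\bar{o}\,(1 - \bar{o})}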
22
Applications
  • Spaghetti Diagram
  • Early application (easy to understand)
  • Probabilistic quantitative precipitation forecast
    (PQPF)
  • Early application (first probabilistic forecast)
  • Ensemble mean and spread
  • Relative measure of predictability (RMOP)
  • Combines mean, spread, and spatial/temporal statistical averaging
  • Precipitation type forecast
  • PQPF, PQRF, PQSF, PQFF and PQIF

23
(Figure: example probabilistic products - PQPF, PQRF, PQSF, PQFF and PQIF)
24
RMOP
500hPa Z
26
Example of joint products
27
Issues to discuss
  • Ensemble configurations
  • Initial perturbations and physical
    parameterizations.
  • Ensemble sizes and resolutions
  • Multi-model ensemble (future NAEFS)
  • Ensemble applications
  • Post-processing of the ensemble mean and spread
  • Statistical calibration (bias-free)
  • Probabilistic forecasts (such as PQPF, RMOP)
  • Ensemble-based data assimilation
  • ETKF is being tested

28
ENSEMBLE SIZE (an important issue)
(Figure: spread, RMS error and PAC for the NCEP and ECMWF ensembles)
29
ENSEMBLE FROM DIFFERENT INITIAL CONDITIONS vs. LAG ENSEMBLE (SAME SIZE)
(Figure: spread, RMS error, outliers and mean bias)
30
5-day forecast
CTL is better (dominant)
8-day forecast
61 21
CTL is better (less dominant)
31
1-year data
Low resolution is better
GFS vs. CTL: resolution difference
32
Daily comparison of NH 500 hPa height PAC
NCEP - DAO
33
Daily comparison of NH 500 hPa height PAC
NCEP - MSC
34
Daily comparison of NH 500 hPa height 5-day PAC
scores
NCEP: 0.807, DAO: 0.794, MSC: 0.791
Diff. > 0.1 (10%): 18 cases
Diff. > 0.1 (10%): 56 cases
35
Bias comparison of NH 500 hPa height
MSC's bias
DAO's bias
36
Synoptic example of 500hPa height forecast
Initialized 2003021200, valid 2003021700
NCEP analysis
NCEP forecast
NCEP: 0.7978, DAO: 0.8257, MSC: 0.6068
DAO forecast
MSC forecast
37
Multi-model ensemble
  • Simple multi-model ensemble experiments (2-member ensembles)
  • Considering the ensemble mean only (a minimal sketch of the combination follows below)
  • Forecast models include NCEP/GFS, NASA/DAO, MSC, NOGAPS, and one random pair of NCEP low-resolution ensemble members (reference)
  • Experiment period: 20020815-20021215 (5 months)
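A minimal sketch, assuming gridded 500 hPa height fields on a common grid, of how such a two-member multi-model mean can be formed and scored with the pattern anomaly correlation. The function names and the synthetic example fields are illustrative assumptions, not the experiment's actual code.

import numpy as np

def pattern_anomaly_correlation(forecast, analysis, climatology):
    """PAC between forecast and analysis anomalies (area weighting omitted)."""
    fa = (forecast - climatology).ravel()
    aa = (analysis - climatology).ravel()
    return float(np.dot(fa, aa) / (np.linalg.norm(fa) * np.linalg.norm(aa)))

def two_member_mean(forecast_a, forecast_b):
    """Equal-weight multi-model ensemble mean of two model forecasts on a common grid."""
    return 0.5 * (forecast_a + forecast_b)

# Illustrative example with synthetic 500 hPa height anomaly fields on a 2.5-degree grid
rng = np.random.default_rng(2)
clim = np.zeros((73, 144))
analysis = rng.normal(size=(73, 144))
gfs = analysis + 0.8 * rng.normal(size=analysis.shape)    # two imperfect "models" with
msc = analysis + 0.8 * rng.normal(size=analysis.shape)    # independent errors
combo = two_member_mean(gfs, msc)
for name, fcst in [("GFS", gfs), ("MSC", msc), ("GFS+MSC", combo)]:
    print(name, round(pattern_anomaly_correlation(fcst, analysis, clim), 3))
# With independent errors the two-member mean typically scores the highest PAC,
# consistent with the multi-model results summarized on the following slides.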

38
Individual model performances
  • EXPs: NCEP operational
  • EXPd: DAO/NASA
  • EXPm: MSC
  • EXPn: NOGAPS
  • Top: PAC for the 500 hPa 5-day forecast; NCEP is best
  • Bottom left: RMS error
  • Bottom right: bias; NOGAPS has a nearly perfect (near-zero) bias

39
Multi-model ensemble results
  • EXPs: NCEP operational (reference)
  • EXPnd: NCEP + DAO
  • EXPnm: NCEP + MSC
  • EXPnn: NCEP + NOGAPS
  • EXPnp: NCEP ensemble (one random pair, lower resolution)
  • All three multi-model ensembles are better than NCEP's deterministic forecast
  • NCEP + MSC is best

40
PAC scores for wave groups
Wave 1-20
RMS error and Bias
41
Conclusions for the simple multi-model ensemble
  • NCEP/GFS + NASA/DAO is better than either one (GFS or DAO) at the 5-day forecast
  • NCEP/GFS + MSC is the best among all these forecasts
  • NCEP/GFS + NOGAPS is close to the best among these single forecasts and combinations
  • NOGAPS has a 3% lower PAC score than NCEP/GFS for the 5-day forecast
  • More experiments are needed with more members
  • Considering bias correction
  • Considering probabilistic verification (distributions)

42
Issues to discuss
  • Ensemble configurations
  • Initial perturbations and physical
    parameterizations.
  • Ensemble sizes and resolutions
  • Multi-model ensemble (future NAEFS)
  • Ensemble applications
  • Post-processing of the ensemble mean and spread
  • Statistical calibration (bias-free)
  • Probabilistic forecasts (such as PQPF, RMOP)
  • Ensemble-based data assimilation
  • ETKF is being tested

43
The pre-NWP forecast accuracy
  • A schematic illustration of the increase of
    RMSE with forecast time. The pre-NWP forecaster
    started from a persistence forecast which he
    skillfully extrapolated into the future,
    converging towards climate for longer ranges

(Figure: persistence error saturates at A√2, while the meteorologist's forecast converges towards the climatological level A)
  • The time unit can be anything from hours to
    days depending on the parameter (hours for
    clouds, days for temperature)

44
NWP more accurate - but also less
  • A good NWP model is able to simulate all
    atmospheric scales throughout the forecast. It
    has the same variance as the observations and the
    persistence forecasts, which yields an error
    saturation level 41% above the climate level (a factor of √2; see the note below)

(Figure: persistence and the world's best NWP both saturate at A√2; the meteorologist converges towards the climatological level A)
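A short note on where the √2 (i.e. 41%) saturation level comes from; this is standard reasoning, not taken from the slides. If the forecast f and the verifying analysis a become uncorrelated at long range, each with climatological variance A^2 about the same mean, then for the anomalies f', a':

\langle (f - a)^2 \rangle = \langle f'^2 \rangle + \langle a'^2 \rangle - 2\,\langle f' a' \rangle
\;\longrightarrow\; A^2 + A^2 - 0 = 2A^2,
\qquad \mathrm{RMSE} \to A\sqrt{2} \approx 1.41\,A

A climatology forecast, by contrast, has RMSE = A, which is why filtering the non-predictable scales (next slide) pulls the long-range error back down towards A.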
45
The art of good forecasting
  • The way out of the dilemma
  • Combine the high accuracy of NWP in the
    short range with a filtering of the
    non-predictable scales for longer ranges
  • This can be done both with and without the EPS

(Figure: persistence, the world's best NWP (saturating at A√2), the meteorologist (level A), and the modified NWP forecast)
46
Thank You!!!