Transcript and Presenter's Notes

Title: An evaluation of the probabilistic information in multi-model ensembles


1
An evaluation of the probabilistic information in
multi-model ensembles
  • David Unger
  • Huug van den Dool, Malaquias Pena, Peitao Peng,
    and Suranjana Saha
  • 30th Annual Climate Prediction and Diagnostics
    Workshop
  • October 25, 2005

2
Objective
  • Produce a Probability Distribution Function (PDF)
    from the ensembles.
  • Challenges
  • Calibration
  • Account for skill
  • Retain information from the ensembles
    (or discard it when there is no skill)

3
Schematic illustration
[Figure; x-axis: Temperature]
4
[Figure; label: σ]
5
Schematic example
[Figure: kernel spread σ_z; x-axis: Temperature (F)]
6
Kernel vs. Mean
7
Regression
Step 1. Standardization
Step 2. Skill Adjustment

Step 3. Make the forecast


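A standard regression formulation for these three steps, consistent with the σ_e relation on the next slide (the exact equations used in the talk are not reproduced in this transcript), would be: standardize the ensemble mean, Z = (F_m − μ_F) / σ_Fm; damp it toward climatology according to its skill, Z' = R_m Z; and issue the forecast as a normal PDF centered at μ_c + σ_c Z' with width σ_e = σ_c √(1 − R_m²), where μ_c and σ_c are the observed climatological mean and standard deviation.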
8
σ_e = σ_c √(1 − R_m²)
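For example, with the ensemble-mean skill of the most skillful case shown later (R_m = .94) this gives σ_e ≈ 0.34 σ_c, while the least skillful case (R_m = .46) gives σ_e ≈ 0.89 σ_c, so the regression forecast PDF widens toward the climatological spread as skill drops.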
9
Analysis of Ensemble Variance
  • V = Variance
  • Total Variance = Explained Variance + Unexplained Variance
  • σ_c² = V_e + V_u
  • σ_c² = σ_Fm² + σ_e²
  • σ_c² = σ_Fm² + (⟨E²⟩ + σ_z²)



10
Analysis of Ensemble Variance (Continued)
  • With the help of some relationships commonly used
    in linear regression:
  • ⟨E²⟩ = (R_m² − R_i²) σ_c²
  • σ_z² = (1 − 2R_m² + R_i²) σ_c²
  • R_z² = 2R_m² − R_i²,  R_z ≤ 1


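As a consistency check, taking σ_Fm² = R_m² σ_c² (the usual explained-variance relation, implied but not written on the previous slide), the three components sum back to the total: R_m² σ_c² + (R_m² − R_i²) σ_c² + (1 − 2R_m² + R_i²) σ_c² = σ_c². The last relation also gives σ_z² = (1 − R_z²) σ_c², so the kernel spread shrinks as R_z grows. With the values of the first example below, R_m = .94 and R_i = .90, R_z = √(2 · .94² − .90²) ≈ .97.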
11
Ensemble Calibration
Step 1. Standardization
Step 2. Ensemble Spread Adjustment:  Z_i → K (Z_i − Z_m) + Z_m
Step 3. Skill Adjustment:  Z_i → R_z Z_i

Step 4. Make the Forecast


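A minimal Python sketch of these four steps, assuming standardization against a single climatological mean and standard deviation and a kernel width σ_z = σ_c √(1 − R_z²) from the variance analysis above (the spread-adjustment factor K and the exact estimators are not specified on the slides):

    import numpy as np

    def calibrate_ensemble(members, clim_mean, clim_std, K, R_m, R_i):
        # Step 1: standardization (a single climatology is assumed here)
        Z = (np.asarray(members, dtype=float) - clim_mean) / clim_std
        # Step 2: ensemble spread adjustment about the ensemble mean
        Z_m = Z.mean()
        Z = K * (Z - Z_m) + Z_m
        # Step 3: skill adjustment, R_z^2 = 2 R_m^2 - R_i^2 (capped at 1)
        R_z = min(np.sqrt(max(2.0 * R_m**2 - R_i**2, 0.0)), 1.0)
        Z = R_z * Z
        # Step 4: the forecast PDF is a set of Gaussian kernels centered on
        # the adjusted members, with width sigma_z = sigma_c * sqrt(1 - R_z^2)
        centers = clim_mean + clim_std * Z
        sigma_z = clim_std * np.sqrt(max(1.0 - R_z**2, 0.0))
        return centers, sigma_z

Each member then contributes one Gaussian kernel N(center, σ_z²); summing the kernels (with the weights discussed on slides 17-18) gives the forecast PDF.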
12
Schematic example
[Figure: kernel spread σ_z; x-axis: Temperature (F)]
13
R_z = .97, R_Fm = .94, R_i = .90
14
R_z = .93, R_Fm = .87, R_i = .30
15
R_z = .85, R_Fm = .67, R_i = .41
16
R_z = .62, R_Fm = .46, R_i = .20
17
Weighting
18
Weights: 50% for Ensemble 1, 17% each for Ensembles 2, 3, and 4
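The weighting method itself is not spelled out on these slides, but the end product is a weighted sum of the calibrated kernel PDFs of the participating models, e.g. with the 50% / 17% / 17% / 17% weights above. An illustrative sketch (function and argument names are mine, not from the talk):

    import numpy as np

    def combined_pdf(x, model_centers, model_sigmas, weights):
        # x             : grid of forecast values on which to evaluate the PDF
        # model_centers : one array of calibrated kernel centers per model
        # model_sigmas  : kernel width sigma_z for each model
        # weights       : one weight per model (e.g. 0.50, 0.17, 0.17, 0.17)
        x = np.asarray(x, dtype=float)
        pdf = np.zeros_like(x)
        for centers, sig, w in zip(model_centers, model_sigmas, weights):
            for c in centers:
                kernel = np.exp(-0.5 * ((x - c) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
                pdf += (w / len(centers)) * kernel
        return pdf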
19
Real time system
  • Time series estimates of Statistics.
  • Exponential filter
  • F_{T+1} = (1 − α) F_T + α f_{T+1}
  • Initial guess provided from 1956-1981 CA
    statistics

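A one-line sketch of that update (the value of α, and exactly which statistics are filtered, are not given here; the 1956-1981 CA statistics supply the initial value of F):

    def exp_filter_update(stat_prev, sample_new, alpha):
        # F_{T+1} = (1 - alpha) * F_T + alpha * f_{T+1}
        return (1.0 - alpha) * stat_prev + alpha * sample_new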
20
Continuous Ranked Probability Score
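The CRPS formula on this slide was a figure. For reference, a generic sample-based estimator (not necessarily the exact form used in the talk) is CRPS = E|X − y| − ½ E|X − X′| for ensemble samples X and observation y, and the skill score on the following slides is CRPSS = 1 − CRPS / CRPS_climatology:

    import numpy as np

    def crps_sample(samples, obs):
        # Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|
        samples = np.asarray(samples, dtype=float)
        term1 = np.mean(np.abs(samples - obs))
        term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
        return term1 - term2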
21
Some Results
  • Nino 3.4 SSTs
  • Operational system
  • 15 CFS, 12 CA, 1 CCA, 1 MKV
  • Demeter Data
  • 9 CFS, 12 CA, 1 CCA, 1 MKV
  • 9 UKM, 9 MFR, 9 MPI,
    9 ECM, 9 ING, 9 LOD, 9 CER

22
Nino 3.4 SSTs
23
Nino 3.4 SST 5-month lead by initial time
1982-2001
24
CRPSS Nino 3.4 SSTs All Initial times
1982-2001
25
CRPSS Nino 3.4 SSTs All Initial times
1990-2001 (Independent)
26
CRPSS Nino 3.4 SSTs 5-month Lead, All Initial
times 1990-2001 (Independent data)
27
Reliability Nino 3.4 SST (1990-2001)
28
U.S. Temperature and Precipitation Consolidation
  • 15 CFS
  • 1 CCA
  • 1 SMLR
  • Trends are removed from the model forecasts
  • Statistics and the distribution are computed
    from the detrended data
  • The trend is added back to the end result
    (see the sketch after this list)

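A schematic sketch of that sequence; the function name, the trend estimate, and the calibrate callable (e.g. a wrapper around calibrate_ensemble above) are placeholders, not from the talk:

    import numpy as np

    def consolidate_with_trend(members, trend_estimate, calibrate):
        # remove the trend from the model forecasts, calibrate the detrended
        # ensemble, then add the trend back to the kernel centers of the PDF
        detrended = np.asarray(members, dtype=float) - trend_estimate
        centers, sigma_z = calibrate(detrended)
        return centers + trend_estimate, sigma_z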
29
Trend Problem
  • Should a skill mask be applied? How much?
    - This technique requires a quantitative
      estimate of the trend.
  • Component models sometimes learn trends, which
    makes bias correction difficult and can double
    the trend.
  • Errors in estimating high-frequency model
    forecasts

30
U.S. T and P consolidation
Skill Mask on Trends
No Skill Mask on Trends
31
CRPS and RPS-3 (BNA) Skill Scores
Temperature, 1-Month Lead, All initial times
[Table: CRPS and RPS skill scores stratified by skill category (High,
Moderate, Low, None), comparing the consolidation (Cons) with the
calibrated ensemble (Ensm)]
32
Reliability U.S. Temperatures (1995-2003)
33
CRPS and RPS-3 (BNA) Skill Scores
Precipitation, 1-Month Lead, All initial times
[Table: CRPS and RPS skill scores stratified by skill category (High,
Moderate, Low, None), comparing the consolidation (Cons) with the
calibrated ensemble (Ensm)]
34
Reliability U.S. Precipitation (1995-2003)
35
Conclusions
  • Calibrated ensemble and ensemble means score very
    closely (by CRPS)
  • Calibrated ensembles seem to have a slight
    edge.
  • No penalty for including many ensembles (but not
    much benefit either)
  • Considerable penalty for including less skillful
    ensembles; weighting is critical.
  • Probabilistic predictions are reliable (when
    looked at in terms of a continuous PDF)

36
Conclusions (Continued)
  • Calibrated ensembles tend to be slightly
    overconfident
  • Trends are a major problem and are outside the
    realm of consolidation (but they are critically
    important for seasonal temperature forecasting).

37
6-10 day Forecasts (based on Analogs)