Comprehensive Particulate Matter Modeling: A One Atmosphere Approach

Transcript and Presenter's Notes
1
PM Model Performance Goals and Criteria
James W. Boylan, Georgia Department of Natural Resources - VISTAS
National RPO Modeling Meeting, Denver, CO, May 26, 2004
2
Outline
  • Standard Bias and Error Calculations
  • Proposed PM Model Performance Goals and Criteria
  • Evaluation of Eight PM Modeling Studies Using
    Proposed Goals and Criteria
  • Discussion: Should EPA recommend PM model
    performance goals and criteria in the PM Modeling
    Guidance Document?

3
PM Model Evaluations
  • Air Quality Modeling and Ambient Measurements are
    two different ways to estimate actual ambient
    concentrations of pollutants in the atmosphere
  • Both modeling and measurements have some degree
    of uncertainty
  • Measurements should not be considered the
    absolute truth
  • Large differences exist between monitoring
    networks due to differing sampling and analysis
    techniques
  • Normalized bias and error calculations should be
    normalized not by the observations alone, but by
    the average of the modeled and observed values
    (see the worked example below)
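For example, with a modeled value of 1.0 µg/m³ against an observation of
0.01 µg/m³ (the third row of the example table on slide 7), normalizing by
the observation gives an error of |1.0 − 0.01| / 0.01 = 9900%, while
normalizing by the model-observation average gives
2 × |1.0 − 0.01| / (1.0 + 0.01) = 196%, bounded below 200%.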

4
Performance Metrics Equations
Mean Bias (µg/m³): MB = (1/N) Σ (Mᵢ − Oᵢ)
Mean Error (µg/m³): ME = (1/N) Σ |Mᵢ − Oᵢ|
Mean Normalized Bias (%) (−100 to +∞): MNB = (100/N) Σ (Mᵢ − Oᵢ)/Oᵢ
Mean Normalized Error (%) (0 to +∞): MNE = (100/N) Σ |Mᵢ − Oᵢ|/Oᵢ
Normalized Mean Bias (%) (−100 to +∞): NMB = 100 · Σ (Mᵢ − Oᵢ) / Σ Oᵢ
Normalized Mean Error (%) (0 to +∞): NME = 100 · Σ |Mᵢ − Oᵢ| / Σ Oᵢ
Mean Fractional Bias (%) (−200 to +200): MFB = (200/N) Σ (Mᵢ − Oᵢ)/(Mᵢ + Oᵢ)
Mean Fractional Error (%) (0 to +200): MFE = (200/N) Σ |Mᵢ − Oᵢ|/(Mᵢ + Oᵢ)
(Mᵢ = modeled and Oᵢ = observed concentration for pair i; N = number of pairs.)
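In code form, a minimal NumPy sketch of these eight metrics (m and o are
paired 1-D arrays of modeled and observed concentrations in µg/m³; the
function names are chosen here for illustration):

```python
import numpy as np

def mb(m, o):
    """Mean Bias (µg/m³)."""
    return np.mean(m - o)

def me(m, o):
    """Mean Error (µg/m³)."""
    return np.mean(np.abs(m - o))

def mnb(m, o):
    """Mean Normalized Bias (%): each pair is normalized by its observation."""
    return 100.0 * np.mean((m - o) / o)

def mne(m, o):
    """Mean Normalized Error (%)."""
    return 100.0 * np.mean(np.abs(m - o) / o)

def nmb(m, o):
    """Normalized Mean Bias (%): summed differences over summed observations."""
    return 100.0 * np.sum(m - o) / np.sum(o)

def nme(m, o):
    """Normalized Mean Error (%)."""
    return 100.0 * np.sum(np.abs(m - o)) / np.sum(o)

def mfb(m, o):
    """Mean Fractional Bias (%): normalized by the model/observation
    average, which bounds the result to (-200, +200)."""
    return 100.0 * np.mean(2.0 * (m - o) / (m + o))

def mfe(m, o):
    """Mean Fractional Error (%): bounded to (0, +200)."""
    return 100.0 * np.mean(2.0 * np.abs(m - o) / (m + o))
```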
5
Performance Metrics
  • Mean Normalized Bias and Error
    • Usually associated with an observation-based
      minimum threshold
    • Some components of PM can be very small, making
      it difficult to set a reasonable minimum
      threshold value without excluding a majority of
      the data points
    • Without a minimum threshold, very large
      normalized biases and errors can result when
      observations are close to zero, even though the
      absolute biases and errors are very small (see
      the sketch after this list)
    • A few data points can dominate the metric
    • Overestimations are weighted more heavily than
      equivalent underestimations
    • Assumes observations are the absolute truth
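A small illustration of the near-zero problem, using invented values and an
illustrative 0.1 µg/m³ cutoff (neither comes from the talk):

```python
import numpy as np

# Paired modeled/observed values (µg/m³); the last two observations are
# near zero, mimicking a trace PM species (values invented for illustration).
m = np.array([2.0, 3.0, 1.0, 0.9])
o = np.array([2.5, 2.8, 0.01, 0.02])

mne_all = 100 * np.mean(np.abs(m - o) / o)   # ≈ 3582%: two points dominate

keep = o >= 0.1                              # illustrative minimum threshold
mne_cut = 100 * np.mean(np.abs(m[keep] - o[keep]) / o[keep])  # ≈ 14%,
                                             # but half the points are dropped

# The fractional form needs no threshold and stays bounded below 200%:
mfe = 100 * np.mean(2 * np.abs(m - o) / (m + o))              # ≈ 104%
```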

6
Performance Metrics
  • Normalized Mean Bias and Error
    • Biased towards overestimations
    • Assumes observations are the absolute truth
  • Mean Fractional Bias and Error
    • Bounds the maximum bias and error
    • Symmetric: gives equal weight to
      underestimations and overestimations
    • Normalized by the average of the observation
      and the model

7
Example Calculations
| Model (µg/m³) | Obs. (µg/m³) | MB (µg/m³) | NMB (%) | MNB (%) | MFB (%) | ME (µg/m³) | NME (%) | MNE (%) | MFE (%) |
|---------------|--------------|------------|---------|---------|---------|------------|---------|---------|---------|
| 0.05          | 1.0          | -0.95      | -95     | -95     | -180.95 | 0.95       | 95      | 95      | 180.95  |
| 1.0           | 0.05         | 0.95       | 1900    | 1900    | 180.95  | 0.95       | 1900    | 1900    | 180.95  |
| 1.0           | 0.01         | 0.99       | 9900    | 9900    | 196.04  | 0.99       | 9900    | 9900    | 196.04  |
| 0.04          | 0.05         | -0.01      | -20     | -20     | -22.22  | 0.01       | 20      | 20      | 22.22   |
| 0.5225 (avg)  | 0.2775 (avg) | 0.245      | 88.3    | 2921.3  | 43.5    | 0.725      | 261.3   | 2978.8  | 145.0   |

(For a single model/observation pair, NMB = MNB and NME = MNE, so the
per-row normalized columns coincide; they differ only in the summary row,
which is reproduced in the sketch after this slide's list.)
  • Mean Normalized Bias and Error
    • Most biased and least useful of the three metrics
  • Normalized Mean Bias and Error
  • Mean Fractional Bias and Error
    • Least biased and most useful of the three metrics
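A quick check of the table's summary row, reusing the fractional and
normalized metric definitions (values match the table up to rounding):

```python
import numpy as np

# The four model/observation pairs from the table above (µg/m³)
m = np.array([0.05, 1.0, 1.0, 0.04])
o = np.array([1.0, 0.05, 0.01, 0.05])

mnb = 100 * np.mean((m - o) / o)                  # ≈ 2921.3 %
mfb = 100 * np.mean(2 * (m - o) / (m + o))        # ≈   43.5 %
mne = 100 * np.mean(np.abs(m - o) / o)            # ≈ 2978.8 %
mfe = 100 * np.mean(2 * np.abs(m - o) / (m + o))  # ≈  145.0 %
```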

8
PM Goals and Criteria
  • Performance Goals: level of accuracy that is
    considered to be close to the best a model can be
    expected to achieve
  • Performance Criteria: level of accuracy that is
    considered to be acceptable for regulatory
    applications
  • It has been suggested that we need different
    performance goals and criteria for:
    • Different Species
    • Different Seasons
    • Different Parts of the Country
    • 20 Haziest and 20 Cleanest Days
  • Answer: performance goals and criteria that vary
    as a function of concentration

9
PM Modeling Studies Used for Performance
Benchmarks
  • SAMI (GT)
  • July 1995 (URM/IMPROVE/variable grid)
  • July 1991 (URM/IMPROVE/variable grid)
  • May 1995 (URM/IMPROVE/variable grid)
  • May 1993 (URM/IMPROVE/variable grid)
  • March 1993 (URM/IMPROVE/variable grid)
  • February 1994 (URM/IMPROVE/variable grid)
  • VISTAS (UCR/AG/Environ)
  • July 1999 (CMAQ/IMPROVE/36 km)
  • July 1999 (CMAQ/IMPROVE/12 km)
  • July 2001 (CMAQ/IMPROVE/36 km)
  • July 2001 (CMAQ/IMPROVE/12 km)
  • January 2002 (CMAQ/IMPROVE/36 km)
  • January 2002 (CMAQ/IMPROVE/12 km)

10
PM Modeling Studies Used for Performance
Benchmarks
  • WRAP 309 (UCR/CEP/Environ)
  • January 1996 (CMAQ/IMPROVE/36 km)
  • February 1996 (CMAQ/IMPROVE/36 km)
  • March 1996 (CMAQ/IMPROVE/36 km)
  • April 1996 (CMAQ/IMPROVE/36 km)
  • May 1996 (CMAQ/IMPROVE/36 km)
  • June 1996 (CMAQ/IMPROVE/36 km)
  • July 1996 (CMAQ/IMPROVE/36 km)
  • August 1996 (CMAQ/IMPROVE/36 km)
  • September 1996 (CMAQ/IMPROVE/36 km)
  • October 1996 (CMAQ/IMPROVE/36 km)
  • November 1996 (CMAQ/IMPROVE/36 km)
  • December 1996 (CMAQ/IMPROVE/36 km)

11
PM Modeling Studies Used for Performance
Benchmarks
  • WRAP 308 (UCR/CEP/Environ)
  • Summer 2002 (CMAQ/IMPROVE/36 km/WRAP)
  • Summer 2002 (CMAQ/IMPROVE/36 km/US)
  • Winter 2002 (CMAQ/IMPROVE/36 km/WRAP)
  • Winter 2002 (CMAQ/IMPROVE/36 km/US)
  • EPA (Clear Skies)
  • Fall 1996 (REMSAD/IMPROVE/36 km)
  • Spring 1996 (REMSAD/IMPROVE/36 km)
  • Summer 1996 (REMSAD/IMPROVE/36 km)
  • Winter 1996 (REMSAD/IMPROVE/36 km)

12
PM Modeling Studies Used for Performance
Benchmarks
  • MANE-VU (GT)
  • July 2001 (CMAQ/IMPROVE/36 km)
  • July 2001 (CMAQ/SEARCH/36 km)
  • January 2002 (CMAQ/IMPROVE/36 km)
  • January 2002 (CMAQ/SEARCH/36 km)
  • Midwest RPO
  • August 1999 (CMAQ/IMPROVE/36 km)
  • August 1999 (CAMx/IMPROVE/36 km)
  • August 1999 (REMSAD/IMPROVE/36 km)
  • January 2000 (CMAQ/IMPROVE/36 km)
  • January 2000 (CAMx/IMPROVE/36 km)
  • January 2000 (REMSAD/IMPROVE/36 km)

13
PM Modeling Studies Used for Performance
Benchmarks
  • EPRI (AER/TVA/Environ)
  • July 1999 (CMAQ/IMPROVE/32 km)
  • July 1999 (CMAQ/IMPROVE/8 km)
  • July 1999 (MADRID/IMPROVE/32 km)
  • July 1999 (MADRID/IMPROVE/8 km)
  • July 1999 (CAMx/IMPROVE/32 km)

14
Mean Fractional Error
15
Mean Fractional Bias
16
Proposed PM Goals and Criteria
  • Based on MFE and MFB calculations
  • Vary as a function of species concentrations
  • Goals: MFE ≤ +50% and MFB ≤ ±30%
  • Criteria: MFE ≤ +75% and MFB ≤ ±60%
  • Less abundant species should have less stringent
    performance goals and criteria
  • Continuous functions with the following features
    (see the sketch after this list):
    • Asymptotically approach the proposed goals and
      criteria when the mean of the observed and
      modeled concentrations is greater than 2.5 µg/m³
    • Approach +200% MFE and ±200% MFB when the mean
      of the observed and modeled concentrations is
      extremely small
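These slides give the limiting behavior but not the functional form. One
exponential family with exactly those limits is sketched below; the 50%/30%
asymptotes come from the stated goals, while the decay scale tau = 0.5 µg/m³
is an illustrative assumption chosen so the curves flatten near 2.5 µg/m³,
not a coefficient taken from the talk:

```python
import numpy as np

def mfe_goal(c, asymptote=50.0, tau=0.5):
    """Concentration-dependent MFE goal (%).

    c is the mean of the observed and modeled concentrations (µg/m³).
    Approaches 200% as c -> 0 and the flat asymptote (50% for the goal,
    75% for the criteria) at higher concentrations. tau is an
    illustrative decay scale, not a value given in the talk."""
    return asymptote + (200.0 - asymptote) * np.exp(-c / tau)

def mfb_goal(c, asymptote=30.0, tau=0.5):
    """Concentration-dependent |MFB| goal (%), same form
    (±30% goal / ±60% criteria asymptotes)."""
    return asymptote + (200.0 - asymptote) * np.exp(-c / tau)

c = np.array([0.1, 0.5, 1.0, 2.5, 5.0])
print(np.round(mfe_goal(c), 1))  # ≈ 172.8, 105.2, 70.3, 51.0, 50.0
```

Tightening the goals as models mature then amounts to lowering the
asymptote or decay-scale coefficients, as the concluding remarks note.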

17
Proposed Goals and Criteria
  • Proposed PM Performance Goals
  • Proposed PM Performance Criteria

18
MFE Goals and Criteria
19
MFB Goals and Criteria
20
Model Performance Zones
  • Zone I
    • Good Model Performance
    • Level I Diagnostic Evaluation (Minimal)
  • Zone II
    • Average Model Performance
    • Level II Diagnostic Evaluation (Standard)
  • Zone III
    • Poor Model Performance
    • Level III Diagnostic Evaluation (Extended) and
      Sensitivity Testing (see the sketch after this
      list)
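A sketch of how results could be assigned to these zones, assuming Zone I
lies within the goal curves, Zone II within the criteria curves but outside
the goals, and Zone III outside the criteria; the curve form and its 0.5
µg/m³ decay scale are the illustrative assumptions from the earlier sketch:

```python
import math

def curve(c, asymptote):
    # Goal/criteria curve (%): 200% as c -> 0, flattening to `asymptote`
    # at high concentrations. The 0.5 µg/m³ decay scale is an assumption.
    return asymptote + (200.0 - asymptote) * math.exp(-c / 0.5)

def performance_zone(mfb, mfe, c):
    """Classify an (MFB %, MFE %) result at mean concentration c (µg/m³)."""
    if mfe <= curve(c, 50.0) and abs(mfb) <= curve(c, 30.0):
        return "Zone I: good performance -> Level I diagnostics (minimal)"
    if mfe <= curve(c, 75.0) and abs(mfb) <= curve(c, 60.0):
        return "Zone II: average performance -> Level II diagnostics (standard)"
    return "Zone III: poor performance -> Level III diagnostics and sensitivity tests"

# Hypothetical sulfate result: MFB = -25%, MFE = 55% at 3 µg/m³
print(performance_zone(-25.0, 55.0, 3.0))  # -> Zone II under these curves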

21
Mean Fractional Error
22
Mean Fractional Bias
23
Sulfate Mean Fractional Error
24
Sulfate Mean Fractional Bias
25
Nitrate Mean Fractional Error
26
Nitrate Mean Fractional Bias
27
Ammonium Mean Fractional Error
28
Ammonium Mean Fractional Bias
29
Organics Mean Fractional Error
30
Organics Mean Fractional Bias
31
EC Mean Fractional Error
32
EC Mean Fractional Bias
33
Soils Mean Fractional Error
34
Soils Mean Fractional Bias
35
PM2.5 Mean Fractional Error
36
PM2.5 Mean Fractional Bias
37
PM10 Mean Fractional Error
38
PM10 Mean Fractional Bias
39
CM Mean Fractional Error
40
CM Mean Fractional Bias
41
SAMI Mean Fractional Error
42
SAMI Mean Fractional Bias
43
SAMI Mean Fractional Error
44
SAMI Mean Fractional Bias
45
EPA Mean Fractional Error
46
EPA Mean Fractional Bias
47
VISTAS Mean Fractional Error
48
VISTAS Mean Fractional Bias
49
MANE-VU Mean Fractional Error
50
MANE-VU Mean Fractional Bias
51
WRAP (309) Mean Fractional Error
52
WRAP (309) Mean Fractional Bias
53
WRAP (308) Mean Fractional Error
54
WRAP (308) Mean Fractional Bias
55
MRPO Mean Fractional Error
56
MRPO Mean Fractional Bias
57
EPRI Mean Fractional Error
58
EPRI Mean Fractional Bias
59
Concluding Remarks
  • Performance evaluation should be done on an
    episode-by-episode basis, or on a month-by-month
    basis for annual modeling
  • Recommended performance goals and criteria should
    be used to help identify areas that can be
    improved in future modeling
  • Failure to meet the proposed performance criteria
    should not necessarily prohibit the modeling from
    being used for regulatory purposes
  • Extended diagnostic evaluations and sensitivity
    tests are needed to address poor performance

60
Concluding Remarks (cont.)
  • As models mature, performance goals can be made
    more restrictive by simply adjusting the
    coefficients in the performance goal and
    criteria equations (MFE and MFB)
  • Performance goals and criteria for measurements
    with longer averaging times (e.g., weekly) should
    be more restrictive, and those with shorter
    averaging times (e.g., hourly) should be less
    restrictive
  • Discussion Questions:
    • Should EPA recommend PM model performance goals
      and criteria in the PM Modeling Guidance
      Document?
    • Is there a need for performance goals for
      gaseous precursors and/or wet deposition
      species?