1
Program to Evaluate High Resolution Precipitation
Products (PEHRPP): An Update
  • Matt Sapiano
  • P. Arkin, J. Janowiak, D. Vila, Univ. of
    Maryland/ESSIC, College Park, MD
  • Joe Turk, Naval Research Laboratory, Monterey, CA
    (Presenter)
  • E. Ebert, Bureau of Meteorology, Melbourne,
    Australia

2
Outline
  • Brief explanation of PEHRPP
  • PEHRPP activities
  • Workshop highlights
  • Recommendations to IPWG

3
What is PEHRPP?
  • A collaborative effort to understand the
    capabilities and characteristics of High
    Resolution Precipitation Products
  • High resolution: daily/sub-daily, <1 degree
  • Sponsored by IPWG with broad voluntary
    participation
  • Capitalizing on existing research and operational
    activities/datasets
  • Providing a link between the observational and
    application communities

4
PEHRPP Strategy
  • PEHRPP is designed to exploit four kinds of
    validation opportunities
  • Networks based on national or regional
    operational rain gauges or radar networks
  • High-quality time series from ongoing research
    programs
  • GEWEX CEOP, TAO/TRITON buoy gauges
  • Ethiopia, Sao Paolo
  • Field program data sets
  • NAME, BALTEX
  • Coherent global scale variability as depicted by
    the various data sets - the big picture
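To illustrate the kind of gauge-versus-product comparison these opportunities support, here is a minimal sketch in Python; the function name, synthetic data, and choice of statistics are illustrative, not part of any actual PEHRPP tooling:

```python
import numpy as np

def validate_hrpp(satellite, gauge):
    """Basic HRPP-vs-gauge statistics for matched samples.

    satellite, gauge : 1-D arrays of collocated precipitation
    totals (e.g. daily, in mm) at one validation site.
    """
    sat = np.asarray(satellite, dtype=float)
    obs = np.asarray(gauge, dtype=float)
    bias = np.mean(sat - obs)                    # mean error (mm)
    rmse = np.sqrt(np.mean((sat - obs) ** 2))    # root-mean-square error
    corr = np.corrcoef(sat, obs)[0, 1]           # linear correlation
    ratio = sat.sum() / obs.sum() if obs.sum() > 0 else np.nan
    return {"bias": bias, "rmse": rmse, "corr": corr, "ratio": ratio}

# Synthetic example: a product that slightly underestimates the gauge.
rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.5, scale=8.0, size=365)      # mm/day
sat = 0.9 * gauge + rng.normal(0.0, 1.0, size=365)
print(validate_hrpp(sat, gauge))
```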

5
Real-time radar/gauge comparisons
(See Ebert et al., BAMS, 2007)
(slide courtesy of C. Kidd, with additions)
6
Guangdong Validation Site: Jianyin Liang, CMA,
with Pingping Xie, NOAA
  • April-June 2005 period of initial data (394
    hourly real-time gauges)

[Figure: Guangdong seasonal mean bias maps comparing
the gauge analysis with CMORPH, 3B42RT, 3B42, and
MWCOMB]
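A seasonal mean bias map like the one summarized above reduces to a time average of product-minus-gauge differences. A minimal sketch, assuming hourly fields already matched on a common grid; the shapes and values are made up:

```python
import numpy as np

def seasonal_mean_bias(product, gauge):
    """Seasonal mean bias map: product minus gauge analysis.

    product, gauge : arrays of shape (n_hours, n_lat, n_lon)
    holding hourly rain rates (mm/h) over one season.
    Returns an (n_lat, n_lon) map of the mean difference.
    """
    return np.nanmean(product - gauge, axis=0)

# Illustrative shapes only: 91 days x 24 h over a 20 x 30 grid.
hours, nlat, nlon = 91 * 24, 20, 30
gauge = np.random.gamma(0.3, 1.0, size=(hours, nlat, nlon))
cmorph = 1.1 * gauge   # a product with a 10% wet bias, for illustration
print(seasonal_mean_bias(cmorph, gauge).mean())   # positive mean bias
```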
7
Sub-daily, high-quality time series
Sapiano and Arkin, J. Hydrometeorology, 2008 (in
press)
[Figure: bias of each product against the buoy and
SGP gauge time series]
  • Comparison against sub-daily TAO/TRITON buoys and
    US SGP sites
  • Correlations are generally high
  • Products underestimate over the ocean
  • Products overestimate over the US in summer in
    the absence of gauge correction
  • Models were also included; they perform
    reasonably well at daily scales, less well at
    sub-daily scales
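The daily-versus-sub-daily contrast noted above can be reproduced with synthetic data: aggregating noisy 3-hourly matchups to daily totals averages out retrieval noise and raises the correlation. A sketch, with all numbers illustrative:

```python
import numpy as np

def corr_at_aggregation(sat_3h, obs_3h, steps_per_day=8):
    """Correlation at 3-hourly resolution vs after daily aggregation."""
    n = (len(obs_3h) // steps_per_day) * steps_per_day
    sat_d = sat_3h[:n].reshape(-1, steps_per_day).sum(axis=1)
    obs_d = obs_3h[:n].reshape(-1, steps_per_day).sum(axis=1)
    c3 = np.corrcoef(sat_3h[:n], obs_3h[:n])[0, 1]
    cd = np.corrcoef(sat_d, obs_d)[0, 1]
    return c3, cd

rng = np.random.default_rng(1)
obs = rng.gamma(0.4, 2.0, size=90 * 8)             # 90 days of 3-h rain
sat = obs + rng.normal(0.0, 1.5, size=obs.size)    # noisy retrieval
c3, cd = corr_at_aggregation(sat, obs)
print(f"3-hourly r = {c3:.2f}, daily r = {cd:.2f}")  # daily r is higher
```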

8
North American Monsoon Experiment Precipitation:
Daily Evolution, NERN vs. Satellite over the NAME
Domain (Nesbitt)
9
First PEHRPP Workshop
  • Hosted by the IPWG
  • 3-5 December 2007, WMO, Geneva
  • 40 attendees from 12 countries
  • Presentations and working group reports on
    applications, validation and error metrics

10
Presentations and discussions
  • Talks:
    • Precipitation Products (5)
    • Regional Validation (10)
    • Applications (9)
    • Error Metrics (6)
  • Advanced blending methodologies
    • Use of the Kalman filter (see the sketch at
      the end of this slide)
  • Updates on 8 validation sites
    • Northern Europe, Southern Europe, Japan,
      Brazil, Australia, Mozambique, Continental US,
      Western Africa
  • Use of HRPPs by users of hydrological and
    mesoscale forecast models
  • Improved and relevant error metrics
    • Focused on user requirements

Summary due to appear in December BAMS (Turk et
al.)
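The Kalman-filter blending mentioned above can be caricatured as an error-variance-weighted update of a propagated microwave estimate with a newer, noisier observation. A one-pixel sketch; the variances are invented, and this is not the algorithm of any specific product:

```python
def kalman_blend(prior, prior_var, obs, obs_var):
    """One scalar Kalman update: blend a propagated (e.g. advected
    microwave) rain estimate with a new observation (e.g. IR-based),
    weighting each by the inverse of its error variance."""
    gain = prior_var / (prior_var + obs_var)
    estimate = prior + gain * (obs - prior)
    variance = (1.0 - gain) * prior_var
    return estimate, variance

# Illustrative numbers: an advected PMW estimate whose error has grown,
# updated with a noisier IR-based estimate.
est, var = kalman_blend(prior=4.0, prior_var=2.5, obs=6.0, obs_var=4.0)
print(est, var)   # estimate lies between 4 and 6, closer to the PMW value
```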
11
Key recommendations
  • Multiple recommendations were made, the key ones
    being:
  • Several high resolution precipitation products
    exhibit useful skill, but clear superiority for
    any one is not yet evident; continuing activities
    are useful to this end
  • IPWG should establish a continuing effort to
    conduct, facilitate and coordinate validation and
    evaluation of such products
  • A concerted validation/intercomparison campaign,
    covering multiple climatic regimes and seasons,
    should be designed and conducted

12
Discussion
  • PEHRPP has become a useful framework for
    validation activities on high resolution data
  • Not all elements have been addressed
  • Synthesis of results is still lacking, hence
    the need for a concerted campaign
  • New leadership is required for activities to
    continue
  • Mandate needs to come from IPWG
  • Currently working on a joint proposal to WGNE to
    collaborate by including more model precipitation
    in PEHRPP
    • Ebert, Huffman, Kidd, Sapiano and others

13
Working Group Key Recommendations: VALIDATION

Recommendation 1: Recommend an intercomparison
project (similar to PIP and AIP) for the evaluation
of HRPPs. Products should aim for a standard of
three-hourly, 0.25-degree resolution with global
coverage, with validation done at the regional
scale. Details of the intercomparison (locations,
temporal scale, etc.) will be charged to an
intercomparison working group, in association with
the GPM working group, to maximize the impact of
such a comparison. The intercomparison should be
completed in the next 24 to 36 months.
Historical Background

Precipitation Intercomparison Program (PIP),
sponsored by NASA's WetNet Project:
  • PIP-1: First assessment of SSM/I precipitation
    algorithms on a global scale, Aug-Nov 1987.
  • PIP-2: Examined SSM/I precipitation algorithms
    on a case basis for multiple years, seasons, and
    meteorological events (Jul 1987 - Feb 1993).
  • PIP-3: Examined global-scale precipitation
    algorithms over an entire year (1992).

Algorithm Intercomparison Program (AIP), sponsored
by the Global Precipitation Climatology Project:
  • AIP-1: Japan and surrounding region during
    Jun-Aug 1989, covering frontal and tropical
    convective rainfall.
  • AIP-2: Western Europe during Feb-Apr 1991, with
    rainfall and snowfall over both land and sea
    regions.
  • AIP-3: Tropical Pacific Ocean region (1°N-4°S,
    153°-158°E) during Nov 1992 - Feb 1993.

Kidd, C., 2001: Satellite rainfall climatology: a
review. Int. J. Climatol., 21, 1041-1066.
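Any such intercomparison first requires bringing every product to the common three-hourly, 0.25-degree standard. A sketch of block-averaging a finer grid to a coarser one; the grid sizes and aggregation factors are illustrative:

```python
import numpy as np

def block_average(field, f_lat, f_lon):
    """Aggregate a high-resolution rain field to a coarser grid by
    averaging f_lat x f_lon blocks (e.g. 0.05 deg -> 0.25 deg is 5x5)."""
    nlat, nlon = field.shape
    return (field.reshape(nlat // f_lat, f_lat, nlon // f_lon, f_lon)
                 .mean(axis=(1, 3)))

# Illustrative: a 0.05-degree regional field (5 x 10 degree domain)
# averaged up to the recommended 0.25-degree standard.
fine = np.random.gamma(0.3, 2.0, size=(100, 200))
coarse = block_average(fine, 5, 5)    # -> shape (20, 40)
print(coarse.shape)
```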
14
Working Group Key Recommendations: VALIDATION

Recommendation 2: Recommend that the outputs of
current and future validation efforts be better
utilized: a working group should be formed under
IPWG as a PEHRPP activity, and should report by
the next IPWG meeting (October 2008). The
co-chairs should be a product developer and a
validation site developer.

Recommendation 3: Recommend the use of existing
HRPPs in hydrological impact studies, such as the
EUMETSAT H-SAF and the HydroMet testbeds in the
US, to assess the usefulness of HRPP products in
hydrological models.

Recommendation 4: Recommend the inclusion and/or
encouragement of the development of high-latitude
sites such as BALTEX, LOFZY, high-latitude
maritime radar sites, and/or the Canadian sites.

Recommendation 5: Recommend that countries or
weather institutions with high-quality ground
validation datasets actively participate in
IPWG-sponsored validation activities.
15
Working Group Key Recommendations: APPLICATIONS

Recommendation 1: Product developers should be
encouraged to formulate and produce error
estimates for the products, by:
  • Engaging end users: IPWG should investigate the
    forms of error required for applications
  • Engaging other product developers
  • Since full error estimates will take time to
    obtain, developers should be encouraged to make
    other information available, such as the main
    source of data (e.g. SSM/I F-13 GPROF V6) and
    the latency of PMW data (time since last MW
    overpass)
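The suggested latency metadata (time since the last MW overpass) is cheap to carry per grid cell. A hypothetical sketch of how such a field could be maintained as swaths arrive:

```python
import numpy as np

def update_latency(latency, pmw_coverage, step_minutes=30):
    """Advance a per-cell latency field (minutes since the last
    microwave overpass): reset cells seen by a PMW sensor this time
    step, and age all other cells.

    latency      : 2-D array of minutes since last overpass
    pmw_coverage : boolean 2-D array, True where a PMW swath fell
    """
    latency = latency + step_minutes
    latency[pmw_coverage] = 0
    return latency

# Illustrative: a small grid where one swath covers the left half.
lat = np.full((4, 8), 90.0)             # all cells 90 minutes stale
swath = np.zeros((4, 8), dtype=bool)
swath[:, :4] = True
print(update_latency(lat, swath))       # left half 0, right half 120
```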
16
Working Group Key Recommendations: APPLICATIONS

Recommendation 2: PEHRPP/IPWG should make
satellite organizations aware that PMW data are
useful for a broad range of applications, and that
these applications would benefit from more data,
faster data delivery, and the maintenance of all
existing data streams.

Recommendation 3: Product developers should be
encouraged to pursue other assimilation and/or
downscaling methodologies that exploit all
available information (satellites, NWP, gauges,
lightning estimates), particularly those optimized
for specific applications.
17
Working Group Key Recommendations: ERROR METRICS

There is a general feeling that the current
understanding of HRPP quality/certainty/errors
suffers from a lack of adequate error metrics that
are pertinent to users and well understood.

Long-term recommendations:
  • Physically based error characterization of
    retrievals (a key element of GPM)
  • Consistent set of basic metrics
  • Comprehensive quantitative error model that
    allows users to specify time and space scales
    and, given the space-time coefficients
    associated with a precipitation data set, obtain
    an estimated RMS error (diagnostic) or create
    synthetic precipitation fields (prognostic); see
    the sketch below
  • Work towards an assimilation-like method for
    combinations
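One conceivable shape for the diagnostic half of such an error model is a power law in the averaging scales, with coefficients fitted per data set. The sketch below is purely illustrative; the coefficients and the functional form are assumptions, not a model endorsed by the workshop:

```python
def rms_error(rain_rate, t_hours, l_degrees,
              s0=0.8, t0=3.0, l0=0.25, a=0.5, b=0.5):
    """Illustrative diagnostic error model: fractional RMS error
    shrinks as a power law with longer averaging time and larger
    averaging box.

    rain_rate          : mean rain rate (mm/h) of the averaged estimate
    t_hours, l_degrees : user-chosen averaging scales
    s0, a, b           : hypothetical coefficients fitted per data set,
                         anchored at the native 3-h, 0.25-deg scale
    """
    frac = s0 * (t0 / t_hours) ** a * (l0 / l_degrees) ** b
    return frac * rain_rate

# Error at the native scale vs after averaging to daily, 1-degree.
print(rms_error(2.0, 3.0, 0.25))    # 1.6 mm/h at native resolution
print(rms_error(2.0, 24.0, 1.0))    # ~0.28 mm/h after averaging
```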
18
Working Group Key Recommendations: ERROR METRICS

Short-term recommendations:
  • Develop a standing working group on error
    metrics
  • Agree on a short list of error metrics (each
    needs confidence intervals; see the sketch at
    the end of this slide):
    • Traditional metrics that give insight at the
      scales of interest
    • Other metrics suggested by the long-term
      vision
    • Fuzzy validation framework
    • WWRP/WGNE Joint Working Group on Verification
      list of metrics
    • Diagnostics (PDFs, conditional statistics,
      etc.)
    • Examine using transformed data in metrics
  • Test the practicality of these metrics for
    producers and their utility for users:
    • Inter-satellite errors (Joyce/NOAA subsetted
      gridded (30-min, 0.25°) precipitation data
      sets from 15 satellites/sensors)
    • Characterizing errors by regime
    • Establishing some minimum set of space/time
      correlations that are needed
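The requirement that each metric carry confidence intervals can be met with a percentile bootstrap over matched samples. A sketch for a single metric; the data and resampling choices are illustrative:

```python
import numpy as np

def bootstrap_ci(sat, obs, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a verification
    metric computed on matched satellite/gauge samples."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample with replacement
        stats[i] = metric(sat[idx], obs[idx])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(sat, obs), (lo, hi)

bias = lambda s, o: np.mean(s - o)
rng = np.random.default_rng(2)
obs = rng.gamma(0.5, 5.0, size=500)
sat = 0.85 * obs + rng.normal(0, 1, size=500)
print(bootstrap_ci(sat, obs, bias))   # bias with its 95% interval
```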