A data assimilation method and an analysis of its application to model the ocean state and dynamics

1
A data assimilation method and an analysis of its application to model the ocean state and dynamics

Konstantin P. Belyaev
Shirshov Institute of Oceanology, Russian Academy of Sciences (SIO/RAS)
Laboratório Nacional de Computação Científica (LNCC/MCT)
K_P_Belyayev@mtu-net.ru

Clemente A. S. Tanajura
Laboratório Nacional de Computação Científica (LNCC/MCT)
cast@lncc.br

First LNCC Meeting on Computational Modelling, Petrópolis, 9-13 August 2003
2
1. Introduction and Motivation
  • Question: How to represent the atmosphere and
    the ocean states?
  • Answer: With observational data, with model
    products, and with their combination by data
    assimilation (DA) schemes.
  • Jazwinski (1970); Ghil (1980); Derber and Rosati
    (1989); Ghil and Malanotte-Rizzoli (1991); Daley
    (1991); Derber et al. (1991); Evensen (1992,
    2001); Miller et al. (1994); Cohn (1997); Chen et
    al. (1997); Ji and Leetmaa (1997); Behringer et
    al. (1998); Schneider et al. (1999); Carton et
    al. (2000); Belyaev et al. (2000), and others
    developed data assimilation schemes for ocean and
    atmosphere models.

The general goal of the present work is to develop and apply data assimilation methods to monitor the ocean state and to investigate the dynamics and the predictability of the climate system. The specific goal is to demonstrate the realization of the data assimilation method developed by K. Belyaev and to investigate the impact of the assimilation scheme on the tropical Atlantic and Pacific oceans. The present work is part of the research lines proposed in PIRATA, CAMISA, and other international projects.
3
  • 2. The models and the data used
  • The present work uses
  • the data assimilation method by Belyaev et al.
    (2000)
  • temperature data from the PIRATA and the
    TAO/TRITON arrays
  • the Center for Ocean-Land-Atmosphere Studies
    (COLA) coupled ocean-land-atmosphere model (CGCM)
    (Schneider et al. 1998)
  • the Max-Planck-Institut für Meteorologie (MPIMET)
    Hamburg Ocean Primitive Equation Model (HOPE)
    (Marsland et al. 2002)
  • climatological data (Levitus Atlas 1998,
    Oberhuber Atlas 1988) and the NCEP/NCAR
    reanalysis data to perform the model integrations.

4
The observational data are heterogeneously distributed in the ocean and atmosphere. Also, not all the model prognostic variables are observed. Shown below are the Pilot Research Moored Array in the Tropical Atlantic (PIRATA) and the Tropical Atmosphere-Ocean (TAO)/TRITON array. The present work uses the temperature profiles and other data collected from these arrays.
PIRATA Array
5
The COLA CGCM components are the GFDL OGCM MOM_2 (Pacanowski 1995) and the spectral COLA AGCM (Kinter et al. 1997). The OGCM component covered the region between 40°S and 40°N around the globe. Beyond these latitudes, climatological values were prescribed. The zonal spatial resolution was 1.5° everywhere in the model domain. The meridional spatial resolution was 0.5° between 20°S and 20°N, linearly increasing to 1.5° toward 40°S and 40°N. The maximum depth was set to 4000 m, discretized by 20 levels. The top 17 levels were located in the upper 500 m. The AGCM component was global, with a T30L18 resolution.
The anomaly coupling strategy was used, in which both the ocean and the atmosphere model climatologies are known a priori. The models exchange predicted anomalies, which are computed relative to their own climatologies. The climatologies of the ocean and the atmosphere models are produced by separate uncoupled long-term simulations (a 45-year integration forced with observed SST for the AGCM; a 13-year integration with prescribed observed wind stresses, after a 25-year spin-up forced with climatological data, for the OGCM).
Given an SST field from the OGCM, the AGCM predicts a wind stress field τ. The AGCM climatological wind field is then subtracted from τ, and this anomaly is added to the observed climatology. Similar steps are taken for the calculation of the SST provided to the AGCM.
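As a sketch, the anomaly-exchange step described above can be written in a few lines. The field names below (tau_agcm, tau_obs_clim, etc.) are illustrative and not taken from the COLA coupler code.

```python
import numpy as np

# Hypothetical wind-stress fields on a shared coupler grid.
tau_agcm = np.random.rand(10, 10)       # wind stress predicted by the AGCM
tau_agcm_clim = np.random.rand(10, 10)  # AGCM's own wind-stress climatology
tau_obs_clim = np.random.rand(10, 10)   # observed wind-stress climatology

# Anomaly coupling: only the model's anomaly is exchanged, added to the
# observed climatology, so each component's mean-state bias is removed.
tau_to_ogcm = tau_obs_clim + (tau_agcm - tau_agcm_clim)
```

The same two lines, with the roles reversed, would give the SST anomaly passed back to the AGCM.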
6
The HOPE model equations are discretized on a C-grid in the horizontal plane, and a z-coordinate is used in the vertical direction. A particular feature of the model is the conformal transformation of geographical coordinates. The grid configuration used in the present work is shown below. The region of highest resolution (about 25 km) is the North Atlantic. In remote areas, e.g., the tropical Pacific, the horizontal resolution is around 300 km. The vertical resolution was uniform, with 20 vertical layers.

7
A long procedure is used to start the assimilation experiments:
  • preparation of the data to be assimilated;
  • preparation of the data for the model integration (wind stress, sensible heat flux, latent heat flux, precipitation, shortwave and longwave radiation at the ocean surface);
  • spin-up integration with climatological data (several years);
  • model forced run (several years);
  • data assimilation (DA) experiments to further adjust the ocean state to observations.
8
3. The DA method: the Belyaev version of the Kalman Filter
Details can be found in Belyaev et al. (2001, Applied Mathematical Modelling, v. 25, pp. 655-670) and Tanajura et al. (2002, Ocean Modelling, v. 52, pp. 123-132). Similarly to the classical Kalman scheme, the main idea of this scheme is to optimally correct the model variable by a quantity which depends on the model error and its evolution.
Advantages: The main difference between this method and the classical Kalman Filter is that the covariance matrix of the error between the model and the observations is formulated in phase space, not in physical space. The covariance of the error is calculated first among the observation points, given the joint probability distribution of the error predicted by the Fokker-Planck equation. Since the number of observations is often much smaller than the number of model grid points, the method is computationally efficient from this point of view. Another advantage is that the method does not prescribe the magnitude and the pattern of the analysis increment, but uses the model dynamics to predict it and to extrapolate the data information. This brings more realism to the increment.
9
Disadvantages:
Since the method requires the solution of the Fokker-Planck equation for each pair of errors at the observational points, if the number of observations increases too much the method can become computationally expensive. The DA method has no mathematical restrictions against assimilating several variables, but work is needed for its realization. In the majority of the experiments performed so far, only temperature profiles were used. Correcting only temperature creates an imbalance between the mass and the temperature fields that can generate undesirable gravity waves. This can be corrected by an initialization scheme or by the model itself. An initialization technique based on the normal modes is under development at LNCC. The DA method has two restrictions: (i) the interval between two consecutive assimilation steps should be small in relation to the total integration time (one assimilation per day is recommended); (ii) the difference between model and observations should not be very big. In our case, if the model temperature error is greater than 10°C, no assimilation is performed. NOTE: These problems are faced by all DA methods, not only by the one proposed here.
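The 10°C gross-error check mentioned above can be sketched as a simple quality-control filter on the innovations (observation minus model). The function name and structure below are illustrative, not from the authors' code.

```python
import numpy as np

MAX_INNOVATION = 10.0  # degrees C; gross-error threshold quoted in the talk

def qc_innovations(t_obs, t_model):
    """Return innovations (obs minus model), masking points whose absolute
    difference exceeds the threshold so they are not assimilated."""
    innov = np.asarray(t_obs, dtype=float) - np.asarray(t_model, dtype=float)
    innov[np.abs(innov) > MAX_INNOVATION] = np.nan  # skip assimilation there
    return innov

# Second point differs from the model by 12.5 degrees C and is rejected.
print(qc_innovations([25.0, 14.0], [24.0, 26.5]))
```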
10
3.1 The proposed version of the Kalman Filter.
Let both the model variable and the real variable be defined by

  du^m/dt = Λ(t) u^m,   du^tr/dt = Λ(t) u^tr + ξ,

where Λ(t) is the model operator, u^tr(t, x) and u^m(t, x) are the true and the model variables, ξ is a random noise with known stochastic distribution, and x is the 3-dimensional space variable. Let the error be defined by

  θ(t, x) = u^tr(t, x) − u^m(t, x).

Then, if the model operator is linear,

  dθ/dt = Λ(t) θ + ξ.

If the model is non-linear, it is also assumed that the above equation is valid, but with a random noise ξ with a different distribution.
It is assumed that

  E[θ(t, x) θ(t, y)] = R(r),

where R(r) is a known decreasing monotonic function of the distance r between the spatial points x and y.
11
The following problem is considered: find the optimal estimation û(t, x) of the true value u^tr(t, x) such that

  E[û − u^tr] = 0   and   E(û − u^tr)² ≤ E(ũ − u^tr)²

for any estimation ũ. This is the problem of finding an unbiased estimation with minimum variance.
Let the model variable be corrected by the observational data according to

  û(t, x) = u^m(t, x) + Σ_{i=1..N(t)} α_i(t, x, x_i) [u^o(t, x_i) − u^m(t, x_i)],

where û(t, x) and u^m(t, x_i) are the optimal linear estimation and the model values of the variable at time t at the analysis and observed spatial points x and x_i, respectively. N(t) is the number of points of observations at time t, which are considered here continuously distributed in time.
12
In this formulation, the observational errors are neglected, i.e., the true value and the observed value are considered the same. Therefore,

  u^o(t, x_i) = u^tr(t, x_i),

where u^o(t, x_i) is the observed value at time t at the observational point x_i.
The weights α_i = α(t, x, x_i) are unknown, and they should be determined using the minimum variance condition. This condition is equivalent to calculating the α_i that minimize E(û − u^tr)². On the other hand, this is equivalent to solving the Wiener-Hopf equation

  Σ_{i=1..N(t)} α_i K(t, x_i, x_j) = K(t, x, x_j),   j = 1, …, N(t),

where K(t, x1, x2) is the covariance function of the error. In phase space it is calculated by

  K(t, x1, x2) = ∫∫ s1 s2 p(t, s1, s2) ds1 ds2,

and p(t, s1, s2) is the joint probability density of the errors s1 = θ(t, x1), s2 = θ(t, x2).
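The analysis step amounts to solving a small linear system per analysis point: the Wiener-Hopf system gives the weights, and the weighted innovations correct the model value. The sketch below uses a simple exponential covariance model in place of the phase-space covariance computed from the Fokker-Planck density; all names, locations, and numbers are illustrative.

```python
import numpy as np

def cov(x1, x2, L=500.0, var=1.0):
    """Assumed error covariance, decreasing monotonically with distance r."""
    return var * np.exp(-np.abs(x1 - x2) / L)

x_obs = np.array([0.0, 300.0, 900.0])      # observation locations (km)
t_obs = np.array([26.0, 25.0, 22.5])       # observed temperature (deg C)
t_mod_obs = np.array([25.0, 25.5, 24.0])   # model values at the obs points

# Wiener-Hopf system: sum_i alpha_i K(x_i, x_j) = K(x, x_j)
K = cov(x_obs[:, None], x_obs[None, :])

def analyse(x, t_mod_x):
    """Corrected model value at analysis point x (model value t_mod_x)."""
    alpha = np.linalg.solve(K, cov(x, x_obs))       # weights alpha_i
    return t_mod_x + alpha @ (t_obs - t_mod_obs)    # weighted innovations

print(analyse(150.0, 25.2))  # corrected model value at x = 150 km
```

Because observational errors are neglected, the analysis reproduces the observation exactly at an observation point, consistent with u^o = u^tr above.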
13
The previous formula is a representation of the covariance function in phase space. The problem now is to find the joint probability density of the errors. Assume that the evolution of the vector of the pair of errors

  S(t) = (s1(t), s2(t)) = (θ(t, x1), θ(t, x2))

can be represented by the Langevin equation

  dS = a(S, t) dt + B(S, t) dW,

where a = (a1, a2) is the drift vector, B is the 2×2 diffusion matrix, and W is a two-dimensional Wiener process. Then, the Fokker-Planck equation can be used to represent the evolution of the joint probability density. The Fokker-Planck equation is given by

  ∂p/∂t + Σ_i ∂(a_i p)/∂s_i = (1/2) Σ_{i,j} ∂²[(B Bᵀ)_{ij} p]/∂s_i ∂s_j,

where the drift vector and the diffusion matrix are defined by the conditional moments

  a(S, t) = lim_{Δt→0} E[S(t+Δt) − S(t) | S(t) = S] / Δt,
  (B Bᵀ)(S, t) = lim_{Δt→0} E[(S(t+Δt) − S(t)) (S(t+Δt) − S(t))ᵀ | S(t) = S] / Δt,

and the symbol ᵀ denotes the transpose of the vector.
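A minimal numerical illustration of this step: integrating the Langevin equation with the Euler-Maruyama scheme for an ensemble of error pairs, whose histogram approximates the joint density that the Fokker-Planck equation evolves. The linear drift and constant diffusion matrix below are illustrative choices, not the model-derived quantities of the method.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, nsteps, nsamples = 0.01, 500, 20000
a = lambda S: -0.5 * S                      # illustrative drift vector a(S, t)
B = np.array([[0.3, 0.0], [0.1, 0.3]])      # illustrative diffusion matrix

S = np.zeros((nsamples, 2))                 # ensemble of error pairs (s1, s2)
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt), size=(nsamples, 2))  # Wiener increments
    S += a(S) * dt + dW @ B.T               # Euler-Maruyama update

# The ensemble histogram approximates the joint density p(t, s1, s2)
# whose evolution the Fokker-Planck equation describes.
p, e1, e2 = np.histogram2d(S[:, 0], S[:, 1], bins=30, density=True)
```

For this linear drift the process is Ornstein-Uhlenbeck, so the ensemble covariance relaxes toward B Bᵀ, which gives a simple check on the integration.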
14
The following notation was used: S = (s1, s2), with s1 = θ(t, x1) and s2 = θ(t, x2).
The Fokker-Planck equation requires initial and boundary conditions, and they are chosen to be physically consistent.
The drift vector a and the diffusion matrix B are actually calculated according to the model dynamics using a histogram technique. Their calculation requires the construction of the conditional probability of the error, in which the known analysis at the previous assimilation step minus its space average is taken as the condition, and the model result minus its space average is used to estimate the conditional probability. The average is subtracted from the fields because a necessary condition to be satisfied by the variable is that it should have zero mean.
15
Then, the drift vector and the diffusion matrix are calculated from the estimated conditional probabilities. For the practical realization of the method, it should be noted that for large values of the error the conditional probability is close to zero.
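The histogram technique can be illustrated in one dimension: conditional mean increments estimate the drift, and conditional mean squared increments estimate the squared diffusion, bin by bin. The Ornstein-Uhlenbeck test trajectory below stands in for the model error series; this is a sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
s = np.empty(n)
s[0] = 0.0
for k in range(n - 1):  # synthetic error trajectory with known drift/diffusion
    s[k + 1] = s[k] - 0.5 * s[k] * dt + 0.3 * np.sqrt(dt) * rng.normal()

ds = np.diff(s)
bins = np.linspace(-0.6, 0.6, 13)           # histogram bins over the error
idx = np.digitize(s[:-1], bins)
centers = 0.5 * (bins[:-1] + bins[1:])

# Conditional mean increment -> drift; conditional squared increment -> B^2.
drift = np.array([ds[idx == i].mean() / dt for i in range(1, len(bins))])
diff2 = np.array([(ds[idx == i] ** 2).mean() / dt for i in range(1, len(bins))])
# drift should be close to -0.5 * centers, and diff2 close to 0.3**2 = 0.09.
```

As the slide notes, bins at large error values are rarely visited, so the estimates there are noisy; restricting the bins to the well-populated range avoids that.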
16
4.1. Results with the PIRATA data and the COLA CGCM
The COLA CGCM was used to assimilate the PIRATA data set during 1999. Only 10 PIRATA buoys were operational at that time. Their locations are shown as circles in the next slide. The integrations were performed at CPTEC/INPE.
The figure below shows the time series during March 1999 of the temperature mean squared error (mean taken over the 10 PIRATA buoys) without assimilation (line with open circles), with assimilation on the previous day (thick solid line with squares), and with assimilation on the current day (thin solid line). After day 15, the curve splits, and the upper branch represents the standard deviation of the error of the simulation. Panel (a) contains these variances at 40-m depth, and panel (b) at 500-m depth. Unit is °C.
17
The PIRATA Experiment. The figure on the left shows the mean temperature at 80 m depth for March 1999 over the Atlantic without assimilation; on the right, with assimilation; on the bottom, the difference with minus without assimilation. Unit is °C. The locations of the 10 PIRATA buoys used in the DA experiment are represented by circles in the figure on the right. The crosses show the locations of the 2 PIRATA buoys used for independent validation in the December 1999 simulation.
18
The PIRATA Experiment. The figure below shows the time series during December 1999 of the SST mean squared error without assimilation (dashed line), with assimilation on the previous day (solid line without circles), and with assimilation on the current day (solid line with circles), averaged over the two PIRATA buoys marked by crosses (not used in the assimilation). Unit is °C.
19
The PIRATA Experiment. The figure shows the vertical profile of the mean April 1999 temperature from the PIRATA data (squares), the model (open circles), and the model with assimilation (shaded circles) at 0°, 23°W. Unit is °C.
20
The PIRATA Experiment. The figures show the March 1999 mean zonal current cross section at the equator for the COLA CGCM simulation and the GFDL simulation (top row), and the assimilation runs. Unit is 0.1 m s⁻¹.
COLA CGCM
GFDL OGCM
21
The PIRATA Experiment. The figures show the March 1999 mean zonal current cross section at the equator for the COLA CGCM and the GFDL OGCM: difference assimilation minus simulation. Unit is 0.1 m s⁻¹.
COLA CGCM
GFDL OGCM
22
4.2 Results with the TAO/TRITON data and the MPIMET HOPE model
The results presented below are from Tanajura et al. (2003) (Ocean Modelling, submitted). The HOPE model was used to assimilate the TAO/TRITON data set during the 1997 El Niño year. The figure shows the time series of the averaged temperature error variances over the TAO buoys (dashed line, solid line, and solid line with circles) at 30 m depth during 1997. The average is taken over all available TAO/TRITON buoys at the end of each month. Units in °C².
23
TAO/TRITON Experiment. The figures show the vertical cross section at the equator of the December 1997 mean temperature difference HOPE minus WOA, and TAO observation minus climatology. Units in °C.
24
TAO/TRITON Experiment. Vertical profiles of the monthly mean temperature from February (left) to December 1997 (right) at 0°N, 100°W for the control run (red line) and the assimilation run (black line). Units in °C, and the depth is in m.
25
TAO/TRITON Experiment. Vertical profile of the monthly mean temperature for December 1997 at 0°N, 100°W produced at GFDL/NOAA with MOM_3 and the Derber and Rosati (1989) scheme for the control run (dashed line) and the assimilation run (solid line). Units in °C, and the depth is in m. This figure can be compared with the December 1997 profile in the last slide.
26
TAO/TRITON Experiment. The figure shows the vertical cross section at the equator of the difference assimilation minus control of the December 1997 mean temperature. Units in °C.
27
TAO/TRITON Experiment. The figure shows the difference assimilation minus control of the December 1997 mean temperature at the depth of 215 m from the TAO/TRITON experiment. Units in °C.
28
TAO/TRITON Experiment. The figure shows the horizontal structure of the data assimilation increment at the depth of 50 m in December 1997. Unit is °C.
29
Vertical cross section at the equator of the December 1997 mean zonal current for (a) the control run and (b) the assimilation run. Units in cm s⁻¹. (c) Vertical profile of the December 1997 mean zonal current at 0°N, 110°W observed from the TAO/TRITON ADCP. Units in m s⁻¹.
TAO/TRITON Experiment
30
5. Conclusions
The data assimilation method worked in the right direction, and it decreased the error of the model temperature. It improved the simulated mixed layer depth by warming the mixed layer and cooling the region immediately below it. This was observed in both the PIRATA/COLA CGCM and the TAO-TRITON/HOPE experiments, and also at GFDL. The impact of the data information extrapolated from the deep tropics towards the subtropics, and it followed the ocean dynamics. The model with assimilation showed that the influence of the initial condition can last a couple of weeks or more. With a better model configuration and the assimilation of salinity, the influence of the initial condition can be extended, with impacts on the predictability. Large impacts of the assimilation were verified not only in temperature but also in the currents. Aspects of the zonal current at the equator coincide with the GFDL assimilation. Since the temperature influences all other model variables, a new climatology of the ocean circulation can be derived with a long-term data assimilation experiment. The data assimilation is reliable and computationally efficient. Therefore, it can be used for monitoring, forecasting, and simulating large-scale climate features.
31
6. Future Directions
  • Finish the current development of the initialization scheme to balance the temperature analyses with the mass field.
  • Investigate the sensitivity of the DA method to the initial condition of the Fokker-Planck equation and to the frequency of the observational data.
  • Apply the DA scheme and MOM_3 over the Atlantic to produce a time series of daily analyses from 1 January 1999 until today.
  • Perform 3-6 month forecasts of the tropical Atlantic Ocean state and the South American climate with a global coupled model, using the DA scheme to initialize the ocean and the NCEP or ECMWF analyses to initialize the atmosphere.
  • Establish closer collaboration with COLA (Ben Kirtman, J. Shukla), IRD (Jacques Servain), INPE (Emanuel Giarolla, Paulo Nobre), NASA/DAO (Arlindo da Silva, Ricardo Todling), IRI (Michael Tippett, Steve Zebiak), and the CAMISA group.