Title: Noise sensitivity of portfolio selection under various risk measures
1. Noise sensitivity of portfolio selection under various risk measures
- Imre Kondor
- Collegium Budapest and Eötvös University
- Application of Random Matrices to Economy and Other Complex Systems, Cracow, May 25-28, 2005
2. Contents
- Background and motivation: risk measures and noise
- The noise sensitivity of variance; random matrix filtering
- Convexity, coherence
- Risk measures in practice: VaR, regulatory measures
- Mean absolute deviation (MAD), expected shortfall (ES) and worst loss (WL)
- The effect of noise
- The feasibility problem
3. Coworkers
- Szilárd Pafka and Gábor Nagy (CIB Bank, Budapest)
- Richárd Karádi (Institute of Physics, Budapest University of Technology)
- Balázs Janecskó, András Szepessy, Tünde Ujvárosi (Raiffeisen Bank, Budapest)
- István Varga-Haszonits (Eötvös University, Budapest)
4. Background
- Portfolio selection is a tradeoff between risk and reward
- There is more or less general agreement on what we mean by reward, but the status of risk measures is controversial
- For optimal portfolio selection we have to know what we want to optimize
- We also have to be able to implement the chosen risk measure in practice
5. Motivation
- Expected returns are hard to measure on the market with any precision
- Even if we disregard returns and go for the minimal-risk portfolio, lack of sufficient information will introduce noise, i.e. error, into our decision
- The problem of noise is more severe for large portfolios (size N) and relatively short time series of observations (length T), and different risk measures are sensitive to noise to different degrees
- We have to know how the decision error depends on N and T for a given risk measure
6. A classical risk measure: the variance
- When we use the variance as a risk measure we assume that the underlying statistics is essentially multivariate normal, or close to it. For long-tailed distributions this may be grossly misleading: minimizing the variance may actually increase risk rather than decrease it.
- Minimizing the variance of a portfolio without considering return does not, in general, make much sense. In index tracking, or benchmarking, however, this is precisely what one has to do.
7. Variance 2
- Assume that the underlying process is close to normal and we want to determine the minimal-variance portfolio.
- Then we need the covariance matrix. For a portfolio of N assets the covariance matrix has O(N²) elements, while the time series of length T for N assets contain only N·T data points. For the measurement to be precise we need T >> N, which rarely holds in practice. As a result, there will be a lot of noise in the estimate, and the error will scale in N/T.
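The N/T scaling of the estimation error can be illustrated with a small simulation. This is a sketch with illustrative parameters; taking the true covariance matrix to be the identity is an assumption of this example, not of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 100  # portfolio size and length of the time series

# Sample covariance matrix estimated from T observations of N iid assets;
# the true covariance matrix is the identity.
X = rng.standard_normal((T, N))
C_hat = X.T @ X / T

# Frobenius error of the estimate; its square is of order N^2/T, which is
# consistent with the N/T scaling of the relative error discussed above.
err = np.linalg.norm(C_hat - np.eye(N))
print(round(err, 2))   # close to sqrt(N^2/T) = 2
```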
8. Variance 3
- Analytic results and simulations confirm this scaling. For N/T → 1 from below, the error diverges.
- The true covariance matrix is positive definite, but the rank of the estimated covariance matrix is the smaller of N and T. For T < N the estimated covariance matrix develops zero eigenvalues and the optimization task loses its meaning.
- For T > N, however, the optimization problem always has a solution, even if it is not very precise for T close to N.
Figure: scaling of the error due to noise
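The rank statement above can be checked directly; the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 6  # deliberately T < N

X = rng.standard_normal((T, N))
C_hat = X.T @ X / T          # N x N estimated covariance matrix

# With T < N the estimate is rank deficient: its rank is min(N, T) = T,
# so N - T of its eigenvalues are exactly zero and the matrix is singular.
print(np.linalg.matrix_rank(C_hat))   # 6
print(np.linalg.eigvalsh(C_hat)[0])   # ~0: a zero eigenvalue
```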
9. Variance 4
- In order to reduce the error, we have to reduce the effective dimension of the problem. This can be achieved by a number of filtering techniques in the literature. Since the root of the problem is a lack of sufficient information, we have to inject knowledge from some external source (expert analysis, knowledge about the structure of the market, etc.)
- This will introduce bias into the estimate. One has to choose a filtering method so as to be able to control the bias.
10. Variance 5
- Factor analysis is one of the standard filtering methods. The choice of factors depends on economic intuition; their number is not determined by any objective criterion. The latter also holds for principal components.
- L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, PRL 83, 1467 (1999) and Risk 12, No. 3, 69 (1999), and V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, H. E. Stanley, PRL 83, 1471 (1999), observed that the spectra of empirical covariance matrices are infested with noise and proposed a filtering method based on random matrix theory (RMT).
11. Variance 6
- Their method can be regarded as a systematic version of principal component analysis, with an objective criterion for the number of principal components.
- Aspects of RMT-based filtering are the subject of a couple of papers at this conference and will not be further discussed in my contribution. Instead, I will look into the noise sensitivity of various other risk measures.
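One widely used variant of RMT-based filtering is eigenvalue clipping: eigenvalues inside the Marchenko-Pastur noise band are replaced by their common average, while the large eigenvalues carrying structure are kept. The sketch below is a generic illustration of this idea, not the exact procedure of the cited papers; `rmt_filter` is a name of my own choosing.

```python
import numpy as np

def rmt_filter(R, T):
    """Clip the eigenvalues of the correlation matrix R that lie below the
    Marchenko-Pastur upper edge (the pure-noise band) to their common
    average, keep the rest, and restore the unit diagonal."""
    N = R.shape[0]
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2     # upper edge of the noise band
    w, V = np.linalg.eigh(R)
    noise = w < lam_plus
    if noise.any():
        w = w.copy()
        w[noise] = w[noise].mean()             # flatten the noise bulk
    R_f = (V * w) @ V.T                        # rebuild the matrix
    d = np.sqrt(np.diag(R_f))
    return R_f / np.outer(d, d)                # renormalize to unit diagonal

# Usage: a correlation matrix estimated from T observations of N series
rng = np.random.default_rng(0)
T, N = 200, 50
R = np.corrcoef(rng.standard_normal((T, N)), rowvar=False)
R_filtered = rmt_filter(R, T)
```

Flattening the bulk keeps the estimator positive definite while removing most of the sample noise, at the price of the bias discussed on the previous slides.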
12. Some elementary criteria on risk measures
- A risk measure is a quantitative characterization of our intuitive risk concept.
- Any reasonable risk measure must satisfy:
- - convexity
- - invariance under addition of a risk-free asset
- - assigning zero risk to a zero position
- The appropriate choice may depend on the nature of the data and on the context (investment, risk management, benchmarking, tracking, regulation, capital allocation)
13. A more elaborate set of risk measure axioms
- Coherent risk measures (P. Artzner, F. Delbaen, J.-M. Eber, D. Heath, Risk 10, 33-49 (1997); Mathematical Finance 9, 203-228 (1999)): the required properties are monotonicity, subadditivity, positive homogeneity, and translational invariance. (Homogeneity is questionable for very large positions.)
- Spectral measures (C. Acerbi, in Risk Measures for the 21st Century, ed. G. Szegö, Wiley, 2004): a special subset of coherent measures, with an explicit representation. They are parametrized by a spectral function that reflects the risk aversion of the investor.
14. Convexity
- Convexity is extremely important.
- A non-convex risk measure
- - penalizes diversification (without convexity, risk can be reduced by splitting the portfolio in two or more parts)
- - does not allow risk to be correctly aggregated
- - cannot provide a basis for rational pricing of risk
- - cannot serve as a basis for a consistent limit system
- In short, a non-convex risk measure is not a risk measure at all.
15. Risk measures in practice: VaR
- VaR (Value at Risk) is a high (95% or 99%) quantile, a threshold beyond which a given fraction (5% or 1%) of the statistical weight resides.
- Its merits (relative to the Greeks, e.g.):
- - universal: can be applied to any portfolio
- - probabilistic content: associated with the distribution
- - expressed in money
- Widespread across the whole industry and regulation. Has been promoted from a diagnostic tool to a decision tool.
- It is not convex!
16. Risk measures implied by regulation
- Banks are required to set aside capital as a cushion against risk
- Minimal capital requirements are fixed by international regulation (Basel I and II, the Capital Adequacy Directive of the EEC): the magic 8%
- Standard model vs. internal models
- Capital charges assigned to various positions in the standard model purport to cover the risk in those positions; therefore, they must be regarded as some kind of implied risk measures
- These measures try to mimic the variance by piecewise linear approximants. They are quite arbitrary, sometimes concave, and unstable
17. An example: foreign exchange
According to Annex III, 1 (CAD 1993, Official Journal of the European Communities, L14, 1-26), the capital requirement is given in terms of the gross and the net position.
Figure: the iso-risk surface of the foreign exchange portfolio
18. Mean absolute deviation (MAD)
Some methodologies (e.g. Algorithmics) use the mean absolute deviation rather than the standard deviation to characterize the fluctuation of portfolios. The objective function to minimize is then E|Σ_i w_i x_i| instead of the variance E(Σ_i w_i x_i)².
The iso-risk surfaces of MAD are again polyhedra.
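As a minimal sketch of the objective (mean-centered; `mad` is a name of my own choosing):

```python
import numpy as np

def mad(w, X):
    """Mean absolute deviation of the portfolio return series.
    X is a T x N matrix of returns, w a length-N weight vector."""
    p = X @ w                          # portfolio return at each time step
    return np.mean(np.abs(p - p.mean()))

# For a single standard normal asset the MAD is E|x| = sqrt(2/pi) ~ 0.80
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 1))
print(round(mad(np.array([1.0]), X), 2))   # close to 0.80
```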
19. Effect of noise on absolute-deviation-optimized portfolios
We generate artificial time series (say iid normal), determine the true absolute deviation, and compare it to the measured one.
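This experiment can be sketched as a linear program: the portfolio MAD is minimized by introducing auxiliary variables u_t for the absolute values. The formulation below is a standard LP rewriting, not necessarily the exact implementation used for the talk; the parameters are illustrative, and the objective is uncentered because the generating process has zero mean.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, T = 10, 60
X = rng.standard_normal((T, N))   # iid N(0,1) returns: true covariance = I

# Variables (w_1..w_N, u_1..u_T); minimize (1/T) sum_t u_t subject to
# u_t >= |sum_i w_i x_ti| and the budget constraint sum_i w_i = 1.
c = np.concatenate([np.zeros(N), np.ones(T) / T])
A_ub = np.block([[X, -np.eye(T)], [-X, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(N), np.zeros(T)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * N + [(0, None)] * T)

w = res.x[:N]
measured = res.fun                                # in-sample (optimized) MAD
true = np.sqrt(2 / np.pi) * np.linalg.norm(w)     # exact MAD of N(0, |w|^2)
print(round(measured, 3), round(true, 3))         # in-sample value is typically lower
```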
20. Noise sensitivity of MAD
- The result scales in T/N (the same as with the variance). The optimal portfolio, other things being equal, is more risky than in the variance-based optimization.
- Geometrical interpretation: the level surfaces of the variance are ellipsoids. The optimal portfolio is found as the point where this risk ellipsoid first touches the plane corresponding to the budget constraint. In the absolute deviation case the ellipsoid is replaced by a polyhedron, and the solution occurs at one of its corners. A small error in the specification of the polyhedron makes the solution jump to another corner, thereby increasing the fluctuation in the portfolio.
22. Filtering for MAD (?)
- Absolute-deviation-optimized portfolios can be filtered by associating a covariance matrix with the time series, filtering this matrix (by RMT, say), and generating a new time series from the filtered matrix. This procedure significantly reduces the noise in the absolute deviation.
- Note that this risk measure can be used for non-Gaussian portfolios as well.
23. Expected shortfall (ES) optimization
- ES is the mean loss beyond a high threshold defined in probability (not in money). For continuous pdfs it is the same as the conditional expectation beyond the VaR quantile. ES is coherent (in the sense of Artzner et al.) and as such is strongly promoted by a group of academics. In addition, Uryasev and Rockafellar have shown that its optimization can be reduced to linear programming, for which extremely fast algorithms exist.
- CVaR-optimized portfolios tend to be much noisier than either of the previous ones. One reason is the instability related to the (piecewise) linear risk measure; the other is that a high quantile sacrifices most of the data.
- In addition, CVaR optimization is not always feasible!
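The Rockafellar-Uryasev linear program mentioned above can be sketched as follows (the function name is my own; β = 0.95 is an illustrative choice):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(X, beta=0.95):
    """Sketch of the Rockafellar-Uryasev LP for minimum-CVaR weights.
    X is a T x N return matrix; variables are (w, z, u_1..u_T), where z
    plays the role of VaR and u_t are the excess losses. Returns
    (weights, cvar), or (None, None) when no finite optimum exists."""
    T, N = X.shape
    c = np.concatenate([np.zeros(N), [1.0], np.ones(T) / ((1 - beta) * T)])
    # Constraints: u_t >= -sum_i w_i x_ti - z and u_t >= 0
    A_ub = np.hstack([-X, -np.ones((T, 1)), -np.eye(T)])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T),
                  A_eq=np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :],
                  b_eq=[1.0],
                  bounds=[(None, None)] * (N + 1) + [(0, None)] * T)
    if res.status != 0:               # e.g. status 3: objective unbounded below
        return None, None
    return res.x[:N], res.fun
```

With the weights unconstrained in sign, the program can be unbounded below for some samples; this is exactly the feasibility problem referred to above.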
24. Feasibility of optimization under ES
Probability of the existence of an optimum under CVaR. Φ is the standard normal distribution. Note the scaling in N/√T.
25. A pessimistic risk measure: maximal loss
- In order to better understand the feasibility problem, select the worst return in time and minimize it over the weights:
- minimize over w:  max_t ( -Σ_i w_i x_it )
- subject to Σ_i w_i = 1
- This risk measure is coherent, one of Acerbi's spectral measures.
- For T < N there is no solution.
- The existence of a solution for T > N is again a probabilistic issue, depending on the time series sample.
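The minimax problem above is itself a small linear program; a sketch (with `min_worst_loss` a name of my own choosing) makes the feasibility issue concrete, since an unbounded LP means no finite optimum exists:

```python
import numpy as np
from scipy.optimize import linprog

def min_worst_loss(X):
    """Sketch of the maximal-loss (minimax) problem as an LP: minimize z
    subject to z >= -sum_i w_i x_ti for every t and sum_i w_i = 1.
    Returns the weights, or None when the LP is unbounded."""
    T, N = X.shape
    c = np.concatenate([np.zeros(N), [1.0]])           # minimize z
    A_ub = np.hstack([-X, -np.ones((T, 1))])           # -w.x_t - z <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T),
                  A_eq=np.concatenate([np.ones(N), [0.0]])[None, :],
                  b_eq=[1.0], bounds=[(None, None)] * (N + 1))
    return res.x[:N] if res.status == 0 else None

# For T < N the LP is (almost surely) unbounded: no solution, as stated above
rng = np.random.default_rng(0)
print(min_worst_loss(rng.standard_normal((3, 10))))   # None
```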
26. Why is the existence of an optimum a random event?
- To get a feeling for this, consider N = T = 2.
- The two loss planes intersect the plane of the budget constraint in two straight lines. If one of these is decreasing and the other is increasing as a function of the weight, then there is a solution; if both increase or both decrease, there is not. It is easy to see that for elliptical distributions the probability of there being a solution is 1/2.
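The 1/2 can be checked with a quick Monte Carlo experiment (a sketch; Gaussian returns are one convenient member of the elliptical family):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
X = rng.standard_normal((trials, 2, 2))   # N = T = 2, Gaussian returns

# On the budget line w2 = 1 - w1 the loss of observation t is linear in w1
# with slope -(x_t1 - x_t2); a finite minimax optimum exists iff the two
# slopes have opposite signs.
slopes = X[:, :, 0] - X[:, :, 1]
feasible = slopes[:, 0] * slopes[:, 1] < 0
print(round(feasible.mean(), 2))          # close to the predicted 1/2
```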
27. Probability of the feasibility of the minimax problem
- For T > N the probability of a solution (for an elliptical underlying pdf) can be written in closed form.
- (The problem is isomorphic to some problems in operations research and random geometry.)
- For N and T large, p goes over into the error function and scales in N/√T.
- For T → ∞, p → 1.
28. Probability of the existence of a solution under maximal loss. Φ is the standard normal distribution. The scaling is in N/√T again.
43. Concluding remarks
- Piecewise linear risk measures show instability (jumps) in a noisy environment
- Risk measures focusing on the far tails show additional sensitivity to noise, due to the loss of data
- The two coherent measures we have studied suffer from feasibility problems under noise
- This may make them problematic in a risk management or regulatory context
44. Some references
- Physica A 299, 305-310 (2001)
- European Physical Journal B 27, 277-280 (2002)
- Physica A 319, 487-494 (2003)
- Physica A 343, 623-634 (2004)
- Submitted to Quantitative Finance; e-print cond-mat/0402573
45. Some key points
- Laloux et al. and Plerou et al. demonstrate the effect of noise on the spectrum of the correlation matrix C. This is not directly relevant for the risk in the portfolio. We wanted to study the effect of noise on a measure of risk. The whole covariance philosophy corresponds to a Gaussian world, so our first risk measure is the variance.
46. Optimization vs. risk management
- There is a fundamental difference between the two kinds of uses of the covariance matrix σ: optimization vs. risk measurement.
- Where do people use σ for portfolio selection at all?
- - Goldman Sachs technical document
- - tracking portfolios, benchmarking, shrinkage
- - capital allocation (EWRM)
- - hidden in software packages
47. Optimization
- When σ is used for optimization, we need much more information, because we are comparing different portfolios.
- To get the optimal portfolio we need to invert σ, and since it has small eigenvalues, the error gets amplified.
48. Risk measurement, management, regulatory capital calculation
- Assessing the risk in a given portfolio: no need to invert σ; the problem of measurement error is much less serious
49. Dimensional reduction techniques in finance
- Impose some structure on σ. This introduces bias, but the beneficial effect of noise reduction may compensate for it.
- Examples:
- - single-index models (betas)
- - multi-index models
- - grouping by sectors
- - principal component analysis
- - Bayesian shrinkage estimators, etc.
- All these help; the studies are based on empirical data.
50. Contribution from econophysics
- Random matrices first appeared in a finance context in G. Galluccio, J.-P. Bouchaud, M. Potters, Physica A 259, 449 (1998)
- Then came the two PRLs with the shocking result that most of the eigenvalues of σ are just noise
- How come σ is used in the industry at all?
51. Main source of error
- Lack of sufficient information
- Input data: N·T numbers (N = size of the portfolio, T = length of the time series); required info: ~N² covariances
- The quality of the estimate is measured by Q = T/N
- Theoretically, we need Q >> 1
- Practically, T is bounded by 500-1000 (2-4 yrs), whereas N can be several hundred or thousands
- The dimension (effective portfolio size) must be reduced
52. Simplified portfolio optimization
- Go for the minimal-risk portfolio (apart from the riskless asset): minimize wᵀ·σ·w subject to Σ_i w_i = 1 (constraint on return omitted)
53. Measure of the effect of noise
- q₀² = w*ᵀ·σ⁰·w* / w⁰ᵀ·σ⁰·w⁰, where w* and w⁰ are the optimal weights corresponding to the estimated and the true covariance matrix, resp.
54. Numerical results
55. Analytical result
- q₀ = 1/√(1 - N/T) can be shown easily for Model 1. It is valid within O(1/N) corrections also for more general models.
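The analytic prediction can be checked numerically for Model 1 (true covariance C⁰ = identity; the parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 40, 200
X = rng.standard_normal((T, N))            # true covariance C0 = I
C_hat = X.T @ X / T

ones = np.ones(N)
w_star = np.linalg.solve(C_hat, ones)
w_star /= w_star.sum()                     # noisy minimum-variance weights
w_true = ones / N                          # exact optimum for C0 = I

# q0 = sqrt(w*.C0.w* / w0.C0.w0); prediction: q0 ~ 1/sqrt(1 - N/T)
q0 = np.sqrt(w_star @ w_star / (w_true @ w_true))
print(round(q0, 2), round(1 / np.sqrt(1 - N / T), 2))   # the two are close
```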
56. Results for the market sectors model
57. Comments on the efficiency of filtering techniques
- Results depend on the model used for C⁰.
- Market model: still scales with T/N, singular at T/N = 1; much improved (the filtering technique matches the structure), can go even below T = N.
- Market sectors: strong dependence on the parameters; RMT filtering outperforms the other two.
- Semi-empirical data: scattered results, RMT wins in most cases.
58.
- Filtering is very powerful in suppressing noise, particularly when it matches the underlying structure.
59. One step towards reality: the non-stationary case
- Volatility clustering → ARCH, GARCH, integrated GARCH → EWMA in RiskMetrics
- t: actual time
- T: window
- a: attenuation factor (T_eff ~ -1/log a)
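A minimal sketch of an exponentially weighted covariance estimator (the function name and the normalization convention are my own):

```python
import numpy as np

def ewma_cov(X, a=0.94):
    """Exponentially weighted covariance estimate: the observation k steps
    in the past gets weight proportional to a**k (the most recent row of X
    is the last one). Effective memory: T_eff ~ -1/log(a)."""
    T = X.shape[0]
    w = a ** np.arange(T - 1, -1, -1.0)   # oldest observation, smallest weight
    w /= w.sum()                          # normalize the weights
    return (X * w[:, None]).T @ X

# For a = 0.94 the effective memory is about -1/log(0.94) ~ 16 steps
rng = np.random.default_rng(0)
C = ewma_cov(rng.standard_normal((500, 4)))
```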
60.
- RiskMetrics: optimal a = 0.94, a memory of a few months; the total weight of the data preceding the last 75 days is < 1%.
- Filtering is useful here as well. Carol Alexander applied standard principal component analysis. RMT helps choosing the number of principal components in an objective manner.
- We need the upper edge of the RMT spectrum for exponentially weighted random matrices.
61. Exponentially weighted Wishart matrices
62.
- The density of eigenvalues is given in closed form, with v the solution of an implicit equation.
63. Spectra of exponentially weighted and standard Wishart matrices
64.
- The RMT filtering wins again: better than plain EWMA and better than plain MA.
- There is an optimal a (too long a memory will include non-stationary effects, too short a memory loses data).
- The optimal a (for N = 100) is 0.996 >> the RiskMetrics a.
65. Model 1
- Spectrum: λ = 1, N-fold degenerate
- Noise will split this degenerate eigenvalue into a band
66. The economic content of the single-index model
- x_i = β_i·x_m + ε_i, where x_m is the market return with standard deviation σ
- The covariance matrix implied by the above is σ_ij = β_i β_j σ² + δ_ij σ_ε,i²
- The assumed structure reduces the number of parameters to O(N).
- If nothing depends on i, then this is just the caricature Model 2.
67. Model 2: single-index
- Singlet: λ₁ = 1 + ρ(N-1) ~ O(N), eigenvector (1,1,1,...)
- λ₂ = 1 - ρ ~ O(1), (N-1)-fold degenerate
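The stated spectrum can be verified directly (ρ = 0.3 and N = 5 are illustrative values):

```python
import numpy as np

N, rho = 5, 0.3
C = (1 - rho) * np.eye(N) + rho * np.ones((N, N))  # equicorrelation matrix

w = np.linalg.eigvalsh(C)                 # eigenvalues in ascending order
print(round(w[-1], 2))   # 2.2 = 1 + rho*(N-1): the O(N) singlet
print(round(w[0], 2))    # 0.7 = 1 - rho: (N-1)-fold degenerate
```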
68. Model 4: semi-empirical
- Take very long time series (length T) for many assets (N).
- Choose N′ < N of the time series randomly and derive C⁰ from these data. Then generate time series of length T′ << T from C⁰.
- The error due to T′ is much larger than that due to T.
69. Why do we use simulated data?
- In order to compare the sensitivity of risk measures to noise (i.e. to the lack of sufficient information), we had better get rid of other sources of uncertainty, such as non-stationarity. This can be achieved by using artificial data, where we have total control over the underlying stochastic process.