1
Improving accuracy in inefficient firm level
forecasts: with lessons for macro-forecasters
  • Robert Fildes, Lancaster University Centre for
    Forecasting
  • Paul Goodwin, University of Bath
  • Supported by Kostas Nikolopoulos and
    Wing Yee Lee, University of Bath

An EPSRC Research Project: a collaboration between
the Lancaster Centre for Forecasting and 10
companies, including McBride, Interbrew, Heinz and
Cow & Gate
2
Outline
  • How and why forecasting is carried out at the
    company level
  • Similarities and differences compared to
    macroeconomic forecasting
  • Analysis of company level forecasts
  • Explaining the excessive use of judgmental
    interventions
  • Potential lessons for macroeconomic forecasting

3
  • Focus on short- and medium-term demand
    forecasting in supply-chain companies
  • Forecasts in these companies are used in
    decisions relating to
  • - logistics,
  • - human resource planning,
  • - stock control,
  • - purchasing,
  • - cash flow management.

Service vs. inventory-investment trade-off curves.
The wrong product in the wrong place at the wrong
time: forecasters are motivated to improve accuracy.
4
Information affecting the supply chain
The full model: Orders = f(past orders, sales,
forecast sales, promotions, events)
The basic model: Orders = f(past orders) +
judgemental estimates of promotions
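A minimal sketch of these two model forms (illustrative only: the file
name and column names such as orders_lag1 and promo are assumptions,
not the project's actual data):

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per SKU-period; file and columns are hypothetical
    df = pd.read_csv("sku_history.csv")

    # Full model: orders driven by past orders, sales, forecast sales,
    # promotions and events
    full = smf.ols("orders ~ orders_lag1 + sales_lag1 + forecast_sales"
                   " + promo + event", data=df).fit()

    # Basic model: past orders only; promotion effects are then layered
    # on top of the statistical forecast by judgement
    basic = smf.ols("orders ~ orders_lag1", data=df).fit()
    print(full.summary())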
5
A complete statistical model? Is it possible?
Marketing factors
  • The problems
  • Too complex
  • Incomplete data on many drivers
  • Unique events
  • No available statistical expertise
  • Management understanding and acceptance
  • Belief that managerial expertise is best
  • Cost of cleaning past data to remove unusual
    events

The practical solution: capture the unusual
complexity with managerial judgement
6
Complementary nature of statistical forecasts and
management judgment
Organisationally based forecasting combines
statistical analysis with managerial judgement
  • humans are adaptable and can take into account
    one-off events, but they are inconsistent and
    suffer from cognitive biases
  • statistical methods are rigid, but consistent,
    and can take into account large volumes of
    information

7
How companies make their forecasts
  • 1. A package embodying simple, robust time
    series methods is used (e.g. simple exponential
    smoothing, Holt-Winters); see the minimal sketch
    after this list
  • 2. The statistical forecast can then be judgmentally
    adjusted, usually at a meeting,
  • ostensibly for special events such as
    promotion campaigns
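A minimal sketch of this workflow (illustrative only: the data file,
the column names and the size of the adjustment are assumptions, not
any of the companies' actual systems):

    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Monthly demand history for one SKU (hypothetical file and columns)
    y = pd.read_csv("demand.csv", index_col="month", parse_dates=True)["units"]

    # Holt-Winters: additive trend and seasonality for monthly data
    fitted = ExponentialSmoothing(y, trend="add", seasonal="add",
                                  seasonal_periods=12).fit()
    system_forecast = fitted.forecast(3)   # the "system" forecast

    # Judgmental adjustment agreed at the forecast meeting, e.g.
    # +500 units next month for a known promotion
    adjustment = pd.Series([500, 0, 0], index=system_forecast.index)
    final_forecast = system_forecast + adjustment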

8
Differences between macroeconomic and company
forecasting
  • Very large numbers of series need to be forecast
    regularly in companies
  • Macro forecasters are usually statistically
    trained, company forecasters are not
  • Unlike macro-economic forecasting, there are
    usually no competing forecasts of the same
    variable within a company
  • Macro forecast variables are at a higher level
    of aggregation, but lower frequency
  • Macro forecasters employ explanatory models,
    company forecasters tend to use univariate methods

9
Similarities with macroeconomic forecasting
  • Process of forecasting similar
  • Judgment used in model selection and fitting and
    in subsequent adjustment of resulting forecast
  • Difficult to tease out these separate effects of
    judgment
  • Starting conditions often not known for certain

10
Company evidence: data from 4 UK-based companies
(the EPSRC Research Project)
  • 578 SKUs, 3668 months
  • Company A: major UK manufacturer of laundry,
    household cleaning and personal care products
  • - 234 SKUs x 9 months → 908 triplets
  • Company B: major international pharmaceutical
    manufacturer
  • - 134 SKUs x 24 months → 2023 triplets
  • Company C: major international canned food
    manufacturer
  • - 210 SKUs x 6 months → 908 triplets
  • 548 SKUs, 2039 weeks
  • Company D: major UK retailer (over 26000 SKUs)
  • - 548 SKUs x 52 weeks → 2039 triplets

11
Research Issues and Hypotheses (practical and
theoretical)
  • Does adjustment improve accuracy?
  • Under what circumstances?
  • Are the organisational forecasts rational?
  • Do inefficiencies arise from organisational
    processes?
  • Can the forecasts be improved?
  • By model-based improvements
  • By software improvements
  • By process improvements

12
Evidence on frequency of judgmental adjustments
from 4 of our companies
  • A significant percentage of forecasts are judgmentally
    adjusted

  Company   Data      N       N adjusted   % adjusted
  A         Monthly   3264    2034         62
  B         Monthly   873     744          85
  C         Monthly   1416    942          67
  D1        Weekly    12789   1851         14
  D2        Weekly    44899   4392         10
  Total               63241   9963         16
13
Results: MAPE by Adjustment Size
[Chart: MAPE of the final forecast vs. the system forecast,
all monthly data, for small and large adjustments]
14
Results
[Chart: Company D - final vs. system forecast error by
adjustment size]
15
For the first two companies, larger adjustments tend
to lead to the greatest gains in accuracy

  Statistical forecast (SFC) error and improvement from
  adjustment, by relative size of adjustment (RelAdj):

            Small          Medium         Large           Very large
            (RelAdj<10%)   (10-50%)       (50-150%)       (>150%)
  Company   SFC   Improv.  SFC   Improv.  SFC    Improv.  SFC     Improv.
  A         11.8  -0.2     20.6   5.5     44.0    13.4    60.0     11.8
  B          1     0.3     19.8   2.3     79.4    44.4    71.4     54.9
  C         12.7  -0.1     22.0   5.7     36.6    -5.5    51.3    -30.2
  D1        12.5   0       19.0  -7.9     18.3   -48.3    43.6   -144.4
  D2        27.9   1.0     26.9  -1.2     25.3   -36.8    No obs.  No obs.

Do not make small adjustments!
SFC = statistical forecast
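A minimal computation sketch for this kind of table (illustrative only:
the file and the column names 'actual', 'system' and 'final' are
assumptions; the study's exact error measures may differ):

    import numpy as np
    import pandas as pd

    df = pd.read_csv("triplets.csv")   # hypothetical (actual, system, final) triplets

    ape_sys   = 100 * np.abs(df["actual"] - df["system"]) / np.abs(df["actual"])
    ape_final = 100 * np.abs(df["actual"] - df["final"])  / np.abs(df["actual"])

    # Relative adjustment size as a percentage of the system forecast
    rel_adj = 100 * np.abs(df["final"] - df["system"]) / np.abs(df["system"])
    size = pd.cut(rel_adj, bins=[0, 10, 50, 150, np.inf],
                  labels=["Small", "Medium", "Large", "Very large"])

    summary = pd.DataFrame({"SFC MAPE": ape_sys,
                            "Improvement": ape_sys - ape_final,
                            "Size": size}).groupby("Size").mean()
    print(summary)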
16
The Types of Error (illustrated in the sketch below)
  • Adjustments in the wrong direction
  • With expected positive information
  • Adjust up, but actual is below system: Final >
    System > Actual
  • With expected negative information
  • Adjust down, but actual is above system: Final <
    System < Actual
  • Adjustment in the right direction
  • Adjustment is not strong enough
  • With positive info, System < Final < Actual
  • With negative info, Actual < Final < System
  • Adjustment is too strong
  • With positive info, System < Actual < Final
  • With negative info, Final < Actual < System
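A minimal classification sketch for these error types (illustrative
only; the study's exact operational definitions may differ):

    def classify_adjustment(system: float, final: float, actual: float) -> str:
        """Classify a judgmental adjustment of the system forecast."""
        if final == system:
            return "no adjustment"
        adjusted_up = final > system
        # Wrong direction: the adjustment moved the forecast away from the actual
        if (adjusted_up and actual < system) or (not adjusted_up and actual > system):
            return "wrong direction"
        # Right direction but not strong enough: the actual lies beyond the final
        if (adjusted_up and actual > final) or (not adjusted_up and actual < final):
            return "undershoot"
        # Right direction but too strong: the final overshoots the actual
        return "overshoot"

    # Positive info, adjustment too strong: System < Actual < Final
    print(classify_adjustment(system=100, final=150, actual=120))  # -> overshoot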

17
Adjustments made in the wrong direction reduce
accuracy

  Statistical forecast (SFC) error and % point improvement
  from adjustment, by type of error:

            Wrong direction       Overshoot             Undershoot
  Company   SFC    % pt improved  SFC    % pt improved  SFC    % pt improved
  A         12.90   -9.60         10.10   -2.00         27.00   14.50
  B         11.90  -13.10         11.80   -1.30         30.00   16.80
  C         13.50  -22.40         13.60   -2.10         22.70   10.00
  D1        15.70  -34.10         12.10  -15.10         26.10   -0.30
  D2        32.90  -36.40         13.00   -7.60         36.60   16.30

SFC = statistical forecast
18
Bias
[Charts: distributions of forecasts illustrating bias]
19
Modelling the forecasts: statistical issues
For unbiasedness (a sketch of a standard test follows)
  • Errors are heteroscedastic, with outliers
  • Can firms be pooled?
  • Solutions
  • Errors normalised by the standard deviation of
    the actuals and analysed by size of adjustment
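The test equation itself is not reproduced in the transcript; a
standard unbiasedness test (a sketch only, not necessarily the
authors' exact specification) regresses actuals on forecasts,

    A_t = \alpha + \beta F_t + \varepsilon_t,   with H_0: \alpha = 0, \beta = 1,

where A_t is the actual and F_t the (system or final) forecast;
rejection of H_0 indicates mean and/or regression bias.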

20
Bias of final forecasts

  Company   N (no. dropped)   a, d          b coefficient   Std. error
  A         1810 (7)          Significant   -0.12           0.35
  B         326 (0)           Significant   -0.13           0.52
  D1        887 (1)           Significant   -0.35           0.51
  D2        2630 (0)          Significant   -0.47           0.63
  C         535 (5)           Significant   -0.15           0.37

There is also evidence that for companies A, B and C,
bias is reduced through judgmental adjustment
21
Can we model the error to ensure an efficient
forecast? (improved forecast error)
The models (sketched below)
Adjust is a proxy for market intelligence
Efficiency: all available information is being
used effectively
The last is the 50/50 model of Blattberg & Hoch
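The model equations are not reproduced in the transcript; a plausible
sketch of this family of models (illustrative only, not necessarily
the authors' exact specifications) is

    A_t = \alpha + \beta F^{sys}_t + \gamma Adjust_t + \varepsilon_t,
    where Adjust_t = F^{final}_t - F^{sys}_t,

with efficiency requiring that the adjustment (the market-intelligence
proxy) carries no exploitable information beyond the weights already
applied, while the Blattberg & Hoch 50/50 model simply averages the
statistical and the judgmentally adjusted forecast with equal weights:

    F^{50/50}_t = 0.5 F^{sys}_t + 0.5 F^{final}_t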
22
The Effects of Information
  • Define Positive (negative) information as when
    the system forecast is adjusted upward
    (downward)
  • Questions
  • Are there differences in accuracy?
  • Are there differences in efficiency?
  • Yes: affecting both mean bias and regression bias
  • Can these be capitalised on to design rules to
    improve accuracy?

23
Comparative results: major gains with some
companies

  MAPE (Median APE)

                      Final forecast   System forecast   Optimal combination
  A   Positive info   34.5 (12.5)      27.5 (14.0)       28.1 (12.7)
      Negative info   25.8 (14.0)      45.7 (19.8)       27.2 (13.7)
  B   Positive info   71.6 (18.2)      55.8 (17.1)       49.0 (14.5)
      Negative info   27.4 (10.9)      69.5 (15.6)       19.4 (11.2)
  D1  Positive info   68.9 (35.8)      35.5 (17.8)       28.4 (14.2)
      Negative info   48.2 (29.7)      58.7 (20.0)       34.9 (18.4)
  D2  Positive info   72.1 (50.0)      40.3 (24.6)       21.9 (16.1)
      Negative info   30.2 (19.8)      43.5 (30.3)       15.0 (10.9)
  C   Positive info   28.8 (15.9)      25.2 (19.2)       23.1 (13.8)
      Negative info   28.8 (12.7)      39.2 (16.9)       26.4 (13.0)
24
Weighting the Information Sources
25
Conclusions from the Empirical Analysis
  • a) 1 out of 3 adjustments is in the wrong direction!
  • They are very costly
  • b) Inefficiencies persist
  • effects are exaggerated for positive info
  • Double counting? Wishful thinking?
  • c) Small adjustments (<10-20%) should not be
    made!
  • d) Final forecasts are (usually) more accurate
    than system forecasts
  • e) Reweighting of the system forecast and the market
    adjustment leads to improved accuracy

Is it the data, the drivers or the forecasting
process that makes the difference? Can we, by
understanding the process, develop an FSS (forecasting
support system) that supports improved accuracy?
26
  • The adjusted forecasts are biased and
    inefficient
  • The process through which the forecasts are
    adjusted and the market intelligence estimated
    leads to inaccuracies
  • Hence there may be gains in improving the
    quality of judgmental inputs to forecasts
  • But most forecasting software does not provide
    facilities to support judgment.
  • Instead, packages promote their statistical
    power!
  • Aim: to understand the process of adjustment,
    and to influence adjustments through better FSS
    design
27
Why do managers adjust forecasts excessively?
  • The advice perspective
  • We can regard the computer system's forecasts
    as a form of advice
  • Our willingness to accept advice from a machine
    can vary

28
Theories of why we accept advice
  • Yaniv & Kleinberger (2000)
  • People trust their own forecasts more because
    they have greater access to the rationale for
    them
  • - No rationale or explanation is provided in
    most forecasting software packages employing
    univariate methods

29
Theories of why we accept advice
  • Yaniv & Kleinberger (2000)
  • The weight attached to advice depends on the
    reputation of the adviser
  • - but negative information about an adviser is
    perceived to be more diagnostic than positive
  • - In forecasting, noise and special events may
    contribute to a negative perception
  • Kaplan et al. (2001) found people were more
    likely to rely on a system when its accuracy was
    not disclosed.

30
Why do managers adjust forecasts excessively?
  • Misconceptions of randomness
  • We tend to see systematic patterns in what are
    really random movements in graphs.

31
Would you want to adjust the statistical forecast
for month 21 (below)?
32
  • The ability to manipulate the system and carry out
    what-if analyses leads to an illusion of
    control (Davis et al., 1994)
  • - manipulation involves effort and is perceived
    to involve skill
  • - people become overconfident in their
    judgments

33
Manipulations
  • Change the responsiveness of the forecasts
  • Change the forecasting method
  • Change the amount of past data used
34
Result of overconfidence
  • Over-weighting of judgmental forecasts relative
    to statistical forecasts
  • - even when evidence shows judgment is less
    accurate.

In one study people continued to rely on their
judgment despite receiving messages from the
computer like "Please be aware that you are
18.1% LESS ACCURATE than the statistical forecast
provided to you."
35
Why do managers adjust forecasts excessively?
  • 4. Confusion between forecasts and
  • - targets
  • - decisions
  • - politically acceptable numbers

A forecast: "I think demand will be 200 units."
A decision: "I think we ought to produce 250 units,
in case demand is unexpectedly high."
36
Why do managers adjust forecasts excessively?
  • 5. Need for ownership of the forecasts
  • Demonstration that you've contributed to the
    forecasting process and earned your salary
  • Demonstration of your marketing expertise to
    your colleagues at meetings

37
  • 6. Base-rate information vs. case-specific
    information
  • Kahneman and Tversky
  • Group A: "This description has been drawn
    randomly from a folder containing descriptions of
    30 engineers and 70 lawyers"
  • Group B: "This description has been drawn
    randomly from a folder containing descriptions of
    70 engineers and 30 lawyers"
  • - What is the probability that Tom W is an
    engineer?

"Tom W. is of high intelligence, although lacking
in true creativity. He has a need for order and
clarity, ... in which every detail finds its
appropriate place. His writing is rather dull and
mechanical, enlivened by somewhat corny puns.
He has a strong drive for competence. He seems to
feel little sympathy for other people ...
Self-centered, he nonetheless has a deep moral
sense."
38
  • People over-emphasise case-specific information
    at the expense of base-rate information
  • Our company forecasters focused on each demand
    figure on the graph as being a special case.
  • - The base-rate information represented by the
    statistical forecasts was thus under-weighted

39
Why do managers adjust forecasts excessively?
  • 7. Recency bias
  • - only the recent past was thought to be relevant
  • Statistical methods were fitted to very short
    sets of data (often 6 months to 2 years)
  • Statistical methods were not given much of a chance
  • - reinforcing the forecasters' confidence in
    the relative accuracy of judgment

40
Potential lessons for macroeconomic forecasting
  • 1. Lessons relating to when to intervene
  • 2. Lessons relating to improving judgmental
    estimation

41
Lessons relating to when to intervene
  • Beware misconceptions of randomness, underweighting
    of base-rate information and recency biases
  • - require recording of reasons for judgmental
    intervention
  • Beware overconfidence in judgment
  • - restrict what-if analyses?
  • Beware numbers masquerading as forecasts
  • Build explanations into software
  • so that its advice carries greater weight

42
Lessons relating to improving judgmental
estimation - supporting the use of analogies
  • Involves identifying similar products, or similar
    promotion campaigns, and using these as a basis
    for the judgmental forecast
  • E.g. "We have a 2-week B.O.G.O.F. (buy one, get
    one free) promotion in the North region starting
    on 5 June 2006"
  • The database identifies the most similar cases

43
Supporting the use of analogies (a sketch follows this list)
  • Support for different stages of the judgmental
    process
  • Memory support - so the forecaster avoids the need
    to recall past cases
  • Similarity support - helps the forecaster to identify
    the most similar past cases
  • Adaptation support - helps the forecaster to adapt
    from similar past cases to the specifics of the
    current case
  • E.g. - The most similar past BOGOF promotion may
    have lasted for 3 weeks,
  • - the forthcoming BOGOF promotion
    lasts for only 2 weeks.
  • - The database will estimate the effect of
    2 weeks rather than 3,
    from all past promotions.
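A minimal sketch of this analogy-based support (illustrative only: the
table, the column names and the similarity rule are assumptions, not
the project's actual FSS design):

    import pandas as pd

    # Hypothetical database of past promotions
    past = pd.DataFrame({
        "promo_type":    ["BOGOF", "BOGOF", "3for2", "BOGOF"],
        "region":        ["North", "South", "North", "North"],
        "weeks":         [3, 2, 2, 4],
        "uplift_per_wk": [1200, 900, 400, 1100],   # extra units sold per week
    })

    new_promo = {"promo_type": "BOGOF", "region": "North", "weeks": 2}

    # Similarity support: retrieve the most similar past cases (simple
    # exact matching here; a real system could use richer features)
    similar = past[(past.promo_type == new_promo["promo_type"]) &
                   (past.region == new_promo["region"])]

    # Adaptation support: rescale the analogues' weekly effect to the
    # 2-week duration of the forthcoming promotion
    estimated_uplift = similar["uplift_per_wk"].mean() * new_promo["weeks"]
    print(f"Estimated promotion uplift: {estimated_uplift:.0f} units")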

44
The interface
[Screenshot: the FSS interface, with labelled areas for
memory support, similarity judgment support and
adaptation judgment support]
45
Lessons relating to improving judgmental
estimation
  • Using profiles to estimate the effects of special
    events over time
  • E.g. the timing of the effects of an unexpected
    financial crisis in SE Asia

46
[Image-only slide: no transcript]
47
Lessons relating to improving judgmental
estimation
  • Using decomposition to reduce the demands of
    holistic estimation

With some obvious macro analogies
48
Macroeconomic forecasting errors
Can the lessons learnt in micro forecasting help?
  • Evidence of forecast failure (Fildes & Stekler,
    2002)
  • Cyclical turns
  • They are missed
  • Conservatism
  • underestimating growth in periods of
    expansion, overestimating it in periods of
    contraction
  • Inflation underestimated when accelerating,
    overestimated when slowing
  • Rationality
  • Observed inefficiencies
  • Failure to improve over time

49
Explaining the errors (Stekler, 2007)
  • Model-derived errors
  • Structural change
  • Model and data interactions
  • Incompatibility between model and data vintages
  • Forecasters' loss functions
  • Institutional effects
  • Judgemental effects
  • Bias
  • Anchors

50
Our focus: judgemental adjustments. In macro
forecasting, why are they made? (Donihue, Journal
of Forecasting, 1993)
  • Incompleteness in the model
  • - dotcom
  • Structural change
  • - a new exchange rate regime
  • A missing variable in the historic data base
  • - the SE Asia financial crisis
  • Data inadequacies
  • - current data or revised data
  • - recent level errors

51
Common features: micro forecasting and macro
forecasting
  • Biased and inconsistent forecasts
  • Reputational effects
  • Herding (Smith, 2002: most actuals outside the
    range of forecasts)
  • Political pressures (?)
  • Limited historic record keeping
  • No monitoring of reasons for judgement
  • Final forecast error monitoring
  • No learning
  • But
  • Judgemental adjustments add value

52
Conclusions
  • Judgment is employed in both macroeconomic and
    company forecasting
  • This leads to improved accuracy in both domains
  • but still much scope for improvement
  • Macro forecasters should
  • Monitor judgemental forecast accuracy (perhaps
    they do?)
  • Use a notes system to justify adjustments
  • Use support systems to understand the
    effectiveness of their interventions