1
Impact Assessment
  • OU Social Work
  • Program Evaluation
  • February 23, 2004

2
Impact Assessment
  • commonly known as outcomes evaluation
  • impact assessment examines . . .
  • intended effects
  • the net effect of the program intervention, and
  • how a program affects a social problem

3
Key Concepts
  • All impact assessments are . . .
  • comparative
  • the assessment compares two or more groups of
    intervention targets
  • those who receive the intervention and those who
    do not

4
Impact Assessment Research Design
  • Experimental design
  • intervention and control groups, with random
    assignment (see the random-assignment sketch
    after this slide)
  • Quasi-Experimental design
  • intervention and comparison groups, without
    random assignment
  • Observational design
  • one point of data collection after the
    intervention
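
A minimal sketch of random assignment for the experimental design above, using Python's standard library; the participant IDs and the even split are hypothetical.

    import random

    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]  # hypothetical IDs

    random.seed(42)                       # fixed seed so the assignment can be reproduced
    random.shuffle(participants)          # every ordering is equally likely

    half = len(participants) // 2
    intervention = participants[:half]    # receive the program
    control = participants[half:]         # do not receive the program

    print("intervention:", intervention)
    print("control:", control)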

5
Prerequisites for Impact Assessment
  • 1. program objectives must be articulated and
    measurable
  • 2. the intervention must be (adequately)
    implemented

6
Linking Interventions to Outcomes
  • the essential problem is establishing that the
    intervention caused the desired outcome
  • we are attempting to show
  • causality, expressed as a probability statement
  • A is a cause of B, given circumstances C

7
Confounding Factors
  • Always assess and acknowledge covariates
  • Covariates are also known as
  • confounding factors (variables)
  • nuisance variables
  • Covariates offer alternative explanations for the
    effects of the intervention

8
Perfect vs. Good Enough
  • it is usually impossible to conduct a perfect or
    very-best impact assessment
  • limitations include . . .
  • randomized experiments may be too expensive or
    otherwise unethical for human subjects
  • there may be too little money or not enough time
  • some interventions are more worthy of involved
    assessments than others

9
  • the choice of design is always a trade-off
  • use a GOOD ENOUGH rule for formulating research
    designs
  • choose a design by considering . . .
  • the potential importance of the findings
  • the practicality and feasibility of the design
  • the probability that the design will produce
    credible results

10
Gross vs. Net Outcomes
  • NET EFFECTS
  • changes resulting from the intervention itself,
    without the influence of extraneous factors
  • GROSS EFFECTS
  • the sum of . . .
  • the effects of the intervention (net effect)
  • extraneous confounding factors, and
  • design effects (a hypothetical numeric example
    follows this slide)
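
A purely hypothetical arithmetic illustration of this decomposition in Python; all of the numbers are invented for the example.

    # observed (gross) change in the outcome measure (hypothetical)
    gross_effect = 10.0

    # hypothetical portions attributable to factors other than the program
    extraneous_effects = 3.0   # e.g., maturation, secular drift, interfering events
    design_effects = 2.0       # e.g., testing or Hawthorne effects

    # the net effect is the gross effect minus everything not caused by the intervention
    net_effect = gross_effect - extraneous_effects - design_effects
    print(net_effect)          # 5.0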

11
Extraneous Confounding Factors
  • Uncontrolled Selection
  • selection bias in the sample -- where some
    members of the target group are more likely to be
    selected than others
  • extremely difficult to assess, even with matching
  • watch for effects of being a volunteer
  • watch for drop-out patterns

12
Endogenous Change
  • interventions occur over the course of time and
    events occur in an ordinary or natural sequence
  • these changes influence outcomes and are called
    endogenous changes
  • may have either positive or negative influences

13
Secular Drift
  • long-term changes in community or region
  • also known as history
  • may have either positive or negative effects

14
Interfering Events
  • the effect of a short-term event that enhances or
    masks net program effects
  • examples include a natural disaster, a labor
    strike, or a local political event

15
Maturation
  • identifies changes in the target population that
    result from naturally growing older
  • examples include gaining abilities while growing
    up, losing abilities while aging, and failing to
    account for lifestyle changes (age grading)

16
Design Effects
  • result from actually conducting the evaluation
  • measuring an event always changes that event
  • design effects are always present and threaten
    internal validity

17
Stochastic Effects
  • stochastic effects are effects due to chance
  • greater statistical power reduces the role of
    chance
  • increase power by increasing the sample size (see
    the power-analysis sketch after this slide)
  • consider statistical inference
  • are the results statistically significant?
  • Type I and Type II errors
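
As a rough sketch of the power/sample-size relationship, the snippet below uses the TTestIndPower helper from statsmodels for a two-group comparison; the effect size, alpha, and power targets are assumed values, not figures from the presentation.

    from statsmodels.stats.power import TTestIndPower

    power_analysis = TTestIndPower()

    # sample size per group needed to detect a medium effect (d = 0.5)
    # with a 5% Type I error rate and 80% power (assumed targets)
    n_per_group = power_analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))   # roughly 64 per group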

18
Measurement Reliability and Validity
  • reliability is the ability to get the same
    measurement repeatedly
  • check instruments for reliability (a reliability
    sketch follows this slide)
  • validity asks the question, are we measuring what
    we want to or should measure?
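
One common way to check internal-consistency reliability is Cronbach's alpha; below is a minimal NumPy sketch with a made-up score matrix (respondents by scale items).

    import numpy as np

    # rows = respondents, columns = items on a scale (hypothetical scores)
    scores = np.array([
        [3, 4, 3, 5],
        [2, 2, 3, 2],
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 4, 5, 5],
    ])

    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(round(alpha, 2))      # about 0.9 for these made-up scores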

19
Outcome Measures
  • plan to invest considerable effort in developing
    outcome measures
  • poor outcome measures can completely undermine
    assessment of impact
  • outcome measures must be both reliable and valid

20
Proxy Measures
  • outcomes often cannot be measured directly, so
    use proxy measures instead
  • may be more cost effective
  • may reflect intermediate steps
  • use multiple proxy measures

21
Hawthorne Effect
  • the act of measuring changes what is being
    measured
  • clients (research subjects) will try to please
    the evaluator and respond accordingly
  • often likened to a placebo effect
  • the evaluation process cannot be separated from
    its effect

22
Missing Data
  • all studies have missing data
  • missing data are not random
  • assess and report the pattern of missing data
    (a minimal sketch follows this slide)
  • consider potential missing data when developing
    instruments
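
A minimal pandas sketch for reporting the pattern of missing data; the DataFrame and variable names are hypothetical.

    import numpy as np
    import pandas as pd

    # hypothetical evaluation data with some missing values
    df = pd.DataFrame({
        "pretest":  [12, 15, np.nan, 10, 14],
        "posttest": [16, np.nan, np.nan, 13, 18],
        "attended": [1, 1, 0, 1, 1],
    })

    print(df.isna().sum())    # count of missing values per variable
    print(df.isna().mean())   # share of missing values per variable

    # crude pattern check: do cases missing the posttest differ on attendance?
    print(df.groupby(df["posttest"].isna())["attended"].mean())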

23
Sampling Effects
  • select a sample that is an unbiased representation
    of the target population
  • 1. identify the population served by the program
  • 2. develop a selection strategy that gives each
    member an equal probability of being selected
    (a sampling sketch follows this slide)
  • 3. get the sample that you actually intended to
    get
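
A minimal sketch of step 2, drawing an equal-probability (simple random) sample with Python's standard library; the client list and sample size are hypothetical.

    import random

    # hypothetical sampling frame: every client served by the program
    population = [f"client_{i:03d}" for i in range(1, 201)]

    random.seed(7)                            # reproducible draw
    sample = random.sample(population, 40)    # each client has the same chance of selection
    print(len(sample), sample[:5])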

24
Control or Comparison Groups
  • five types of comparison groups
  • 1. randomized controls
  • 2. matched constructed controls
  • 3. statistically equated controls
  • 4. reflexive controls
  • 5. generic controls

25
Full or Partial-Coverage Programs
  • when a program reaches 80% or more of the intended
    target population, it is a full-coverage program
  • when very few targets are left unreached, those
    not reached are likely to differ greatly from
    participants, making them a poor comparison group

26
Evaluation Research Designs
27
Judgmental Approaches
  • the advantage is reduced cost and time
  • the disadvantage is the limited value of the
    findings
  • types include . . .
  • expert opinion (connoisseurial)
  • administrator impact assessment
  • participants' judgment (client satisfaction)

28
Qualitative Assessment of Impact
  • Can qualitative methods be used for impact
    assessment?
  • Yes . . . but carefully.

29
Issues in Evaluation Research
  • Reproducibility
  • the ability to get similar findings in similar
    evaluations
  • Generalizability
  • the ability to extend findings beyond the sample
  • Pooling evaluations (meta-analysis)
  • combining multiple existing studies into one
    analysis (see the pooling sketch below)
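
As a rough sketch of pooling, the snippet below applies inverse-variance (fixed-effect) weighting to a few invented study estimates; the effect sizes and standard errors are hypothetical.

    # hypothetical effect estimates and standard errors from three evaluations
    effects = [0.30, 0.45, 0.20]
    std_errors = [0.10, 0.15, 0.08]

    # inverse-variance (fixed-effect) weights: more precise studies count more
    weights = [1 / se ** 2 for se in std_errors]

    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(round(pooled, 3), round(pooled_se, 3))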
