Evaluation Research
1
Chapter 12
  • Evaluation Research

2
Pure versus Applied research
  • Pure research: research conducted solely for the
    purpose of advancing social scientific knowledge;
    also called "basic" research.
  • Example: National Science Foundation. Research
    examples: SBE Science Nuggets

3
  • Applied research: research undertaken for the
    purpose of influencing some phenomenon in the
    "real" world; research conducted for the purpose
    of putting the results into practice.
  • Examples of Applied Research from the Institute
    for Research on Poverty (Click on "Research" for
    examples)
  • Examples from the Institute for Women's Policy
    Research (Click on "Our Research" for examples)

4
Evaluation research
  • Definition in Babbie - Research that attempts to
    determine whether a social intervention has
    produced its intended result.
  • Social intervention - an action taken within a
    social context for the purpose of producing some
    intended result.
  • This is really a definition of a specific kind of
    evaluation research - outcome evaluation research
    - which makes the text material somewhat limited

5
Two basic types of evaluation research
  • 1. Formative evaluation (a.k.a. Process
    Evaluation) - provides information for
    intervention improvement, modification,
    documentation and management in the process of
    intervention implementation.
  • Goal is to strengthen the intervention by
    providing feedback on its implementation and
    progress.

6
  • 2. Outcome evaluation (a.k.a. Impact or Summative
    Evaluation) - measures the extent to which the
    intervention's stated goals and objectives were
    achieved and determines any unintended
    consequences of the intervention and whether
    these were positive or negative.
  • Important for making major decisions about
    intervention continuation, expansion, reduction,
    and funding.

7
Types of formative evaluation
  • Formative evaluation projects differ according to
    their goals:
  • a. Needs assessment
  • b. Intervention monitoring
  • c. Context evaluation

8
a. Needs assessment
  • Determines whether there are demands for new
    services or gaps in already established services
    that need to be met.
  • Importance of needs assessment for establishing
    goals, objectives, intervention structure and
    activities, and resource requirements.

9
b. Intervention monitoring
  • Tracks the process of intervention delivery
    (e.g., what services are delivered, how many
    clients are served, what are client
    characteristics?)
  • Example from the VCU SERL: Ryan White Title II
    Data Reporting

10
c. Context evaluation
  • Provides information about the setting or
    environment in which the intervention is
    implemented. Assesses how certain settings
    contribute to or impede intervention success.
  • Important considerations: specific needs of
    individuals targeted by the intervention; social,
    political, economic, geographic, and/or cultural
    factors.
  • VCU SERL example - TANF ("Temporary Assistance
    for Needy Families") evaluation of programs
    designed to help recipients get and keep jobs.
    Effects of economic and bureaucratic contexts.
  • (Context evaluation is also important in outcome
    evaluations - see below.)

11
Back to two types of evaluation research
  • 1. Formative (process) evaluation - provides
    information for intervention improvement,
    modification, documentation and management in the
    process of intervention implementation.
  • 2. Outcome (impact/summative) evaluation -
    measures the extent to which the intervention's
    stated goals and objectives were achieved and
    determines any unintended consequences of the
    intervention and whether these were positive or
    negative.

12
Outcome evaluation
  • Important for making major decisions about
    intervention continuation, expansion, reduction,
    and funding.
  • Babbie discusses only outcome evaluation

13
Examples of outcome evaluation research
  • From an evaluation researcher at the State of
    Florida Juvenile Justice Department, a look at
    Juvenile Justice Evaluation Research.
  • Example from Virginia government: the Joint
    Legislative Audit and Review Commission
  • (1) Monitors whether state agencies and programs
    are in compliance with legislative intent
    concerning appropriations and objectives, and
  • (2) Determines whether state agencies and
    programs meet criteria of economy, efficiency,
    and effectiveness.

14
Measurement issues in outcome evaluation
  • 1. Specification of the outcome variable is
    critical in outcome evaluation.
  • 2. The characteristics of the intervention itself
    must be measured.
  • 3. The experimental context should be considered.

15
1. Specification of outcome variable
  • Examples of measurement issues:
  • In a program designed to reduce illegal drug use
    among teenagers:
  • Exactly which drug(s) should be included?
  • How should drug use be measured? Over what time
    period? What quantity? With what frequency of
    use?
  • How much of a reduction would the program have to
    accomplish to be considered a success?

16
Measuring success
  • Those responsible for the intervention may commit
    themselves in advance to a measurable outcome
    that will be regarded as an indication of
    success.
  • The intervention may be amenable to cost/benefit
    analysis (how much does the program cost in
    relation to what it returns in benefits?).
    Example from VCU SERL: ADAP (AIDS Drug
    Assistance Program). (A small worked example of a
    benefit/cost ratio follows this list.)
  • Evaluators can examine the outcome performance of
    competing programs and compare them.
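As a minimal sketch with entirely hypothetical figures (not drawn from ADAP or any SERL report), the cost/benefit question comes down to comparing the estimated dollar value of benefits with what the program costs over the same period:

    # Hypothetical figures for illustration only.
    program_cost = 250_000.0        # total program cost for the period
    estimated_benefits = 410_000.0  # estimated dollar value of benefits

    benefit_cost_ratio = estimated_benefits / program_cost
    net_benefit = estimated_benefits - program_cost
    print(f"benefit/cost ratio: {benefit_cost_ratio:.2f}")  # above 1 means benefits exceed costs
    print(f"net benefit: ${net_benefit:,.0f}")

A ratio above 1 (here 1.64) would count in the program's favor, though assigning a dollar value to benefits is itself a measurement decision.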

17
2. Measuring the characteristics of the
intervention
  • Examples:
  • program delivery
  • dates
  • content
  • personnel
  • number of programs
  • extent or quality of program participation
  • comparison of program delivery/participation to
    program goals

18
3. Considering the experimental context
  • What is happening outside the intervention that
    could affect its effectiveness?
  • Examples:
  • economy
  • political situation
  • cultural context
  • bureaucratic impediment/facilitation

19
Classical experimental design
The gold standard in determining the effect of an
experimental intervention/stimulus.
(R = random assignment of subjects from a pool; O =
observation of the dependent variable; X =
administration of the experimental stimulus)
Can this design be applied to the real world of
evaluation research?
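In this notation (using the R, O, and X symbols defined above), the classic pretest-posttest control group design can be sketched as:

    R   O1   X   O2      (experimental group)
    R   O3        O4      (control group)

Because both groups are randomly assigned and observed before and after the stimulus, a difference between O2 and O4 beyond the O1-O3 baseline is attributed to X.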
20
Types of quasi-experimental designs
  • 1. Time-series design: observation of events
    before, during, and after the intervention.
  • 2. Nonequivalent control group design: instead
    of a randomized control group, a control group as
    comparable as possible to the experimental group
    is selected and observed.
  • 3. Multiple time-series design: combination of
    1 and 2; a time-series design involving the
    observation of one or more comparable control
    groups.

21
1. Time-series design - observation of events
before, during, and after the intervention
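In the design notation used above, a simple interrupted time series is a run of observations broken by the intervention:

    O1  O2  O3  O4   X   O5  O6  O7  O8

A change in the level or trend of the observations after X is taken as evidence of an intervention effect.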
22
1. Time-series design considerations
  • Preferable to collect as many data points as
    possible before and after the intervention (an
    analysis sketch follows below).
  • The data collection instrument should remain
    unchanged.
  • Disadvantage: does not control for the
    possibility of the effect of variables other than
    the intervention (extraneous variables - factors
    other than the experimental intervention that
    occurred at the same time and affected the
    outcome variable).
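One rough way to summarize such a series (a minimal sketch assuming hypothetical monthly counts and a simple level-shift model; none of this comes from the chapter) is a segmented regression that estimates how much the level of the outcome changes at the intervention point:

    import numpy as np

    # Hypothetical monthly counts of the outcome variable:
    # 12 observations before and 12 after the intervention.
    before = [52, 49, 55, 53, 50, 54, 51, 56, 52, 53, 50, 55]
    after = [48, 45, 44, 46, 43, 42, 44, 41, 40, 42, 39, 41]

    y = np.array(before + after, dtype=float)
    t = np.arange(len(y), dtype=float)         # underlying time trend
    post = (t >= len(before)).astype(float)    # 1 for post-intervention points

    # Design matrix: intercept, pre-existing trend, and a level shift at X.
    X = np.column_stack([np.ones_like(t), t, post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated level shift at the intervention:", round(coef[2], 2))

Even a clear estimated shift does not remove the extraneous-variable problem noted above; something else occurring at the same time could produce the same change.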

23
2. Nonequivalent control group design: a control
group as comparable as possible to the
experimental group is selected and observed.
(C = attempt to establish comparability; O =
observation of the dependent variable; X =
administration of the experimental stimulus)
Disadvantage: unless the subjects are
randomized, we can't be confident that the
intervention is the only difference between the
two groups. The strength of the conclusions rests on
the level of comparability of the groups.
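Using the same notation, with C in place of R to mark matched rather than randomized groups, the design can be sketched as:

    C   O1   X   O2      (experimental group)
    C   O3        O4      (comparison group)

The comparison is only as strong as the match between the two groups at O1 and O3.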
24
3. Multiple time-series design: combination of
1 and 2; a time-series design involving the
observation of one or more comparable control
groups.
Preferable to 1 and 2, because it involves the
examination of two or more groups over time.
However, the weaknesses of each may affect this
design also.
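In the same notation, a multiple time-series design tracks the experimental and comparison groups across the full series of observations:

    O  O  O  O   X   O  O  O  O      (experimental group)
    O  O  O  O       O  O  O  O      (comparison group)

An effect is more credible when the experimental series shifts after X while the comparison series does not.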
25
Qualitative methods in evaluation research
  • SERL example: FIFI "Georgia HIV/AIDS
    Comprehensive Needs Assessment" (1998).
  • Survey of services provided by HIV/AIDS
    organizations and perceived needs for services
    (quantitative)
  • Analysis of HIV/AIDS surveillance data and other
    relevant existing databases (quantitative)
  • Focus groups: consumers of services and high-risk
    groups (qualitative)
  • Key informant interviews: views on service
    provision and needs among key stakeholders in the
    care system (qualitative)

26
Why are outcome evaluation research results often
ignored? 
  • 1. Results may not be presented in a way that
    non-researchers (e.g., the program
    administrators) can understand.
  • Example of an SERL report specifically designed
    to address this problem.
  • 2. Results may contradict deeply held beliefs
    (e.g., Nixon's pornography commission).
  • 3. Persons with vested interests may act to
    prevent implementation of the results.

27
Examples of ethical issues in evaluation research
  • Who should receive the intervention?
  • Is it ethical to deprive a control group of the
    intervention?
  • To what extent should the program administrators
    influence the research design?
  • Should an intervention be evaluated by persons
    connected to the organization?
  • Should the evaluator agree to a contract in which
    the results are not disseminated beyond the
    organization being evaluated?

28
  • Is it ethical to evaluate programs that have been
    developed without the participation of those who
    are affected by them?
  • Should you, as a professional evaluation
    researcher, participate in a project that does
    not meet your personal ethics and that you feel
    does not contribute to the greater good of
    society?
  • Web link for evaluation research ethics: the
    American Evaluation Association's "Guiding
    Principles for Evaluators."

29
Important direction in evaluation research
  • Participatory evaluation: evaluation that
    involves all stakeholders (persons with vested
    interests) in the design and implementation of
    the project and the process of putting results to
    use.
  • The American Evaluation Association has a
    Collaborative, Participatory, and Empowerment
    Evaluation topical interest group.

30
  • Developing collaborative roles between
    researchers and participants from the Annie E.
    Casey Foundation
  • Participatory research from the World Bank
  • Internet resources for participatory action
    research