Impact and Outcome Evaluation
1
(No Transcript)
2
Evaluation: Impact and Outcome Evaluation
  • Impact and outcome evaluation involve measuring
    the effects of an intervention, investigating the
    direction and degree of change
  • Impact evaluation assesses the immediate effects
    of the intervention and corresponds with
    measuring the intervention objectives
  • Outcome evaluation measures the longer-term
    effects of the intervention and corresponds to
    the intervention goal

3
Impact and Outcome Evaluation
4
What is the difference?
  • Impact and outcome evaluation both involve the
    assessment of intervention effects but at
    different levels
  • Impact and outcome evaluation test the logic
    model or causal chain of events that has been
    postulated
  • e.g. changing knowledge, awareness and
    availability changes dietary behaviour
  • The key difference between impact and outcome
    evaluation is not what is being measured but the
    sequence of measurement
  • i.e. which aspects of the causal chain the
    intervention goals and objectives aim to address
  • Factors assessed in outcome evaluation in one
    intervention may be assessed as part of impact
    evaluation in another intervention

5
When to evaluate?
  • Predicting when the intervention effect(s) will
    occur, and timing impact and outcome evaluation
    accordingly, is important for obtaining valid
    findings
  • There are several possible effects an
    intervention can have over time
  • Ideal effect
  • Sleeper effect
  • Backsliding effect
  • Trigger effect
  • Historical effect
  • Backlash effect
  • Intelligence should be used to predict the type
    of effect your intervention will have; if
    intelligence is lacking, a pilot study is
    recommended

6
Key measures
  • A mixture of qualitative and quantitative methods
    is used across six key evaluation measures
  • The extent to which each method is used depends
    upon the intervention strategies, the target
    group and the size of the intervention
  • Knowledge
  • Involves assessing what people know, recognise,
    are aware of, understand and have learned
  • Commonly broken into measuring awareness or
    recognition of an intervention or intervention
    message

7
Key measures
  • Attitude and self-efficacy
  • Involves assessing how people feel about the
    intervention or topic matter, or their ability to
    participate in intervention activities
  • Commonly involves qualitative methods which
    encourage more freedom in expression
  • Methods of exploring attitudes can include
    showing short films, role-plays or picture/verbal
    stories of scenarios depicting the topic of
    interest
  • Behaviour
  • Measuring behaviour can be achieved through
    self-report; however, this method is generally
    not accurate because of social desirability bias
  • Food/exercise diaries or observation can help to
    minimise inaccuracies

8
Key measures
  • Health status
  • When selecting a health status measure it is
    important to
  • revisit the intended effect of the intervention
  • ensure the measure suits the target group
  • Health status can be measured using biochemical
    or anthropometric indicators (see the sketch
    after this list)
  • Social support
  • A variety of self-completed questionnaires and
    interview schedules are available to measure
    social support
  • Simple measures can also be used, for example,
    the number of young mothers who can provide the
    names of each other's partners
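
As a minimal sketch of an anthropometric indicator, the snippet below computes body mass index (BMI) from measured weight and height; BMI is used here purely for illustration, since the slides do not name a specific indicator.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Hypothetical pre- and post-intervention measurements for one participant.
baseline  = bmi(82.0, 1.70)   # roughly 28.4
follow_up = bmi(78.5, 1.70)   # roughly 27.2
print(f"Change in BMI: {follow_up - baseline:.1f}")
```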

9
Key measures
  • Environmental support
  • Measuring environmental support considers change
    in the physical environment, policies,
    legislation and workforce support
  • For example, a workplace physical activity
    intervention may audit the work environment,
    considering the availability of secure bike
    racks, shower, locker or gym facilities, the
    accessibility of stairwells, etc.
  • Environmental audit tools for different
    surroundings such as schools, workplaces and
    communities are becoming more readily available

10
Reliability
  • Reliability is the stability of a measure
  • A reliable tool measures the same things each
    time the measure is used and for each person it
    is used with
  • The method used to test and develop reliability
    is to repeat administration of the measurement on
    the same subjects, using the same administration
    procedures, within a short period of time
  • i.e. to check that this test-retest procedure
    elicits the same results (see the sketch after
    this list)
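
As a minimal sketch of a test-retest check, assuming hypothetical scores recorded twice for the same participants, the Pearson correlation between the two administrations serves as a simple reliability coefficient (dedicated statistics such as the intraclass correlation are often preferred, but the idea is the same).

```python
import numpy as np

# Hypothetical scores from the same 8 participants, measured twice
# a short time apart using identical administration procedures.
first_administration  = np.array([12, 15, 9, 20, 17, 11, 14, 18])
second_administration = np.array([13, 14, 10, 19, 18, 11, 15, 17])

# Pearson correlation between the two administrations; values near 1
# suggest the tool gives stable results over a short retest interval.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")
```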

11
Validity
  • Validity is the truth of a measure
  • A valid tool is a tool that measures what it is
    intended to measure
  • A common approach to assessing validity is to use
    biochemical or physiological tests, where these
    tests are considered 'true' measures of the
    factors of interest (see the sketch after this
    list)
  • Some factors (attitudes, beliefs, capacity
    building) cannot be objectively assessed, though
    some simple procedures can be employed to assess
    validity
  • Face validity: expert consensus on the
    measurement tool
  • Content validity: ensures the factor is covered
    in the measurement items
  • Construct validity: turning non-observable
    concepts into measures
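
As a minimal sketch of validating a tool against a 'true' measure, assuming hypothetical data in which self-reported fruit and vegetable intake is compared with a biochemical indicator (plasma vitamin C is used here purely for illustration), a rank correlation gives a simple criterion-validity check.

```python
from scipy.stats import spearmanr

# Hypothetical paired data for 8 participants: questionnaire-based
# fruit/vegetable servings per day vs. plasma vitamin C (umol/L),
# the latter treated as the 'true' biochemical reference measure.
self_reported_servings = [2.0, 3.5, 1.0, 5.0, 4.0, 2.5, 3.0, 4.5]
plasma_vitamin_c       = [28,  45,  20,  62,  55,  33,  40,  58]

rho, p_value = spearmanr(self_reported_servings, plasma_vitamin_c)
print(f"Criterion validity: Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```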

12
Sampling Bias
  • Bias occurs when something differs systematically
    from the true situation and influences the
    evaluation conclusions
  • Sampling bias concerns the characteristics of
    intervention participants, reasons for their
    participation and the duration of their
    participation
  • How participants are recruited to participate in
    the intervention
  • Whether or not participants represent the whole
    target population
  • Non-response: when an appropriate person refuses
    to participate
  • Participant retention or drop-out (a simple
    calculation is sketched after this list)
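
As a minimal sketch, assuming hypothetical recruitment and follow-up counts, the response and retention rates below are the basic quantities used to judge non-response and drop-out bias.

```python
# Hypothetical counts for one intervention site.
eligible_invited   = 250   # appropriate people approached
agreed_to_join     = 180   # consented participants (non-response = 70)
completed_followup = 140   # still enrolled at the final measurement

response_rate  = agreed_to_join / eligible_invited
retention_rate = completed_followup / agreed_to_join

print(f"Response rate:  {response_rate:.0%}")   # 72%
print(f"Retention rate: {retention_rate:.0%}")  # ~78%
```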

13
Sampling Methods
  • Most PHN interventions rely upon a subset of
    individuals from the population to assess the
    impact and outcome of the intervention
  • A random sample is the best method for evaluating
    intervention effects in population groups because
    the effects can be considered applicable to the
    entire target population (a simple random draw is
    sketched after this list)
  • It may not always be possible or practical to
    achieve a true random sample, and oversampling of
    a specific group or convenience sampling may
    result
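
As a minimal sketch of drawing a simple random sample, assuming a hypothetical sampling frame of participant IDs, Python's standard library can select the evaluation subset without replacement.

```python
import random

# Hypothetical sampling frame: IDs for the whole target population.
population_ids = list(range(1, 1001))    # 1000 eligible individuals

random.seed(42)                          # fixed seed so the draw is reproducible
evaluation_sample = random.sample(population_ids, k=100)  # simple random sample of 100

print(evaluation_sample[:10])            # first few sampled IDs
```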

14
Statistical Analysis
  • Statistical analysis allows evaluation data to be
    interpreted and produces useful information about
    intervention success or otherwise
  • Statistical methods should be considered during
    evaluation planning to determine the sample size
    and which statistical tests to apply
  • Some key statistical considerations, illustrated
    in the sketch after this list, include
  • Statistical significance
  • Confidence intervals
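
As a minimal sketch of both considerations, assuming hypothetical pre- and post-intervention measurements for the same participants, a paired t-test gives the statistical significance of the mean change and a 95% confidence interval describes its likely size.

```python
import numpy as np
from scipy import stats

# Hypothetical daily vegetable servings for 10 participants,
# measured before and after the intervention.
pre  = np.array([1.5, 2.0, 1.0, 2.5, 1.8, 2.2, 1.2, 1.7, 2.0, 1.4])
post = np.array([2.2, 2.4, 1.6, 2.9, 2.1, 2.8, 1.5, 2.3, 2.6, 1.9])

# Statistical significance of the mean change (paired t-test).
t_stat, p_value = stats.ttest_rel(post, pre)

# 95% confidence interval for the mean change.
diff = post - pre
ci_low, ci_high = stats.t.interval(
    0.95, df=len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff)
)

print(f"Mean change = {diff.mean():.2f} servings, p = {p_value:.4f}")
print(f"95% CI for the change: ({ci_low:.2f}, {ci_high:.2f})")
```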

15
Possible evaluation designs for PHN interventions
16
Impact and Outcome Evaluation