Title: Practical Approach to Monitoring and Evaluating the Program Process
Joseph Telfair, DrPH, MSW/MPH
Professor, Department of Public Health
School of Health and Human Performance
University of North Carolina at Greensboro
Greensboro, NC (USA)
j_telfai_at_uncg.edu | Phone: (336) 334-3240
OVERVIEW OF PRESENTATION
- Setting the Stage: Why Important, Definitions and Key Concepts
- Performance Measurement: Selecting and Constructing Measures
- Process Monitoring: Developing a Monitoring System
- Concluding Remarks
- Questions and Discussion
Setting the Stage
WHY? (1)
- Three primary reasons:
  - To develop and maintain an effective program and service delivery process at the state and local level
  - To enhance staff's understanding of the factors that contribute to the extent to which, and in what ways, the specific aims, program service targets, and evaluation objectives are being followed
WHY? (2)
- Three primary reasons (cont.):
  - To assure staff and stakeholders by putting in place a process for determining whether or not the program and service delivery activities are succeeding as planned
Definitions and Key Concepts (1)
- Definition: Evaluation or program measurement (PM) is a systematic process for staff and institutions to obtain information on the service delivery process, its outcomes, and the effectiveness of its work, so that they can improve the process and describe its accomplishments (Mattessich, P. W., 2003, p. 3, modified)
Definitions and Key Concepts (2)
- Definition: Program monitoring is the process of assessing progress toward achievement of a service delivery process's objectives to determine whether the process was implemented as planned (Peoples-Sheps & Telfair, 2005; see handout)
Definitions and Key Concepts (3)
- Evaluation or PM involves a comparison of the staff's planned processes and outcomes with selected standards in order to assess accomplishments
- Evaluation or PM involves the application of social science methods to determine whether assessed efforts are the cause of observed results
Definitions and Key Concepts (4)
- Evaluation or PM relies on both qualitative and
quantitative methods, and often a triangulation
of the two, to produce informative results
Definitions and Key Concepts (5)
- Program monitoring is carried out by assessing the extent to which a program is implemented as designed, which involves tracking progress toward achievement of a program's objectives (Peoples-Sheps & Telfair, 2005)
- It is a very traditional form of assessment that is generally considered an administrative function and integral to the ongoing operations of every program (Kettner et al., 1999)
Definitions and Key Concepts (6)
- Definition: A performance measure is a specific, quantitative or qualitative representation (measure) of a capacity, process, or outcome deemed relevant to the assessment of program performance (Peoples-Sheps & Telfair, 2005)
Definitions and Key Concepts (7)
- Both program monitoring and performance measures depend on strong, meaningful measures of program and service delivery process performance
Performance Measurement
Selecting or Constructing Measures (1)
- Deciding what to measure is an essential first step
- The aspects of the service delivery process that are measured attract attention and generate action (Hatry, 1999)
- Conversely, aspects not measured may go unnoticed until a crisis brings them to the surface (e.g., discovery of inadequate data collection efforts that did not allow population or service targets to be met)
Selecting or Constructing Measures (2)
- If the staff takes the time to think through what is needed, they are much less likely to miss something important
- To cover all of the bases, start with the monitoring and evaluation's specific aim(s) or hypothesis(es) to identify the main program and service delivery efforts and expected outcomes
Selecting or Constructing Measures (3)
- To construct performance measures, three tasks must be undertaken:
  - identifying concepts to be measured
  - selecting or constructing measures
  - locating or developing data sources
Selecting or Constructing Measures (4)
- Performance measures can be formulated in many different ways. They may be:
  - numbers (number of TB deaths)
  - rates (TB mortality rate)
  - proportions or percentages (percentage of days missed at work among persons with TB)
Selecting or Constructing Measures (5)
- Performance measures can be formulated in many different ways. They may be (cont.):
  - averages (average number of emergency department visits per person 18 to 44 years of age in a given year)
  - categories (team meetings held)
- Numbers, percentages, and rates are the most frequently used in MCH
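As a toy illustration of the formulations listed above, the sketch below computes a number, a rate, a percentage, and an average from invented data (all values are made up for this example; none come from the source):

```python
# Toy data, invented for illustration: a population of 50,000 with 12 TB
# deaths; 140 persons with TB, of whom 35 missed at least one work day;
# and emergency department (ED) visit counts for five adults aged 18-44.
tb_deaths = 12
population = 50_000
persons_with_tb = 140
missed_work = 35
ed_visits = [0, 2, 1, 3, 0]

number = tb_deaths                                  # a simple count
rate = tb_deaths * 100_000 / population             # deaths per 100,000
percentage = missed_work / persons_with_tb * 100    # % who missed work
average = sum(ed_visits) / len(ed_visits)           # mean ED visits/person

print(number, rate, percentage, average)  # 12 24.0 25.0 1.2
```

The same raw counts can feed several formulations; which one is most informative depends on the audience and the comparison being made.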
Selecting or Constructing Measures (6)
- Numbers, percentages, and rates are the most frequently used in MCH
- Least used, but often just as critical, are qualitative indicators such as consensus measures, aggregated (agreement/disagreement) statements, and archival text-based descriptors (e.g., policy statements and group opinions from advisory or consumer groups)
Selecting or Constructing Measures (7)
- It is often helpful to include numbers and
qualitative indicators along with rates and
percentages so that the latter measures can be
understood in the context of the type of service
focus for which they were derived - To select or develop high-quality performance
measures, candidate measures are generally
assessed according to criteria that represent
both scientific rigor and practical relevance
Selecting or Constructing Measures (8)
- Responsive measures are able to detect a change
- Measures need to be understandable to the audience to whom they will be presented
- Regardless of how it is formulated, a measure should have very precise wording, a specific timeframe, and a clearly defined research population (e.g., persons with TB, for quantitative measures) or set of tasks (e.g., steps for securing the needed sample, for qualitative measures)
Selecting or Constructing Measures (9)
- A performance measure should be meaningful,
valid, reliable, responsive, and understandable
and should allow for risk adjustments (errors)
Selecting or Constructing Measures (10)
- A valid measure is one that measures what it
intends to measure. - Validity, like all of the qualities in this list,
is measured on a continuum, meaning that some
measures have greater validity than others
Selecting or Constructing Measures (11)
- Reliable performance measures can be reproduced
regardless of who collects the data or when they
are collected (assuming the true results have not
changed) - Like validity, reliability is viewed as a
continuum
Selecting or Constructing Measures (12)
- The selection of measures is closely tied to the data or research project information available to construct them
- Data or information sources should:
  - be of high quality, with standardized definitions (as defined and agreed upon by the research team) and data collection methods, and
  - have acceptable levels of validity and reliability on the items of interest
Selecting or Constructing Measures (13)
- Data or information sources should (cont.):
  - be available within the program service delivery timeframe (e.g., 3 years)
  - have costs that conform to the budgetary constraints of the program
- It is more efficient, but not essential, to construct measures from existing, or secondary, data sources rather than to collect new data specifically for a given set of performance measures
PROCESS MONITORING
[Diagram] Source: Mattessich, P. W. (2003), p. 10
Developing a Monitoring System (1)
- Development of a monitoring system is an essential component of the program and service delivery process measurement plan
- The monitoring process described in this presentation:
  - identifies the program's objectives, the base from which formulas to measure progress are developed
Developing a Monitoring System (2)
- The monitoring process described in this presentation (cont.):
  - assigns the relative strength or emphasis of a measure as necessary
  - develops data collection plans
  - calculates achievement scores at predetermined intervals
Developing a Monitoring System (3)
- Start with the Aim-linked objectives
- The objectives of a Specific Aim, each of which consists of a performance measure and a target, serve as the foundation for project monitoring
- Fully developed, measurable objectives must correspond with the program or service purpose
- Performance measures must be developed as the program is being planned
Developing a Monitoring System (4)
- Each objective should have an explicit date by which the target is to be achieved (see example on the next slide)
- With objectives clearly and precisely stated, the next challenge is to develop a system through which progress toward meeting the program's targets can be monitored
Performance Measure | Target
Percentage of adults in Village 2, by desired gender and age, with BMI within normal range | A 7% increase over baseline (estimated at 80%)
Average amount of time spent collecting staff comments per week by program assistants | Four hours
Number of adults from Village 2 in the project shuttled to and from the city for the purpose of data gathering | Thirty adults sampled on 80% of the allocated study days per month
Developing a Monitoring System (5)
- The information derived from monitoring shows
which program objectives need more attention in
the future and whether any of them require less
intensive work - If the process has fallen short on some
objectives, this information should trigger an
in-depth search for the reasons expected targets
were not achieved
Developing a Monitoring System (6)
- The table on the next slide shows the components of a monitoring system
- The first two columns are identical to those in the previous slide showing performance measures and targets
Developing a Monitoring System (7)
- The remaining three columns represent the basic elements of a monitoring system, as it builds on the program's Specific Aims-linked objectives
Performance Measure | Target | Formula to Measure Progress | Results at End of Year 1 | Achievement Score
Percentage of adults in Village 2, by desired gender and age, with BMI within normal range | A 7% increase over baseline (estimated at 80%) | (Percentage over baseline with BMI within normal range) / 7% | 1.75% | 0.25
Number of adults from Village 2 in the project shuttled to and from the city for the purpose of data gathering | Thirty adults sampled on 80% of the allocated study days per month | (Number of adults sampled on 80% of study days) / 30 | 24 | 0.80
Average amount of time spent collecting comments per week by program assistants | Four hours | (Number of hours spent collecting comments) / 4 | 3.2 hours | 0.80
Developing a Monitoring System (8)
- Formulas
  - The first step in developing a monitoring system is to construct formulas to reflect progress toward achievement of the objectives' targets
  - The formula is based on the principle that a score of 1.00 is complete accomplishment
Developing a Monitoring System (9)
- Formulas (cont.)
  - For example, a score of 0.99 or lower signifies that the performance measure fell short of the target; a score that exceeds 1.00 indicates greater than expected achievement
Developing a Monitoring System (10)
- Formulas (cont.)
  - Three types of formulas can serve this purpose
  - When the target is a percentage, proportion, or simple count, the most informative and frequently used formula involves dividing the level of actual achievement at a specified time by the level given in the target:

    Achievement score = Actual value / Targeted value
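A minimal Python sketch of this calculation, applied to the year-1 values from the example monitoring table (the function and variable names are ours, not the source's):

```python
def achievement_score(actual, target):
    """Divide the actual value at a specified time by the targeted value.

    1.00 = complete accomplishment; below 1.00 = shortfall;
    above 1.00 = greater than expected achievement.
    """
    return actual / target

# Year-1 results vs. targets from the example monitoring table
objectives = [
    ("% increase over baseline with BMI in normal range", 1.75, 7.0),
    ("Adults sampled on 80% of study days", 24, 30),
    ("Hours/week collecting comments", 3.2, 4.0),
]

for name, actual, target in objectives:
    print(f"{name}: {achievement_score(actual, target):.2f}")
# Scores: 0.25, 0.80, 0.80 -- matching the table's Achievement Score column
```

Because the score is a plain ratio of actual to targeted value, it works the same whether the target is a count, a percentage, or an average, which is why it is the most frequently used of the formula types.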
Developing a Monitoring System (11)
- Data collection plan
  - The first three columns of the previous table should be completed with the project's initial plan
  - To create a fully operational monitoring system, one more step is required: the data items and sources necessary to construct performance measures should be identified
Developing a Monitoring System (12)
- This step should not be skipped, even if some data sources seem obvious, since it is far too common to discover that researchers had incorrectly assumed the necessary data would be available and accessible when needed
Developing a Monitoring System (13)
- Interpretation of Results
- The information derived from monitoring shows
which objectives need more attention in
subsequent years and whether any of them require
less intensive work - Adjustments in resource allocations can be based
on the needs of specific objectives for more or
less effort
Developing a Monitoring System (14)
- Interpretation of Results (cont.)
  - Careful assessment of the reasons for shortfalls on objectives should be conducted before any reallocation decisions are made
  - A review of end-of-year achievement scores provides helpful information for further investigation and subsequent adjustments to the process
IN CONCLUSION
IN CONCLUSION (1)
- Service programs may not reach their targets for a number of reasons
- A primary reason is inadequate resources, which may take the form of insufficient funds across the board or misallocation of funds across Specific Aims-linked objectives
- It may be possible to detect misallocation if some targets are overachieved while others fall short
IN CONCLUSION (2)
- Other commonly cited reasons why programs may
fall short in achieving objectives include - a lack of adequate knowledge about feasible
target levels - external factors that make it difficult or
impossible to reach the target (e.g., inability
to find or retain clients that meet the program
criteria) - inaccurate measurement of the objective
- a conceptual error in the program purpose
IN CONCLUSION (3)
- As an evaluation strategy, monitoring has three important shortcomings
- First, it does not produce evidence of cause-effect relationships; only evaluation research can do that
- Second, the results of monitoring are limited to a single program; they cannot be extrapolated from one program to another
IN CONCLUSION (4)
- As an evaluation strategy, monitoring has three important shortcomings (cont.)
- Finally, there are no firm guidelines for interpretation of the scores
- Although a score of 0.70 might be considered good and 0.90 might be superior, the most useful interpretations depend on the program's context and purpose (Peoples-Sheps, Rogers, & Finerty, 2002)
IN CONCLUSION (5)
- Advantages and Disadvantages of Monitoring
- Program monitoring is a valuable tool for
planning and management decisions - The process is inexpensive and can be applied
readily by anyone with entry-level training or
experience
IN CONCLUSION (6)
- Advantages and Disadvantages of Monitoring (cont.)
- It includes a flexible set of methods that can be modified to accommodate the needs of each service program at both the state and local level
- Monitoring requires staff to develop objectives that serve as the basis of the service delivery process and then to plan for necessary data so that the capability for tracking progress is assured
IN CONCLUSION (7)
- Another important advantage is that it encourages
the production of information for critical
management decisions in both short- and long-term
time frames and across all levels of the service
delivery process - Thus, it is compatible with most governmental
programmatic guidelines
QUESTIONS and Discussion
References (1)
- Peoples-Sheps, M. D., Byars, E., Rogers, M. M., Finerty, E. J., & Farel, A. (2001). Setting objectives (revised). Chapel Hill, NC: The University of North Carolina at Chapel Hill.
- Peoples-Sheps, M. D., & Telfair, J. (2005). Maternal and child health program monitoring and performance appraisal. In J. Kotch (Ed.), Maternal and Child Health: Programs, Problems, and Policies in Public Health (2nd ed., Chapter 16). Boston, MA: Jones & Bartlett Publishers.
References (2)
- Grembowski, D. (2001). The practice of health program evaluation. Thousand Oaks, CA: Sage Publications.
- Hatry, H. P. (1999). Performance measurement: Getting results. Washington, DC: The Urban Institute Press.
- Kettner, P. M., Moroney, R. M., & Martin, L. L. (1999). Designing and managing programs: An effectiveness-based approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
References (3)
- Durch, J. S., Bailey, L. A., & Stoto, M. A. (Eds.). (1997). Improving health in the community: A role for performance monitoring. Washington, DC: National Academy Press.
- Mattessich, P. W. (2003). The manager's guide to program evaluation. Saint Paul, MN: Wilder Publishing Center.