Rate of Improvement Version 2.0: Research-Based Calculation and Decision Making

Transcript and Presenter's Notes

1
Rate of Improvement Version 2.0: Research-Based
Calculation and Decision Making
  • Caitlin S. Flinn, EdS, NCSP
  • Andrew E. McCrea, MS, NCSP
  • Matthew Ferchalk, EdS, NCSP
  • ASPP Conference 2010

2
Today's Objectives
  • Explain what RoI is, why it is important, and how
    to compute it.
  • Establish that Simple Linear Regression should be
    the standardized procedure for calculating RoI.
  • Discuss how to use RoI within a problem
    solving/school improvement model.

3
RoI Definition
  • Algebraic term: slope of a line
  • Vertical change over the horizontal change
  • Rise over run
  • m = (y2 - y1) / (x2 - x1)
  • Describes the steepness of a line (Gall & Gall, 2007)

4
RoI Definition
  • Finding a student's RoI = finding the slope of a line
  • Using two data points on that line
  • Finding the line itself
  • Linear regression
  • Ordinary Least Squares (see the sketch below)
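
A minimal sketch (not part of the original deck) of the two ideas above in Python, using made-up weekly scores: the slope from just two points versus the OLS line of best fit through all points.

```python
import numpy as np

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
wcpm = np.array([41, 44, 43, 50, 49, 55, 54, 60], dtype=float)  # hypothetical ORF scores

# Slope from two data points on the line: rise over run.
two_point = (wcpm[-1] - wcpm[0]) / (weeks[-1] - weeks[0])

# Ordinary Least Squares: slope of the line of best fit through ALL points.
ols_slope, intercept = np.polyfit(weeks, wcpm, deg=1)

print(f"two-point slope: {two_point:.2f} WCPM per week")
print(f"OLS slope:       {ols_slope:.2f} WCPM per week")
```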

5
How does Rate of Improvement Fit into the Larger
Context?
6
School Improvement/Comprehensive School Reform
Response to Intervention
Dual Discrepancy: Level & Growth
Rate of Improvement
7
School Improvement/Comprehensive School Reform
  • Grade level content expectations (ELA, math,
    science, social studies, etc.).
  • Work toward these expectations through classroom
    instruction.
  • Understand impact of instruction through
    assessment.

8
Assessment
  • Formative Assessments / High Stakes Tests
  • Does the student have command of the content expectation (standard)?
  • Universal Screening using CBM
  • Does the student have basic skills appropriate for age/grade?

9
Assessment
  • Q: For students who are not proficient on grade level content standards, do they have the basic reading/writing/math skills necessary?
  • A: Look at Universal Screening: if above criteria, gear intervention toward the content standard; if below criteria, gear intervention toward the basic skill.

10
Progress Monitoring
  • Frequent measurement of knowledge to inform our understanding of the impact of instruction/intervention.
  • Measures of basic skills (CBM) have demonstrated reliability & validity (see table at www.rti4success.org).

11
[Flowchart] Classroom Instruction (Content Expectations) → Measure Impact (Test) → Proficient! / Non-Proficient → Content Need? or Basic Skill Need? (use diagnostic test to differentiate) → Intervention & Progress Monitoring (with CBM, if CBM is an appropriate measure) → Rate of Improvement
12
So…
  • Rate of Improvement (RoI) is how we understand
    student growth (learning).
  • RoI is reliable and valid (psychometrically
    speaking) for use with CBM data.
  • RoI is best used when we have CBM data, most
    often when dealing with basic skills in
    reading/writing/math.
  • RoI can be applied to other data (like behavior)
    with confidence too!
  • RoI is not yet tested on typical Tier I formative
    classroom data.

13
RoI is usually applied to
  • Tier One students in the early grades at risk for
    academic failure (low green kids).
  • Tier Two & Three Intervention Groups.
  • Special Education Students (and IEP goals)
  • Students with Behavior Plans

14
RoI Foundations
  • Deno, 1985
  • Curriculum-based measurement
  • General outcome measures
  • Short
  • Standardized
  • Repeatable
  • Sensitive to change

15
RoI Foundations
  • Fuchs & Fuchs, 1998
  • Hallmark components of Response to Intervention
  • Ongoing formative assessment
  • Identifying non-responding students
  • Treatment fidelity of instruction
  • Dual discrepancy model
  • One standard deviation from typically performing
    peers in level and rate

16
RoI Foundations
  • Ardoin & Christ, 2008
  • Slope for benchmarks (3x per year)
  • More growth from fall to winter than winter to
    spring
  • Might be helpful to use RoI for fall to winter
  • And a separate RoI for winter to spring

17
RoI Foundations
  • Fuchs, Fuchs, Walz, & Germann, 1993
  • Typical weekly growth rates
  • Needed growth
  • 1.5 to 2.0 times typical slope to close gap in a
    reasonable amount of time

18
RoI Foundations
  • Deno, Fuchs, Marston, & Shin, 2001
  • Slope of frequently non-responsive children
    approximated slope of children already identified
    as having a specific learning disability

19
RoI Statistics
  • Gall & Gall, 2007
  • 10 data points are a minimum requirement for a
    reliable trendline
  • How does that affect the frequency of
    administering progress monitoring probes?

20
Importance of Graphs
  • Vogel, Dickson, & Lehman, 1990
  • Speeches that included visuals, especially in color, improved:
  • Immediate recall by 8.5%
  • Delayed recall (3 days) by 10.1%

21
Importance of Graphs
  • Seeing is believing.
  • Useful for communicating large amounts of
    information quickly
  • A picture is worth a thousand words.
  • Transcends language barriers (Karwowski, 2006)
  • Responsibility for accurate graphical
    representations of data

22
Skills Typically Graphed
  • Reading
  • Oral Reading Fluency
  • Word Use Fluency
  • Reading Comprehension
  • MAZE
  • Retell Fluency
  • Early Literacy Skills
  • Initial Sound Fluency
  • Letter Naming Fluency
  • Letter Sound Fluency
  • Phoneme Segmentation Fluency
  • Nonsense Word Fluency
  • Spelling
  • Written Expression
  • Behavior
  • Math
  • Math Computation
  • Math Facts
  • Early Numeracy
  • Oral Counting
  • Missing Number
  • Number Identification
  • Quantity Discrimination

23
Importance of RoI
  • Visual inspection of slope
  • Multiple interpretations
  • Instructional services
  • Need for explicit guidelines

24
Ongoing Research
  • Using RoI for instructional decisions is not a perfect process.
  • Research is currently addressing sources of error:
  • Christ, 2006: standard error of measurement for slope
  • Ardoin & Christ, 2009: passage difficulty and variability
  • Jenkins, Graff, & Miglioretti, 2009: frequency of progress monitoring

25
Future Considerations
  • Questions yet to be empirically answered
  • What parameters of RoI indicate a lack of RtI?
  • How does standard error of measurement play into
    using RoI for instructional decision making?
  • How does RoI vary between standard protocol
    interventions?
  • How does this apply to non-English speaking
    populations?

26
How is RoI Calculated? Which way is best?
27
Multiple Methods for Calculating Growth
  • Visual Inspection Approaches
  • Eye Ball Approach
  • Split Middle Approach
  • Tukey Method
  • Quantitative Approaches
  • Last point minus First point Approach
  • Split Middle & Tukey plus
  • Linear Regression Approach

28
The Visual Inspection Approaches
29
Eye Ball Approach
30
Split Middle Approach
  • Drawing a line through the two points obtained from the median data values and the median days when the data are divided into two sections (Shinn, Good, & Stein, 1989).

31
Split Middle
[Graph: Split Middle trendline with median points X(14) and X(9) marked]
32
Tukey Method
  • Divide scores into 3 equal groups
  • Divide groups with vertical lines
  • In the 1st and 3rd groups, find the median data point and the median week and mark each with an X
  • Draw a line between the two Xs (sketched in code below)
  • (Fuchs et al., 2005. Summer Institute: Student progress monitoring for math. http://www.studentprogress.org/library/training.asp)
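
A rough sketch of those steps in Python (an illustration, not the presenters' tool; assumes the probes are already ordered by week and split into equal thirds):

```python
import statistics

def tukey_slope(weeks, scores):
    """Slope of the line through the median points of the first
    and last thirds of the data (the two Xs)."""
    third = len(scores) // 3
    x1 = statistics.median(weeks[:third])    # median week, 1st group
    y1 = statistics.median(scores[:third])   # median score, 1st group
    x2 = statistics.median(weeks[-third:])   # median week, 3rd group
    y2 = statistics.median(scores[-third:])  # median score, 3rd group
    return (y2 - y1) / (x2 - x1)

# Nine hypothetical weekly probes
print(tukey_slope(list(range(1, 10)), [8, 10, 9, 11, 13, 12, 13, 15, 14]))
```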

33
Tukey Method
[Graph: Tukey trendline with median points X(14) and X(8) marked]
34
The Quantitative Approaches
35
Last minus First
  • Iris Center: last probe score minus first probe score, over last administration period minus first administration period (see the sketch below).
  • (Y2 - Y1) / (X2 - X1) = RoI
  • http://iris.peabody.vanderbilt.edu/resources.html
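
The same hypothetical data from the Tukey sketch, run through a last-minus-first sketch, shows how little of the record this method uses:

```python
def last_minus_first(weeks, scores):
    """(Y2 - Y1) / (X2 - X1), using only the first and last probes."""
    return (scores[-1] - scores[0]) / (weeks[-1] - weeks[0])

# Same nine hypothetical probes as the Tukey sketch above
print(last_minus_first(list(range(1, 10)), [8, 10, 9, 11, 13, 12, 13, 15, 14]))
# Every probe between the two endpoints is ignored.
```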

36
Last minus First
37
Split Middle Plus
[Graph: Split Middle trendline with X(14) and X(9) marked]
(14 - 9) / 8 = 0.63
38
Tukey Method Plus
[Graph: Tukey trendline with X(14) and X(8) marked]
(14 - 8) / 8 = 0.75
39
Linear Regression
40
RoI Consistency?
Any method of visual inspection: ???
Last minus First: 0.75
Split Middle Plus: 0.63
Tukey Plus: 0.75
Linear Regression: 1.10
41
RoI Consistency?
  • If we are not all using the same model to compute
    RoI, we continue to have the same problems as
    past models, where under one approach a student
    meets SLD criteria, but under a different
    approach, the student does not.
  • Hypothetically, if the RoI cut-off were 0.65 or 0.95, different approaches would come to different conclusions about the same student.

42
RoI Consistency?
  • Last minus First (Iris Center) and Linear Regression (Shinn, etc.) are the only quantitative methods discussed in the CBM literature.
  • Study of 37 at-risk 2nd graders:

Difference in RoI between LmF and LR methods:
Whole Year: 0.26 WCPM
Fall: 0.31 WCPM
Spring: 0.24 WCPM
(McCrea, 2010, unpublished data)
43
Technical Adequacy
  • Without a consensus on how to compute RoI, we
    risk falling short of having technical adequacy
    within our model.

44
So, Which RoI Method is Best?
45
Literature shows that Linear Regression is Best Practice
  • "Students' daily test scores…were entered into a computer program…The data analysis program generated slopes of improvement for each level using an Ordinary-Least Squares procedure (Hayes, 1973) and the line of best fit."
  • "This procedure has been demonstrated to represent CBM achievement data validly within individual treatment phases (Marston, 1988; Shinn, Good, & Stein, in press; Stein, 1987)."
  • Shinn, Gleason, & Tindal, 1989

46
Growth (RoI) Research using Linear Regression
  • Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.
  • Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
  • Good, R. H. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179-193.
  • Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

47
Growth (RoI) Research using Linear Regression
  • Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
  • Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. The Journal of Special Education, 23, 223-233.
  • Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356-370.

48
So, Why Are There So Many Other RoI Models?
  • Ease of application
  • Focus on Yes/No to goal acquisition, not degree
    of growth
  • How many of us want to calculate OLS Linear
    Regression formulas (or even remember how)?

49
Pros and Cons of Each Approach
Method: Pros / Cons
Eye Ball: Easy, understandable / Subjective
Split Middle & Tukey: No software needed; compare to aim/goal line / Yes/No to goal acquisition only; no statistic provided, so no idea of the degree of growth
50
Pros and Cons of Each Approach
Method: Pros / Cons
Last minus First: Provides a growth statistic; easy to compute / Does not consider all data points, only two
Split Middle & Tukey Plus: Considers all data points; easy to compute / No support for the "plus" part of the methodology
Linear Regression: Uses all data points; best practice / Calculating the statistic
51
An Easy and Applicable Solution
52
Get Out Your Laptops!
  • Open Microsoft Excel

I love ROI
53
Graphing RoI for Individual Students
  • Programming Microsoft Excel to Graph Rate of
    Improvement
  • Fall to Winter

54
Setting Up Your Spreadsheet
  • In cell A1, type "3rd Grade ORF"
  • In cell A2, type "First Semester"
  • In cell A3, type "School Week"
  • In cell A4, type "Benchmark"
  • In cell A5, type the student's name ("Swiper Example")

55
Labeling School Weeks
  • Starting with cell B3, type numbers 1 through 18 going across row 3 (horizontally).
  • Numbers 1 through 18 represent the number of the
    school week.
  • You will end with week 18 in cell S3.

56
Labeling Dates
  • Note: You may choose to enter the date of each school week across row 2 to easily identify the school week.

57
Entering Benchmarks(3rd Grade ORF)
  • In cell B4, type 77. This is your fall benchmark.
  • In cell S4, type 92. This is your winter
    benchmark.

58
Entering Student Data (Sample)
  • Enter the following numbers, going across row 5,
    under corresponding week numbers.
  • Week 1: 41
  • Week 8: 62
  • Week 9: 63
  • Week 10: 75
  • Week 11: 64
  • Week 12: 80
  • Week 13: 83
  • Week 14: 83
  • Week 15: 56
  • Week 17: 104
  • Week 18: 74

59
CAUTION
  • If a student was not assessed during a certain week, leave that cell blank.
  • Do not enter a score of zero (0); it will be calculated into the trendline and interpreted as the student having read zero words correct per minute during that week.

60
Graphing the Data
  • Highlight cells A4 and A5 through S4 and S5
  • Follow Excel 2003 or Excel 2007 directions from
    here

61
Graphing the Data
  • Excel 2003
  • Across the top of your worksheet, click on
    Insert
  • In that drop-down menu, click on Chart
  • Excel 2007
  • Click Insert
  • Find the icon for Line
  • Click the arrow below Line

62
Graphing the Data
  • Excel 2003
  • A Chart Wizard window will appear
  • Excel 2007
  • 6 graphics appear

63
Graphing the Data
  • Excel 2003
  • Choose Line
  • Choose Line with markers
  • Excel 2007
  • Choose Line with markers

64
Graphing the Data
  • Excel 2003
  • Data Range tab
  • Columns
  • Excel 2007
  • Your graph appears

65
Graphing the Data
  • Excel 2003
  • Chart Title
  • X axis: School Week
  • Y axis: WPM
  • Excel 2007
  • Change your labels by right clicking on the graph

66
Graphing the Data
  • Excel 2003
  • Choose where you want your graph
  • Excel 2007
  • Your graph was automatically put into your data
    spreadsheet

67
Graphing the Trendline
  • Excel 2003
  • Right click on any of the student data points
  • Excel 2007

68
Graphing the Trendline
  • Excel 2003
  • Choose Linear
  • Excel 2007

69
Graphing the Trendline
  • Excel 2003
  • Choose Custom and check the box next to "Display equation on chart"
  • Excel 2007

70
Graphing the Trendline
  • Clicking on the equation highlights a box around
    it
  • Clicking on the box allows you to move it to a
    place where you can see it better

71
Graphing the Trendline
  • You can repeat the same procedure to create a trendline for the benchmark data points.
  • Suggestion: label that trendline "Expected RoI."
  • Move this equation under the first one.

72
Individual Student Graph
73
Individual Student Graph
  • The equation indicates the slope, or rate of
    improvement.
  • The number, or coefficient, before "x" is the
    average improvement, which in this case is the
    average number of words per minute per week
    gained by the student.

74
Individual Student Graph
  • The rate of improvement, or trendline, is calculated using linear regression (ordinary least squares).
  • To add more progress monitoring/benchmark scores once you've created a graph, enter the additional scores in row 5 under the corresponding school weeks.

75
Individual Student Graph
  • The slope can change depending on which week
    (where) you put the benchmark scores on your
    chart.
  • Enter benchmark scores based on when your school
    administers their benchmark assessments for the
    most accurate depiction of expected student
    progress.

76
Assuming Linear Growth
Why Graph only 18 Weeks at a Time?
  • Finding Curvilinear Growth

77
Non-Educational Example of Curvilinear Growth
78
Academic Example of Curvilinear Growth
79
McCrea, 2010
  • Looked at Rate of Improvement in a small 2nd grade sample
  • Found differences in RoI when computed separately for fall and spring
  • Average RoI for fall: 1.47 WCPM
  • Average RoI for spring: 1.21 WCPM

80
Ardoin & Christ, 2008
  • Slope for benchmarks (3x per year)
  • More growth from fall to winter than winter to
    spring

81
Christ, Yeo, & Silberglitt, in press
  • Growth across benchmarks (3X per year)
  • More growth from fall to winter than winter to
    spring
  • Disaggregated special education population

82
Graney, Missall, & Martinez, 2009
  • Growth across benchmarks (3X per year)
  • More growth from winter to spring than fall to
    winter with R-CBM.

83
Fien, Park, Smith, & Baker, 2010
  • Investigated the relationship b/w NWF gains and ORF/comprehension
  • Found greater NWF gains in fall than in spring.

84
DIBELS (6th ed.) ORF Change in Criteria
Grade   Fall to Winter   Winter to Spring
2nd     24               22
3rd     15               18
4th     13               13
5th     11               9
6th     11               5
85
AIMSweb Norms (Based on 50th Percentile)
Grade   Fall to Winter   Winter to Spring
1st     18               31
2nd     25               17
3rd     22               15
4th     16               13
5th     17               15
6th     13               12
86
Speculation as to Why RoI Differs Within the Year
  • Relaxed instruction after high-stakes testing in March/April: a "PSSA effect."
  • Depressed BOY benchmark scores due to summer break: a "rebound effect" (Clemens).
  • Instructional variables could explain the differing results of Graney (2009) versus Ardoin (2008) and Christ (in press) (Silberglitt).
  • Variability within progress monitoring probes (Ardoin & Christ, 2008) (Lent).

87
Programming Excel
  • Calculating Needed RoI
  • Calculating Actual (Expected) RoI - Benchmark
  • Calculating Actual RoI - Student

88
Calculating Needed RoI
  • In cell T3, type "Needed RoI"
  • Click on cell T5
  • In the fx line (at top of sheet), type this formula: =(S4-B5)/18
  • Then hit enter
  • Your result should read 2
  • This formula subtracts the student's starting score (B5) from the expected benchmark at the end of the semester (S4), then divides by 18 for the first 18 weeks (1st semester).

89
Calculating Actual (Expected) RoI - Benchmark
  • In cell U3, type "Actual RoI"
  • Click on cell U4
  • In the fx line (at top of sheet), type this formula: =SLOPE(B4:S4,B3:S3)
  • Then hit enter
  • Your result should read 1.06
  • This formula considers the 18 weeks of benchmark data and provides an average growth or change per week.

90
Calculating Actual RoI - Student
  • Click on cell U5
  • In the fx line (at top of sheet), type this formula: =SLOPE(B5:S5,B3:S3)
  • Then hit enter
  • Your result should read 1.89
  • This formula considers the 18 weeks of student data and provides an average growth or change per week. (A Python equivalent is sketched below.)
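
For anyone who wants to check Excel's work, here is a sketch of the same SLOPE computation in Python (an illustration using the sample row 5 data; missed weeks are stored as NaN so they drop out of the fit, mirroring the earlier caution to leave cells blank rather than typing zeros):

```python
import numpy as np

weeks = np.arange(1, 19, dtype=float)  # row 3: school weeks 1-18
nan = np.nan
student = np.array([41, nan, nan, nan, nan, nan, nan, 62, 63,
                    75, 64, 80, 83, 83, 56, nan, 104, 74])  # row 5

def excel_slope(y, x):
    """Like Excel's SLOPE(): OLS slope over the weeks that have a score."""
    mask = ~np.isnan(y)
    return np.polyfit(x[mask], y[mask], deg=1)[0]

print(f"student RoI: {excel_slope(student, weeks):.2f} WCPM per week")
```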

91
RoI as a Decision Tool within a Problem-Solving Model

92
Steps
  1. Gather the data
  2. Ground the data & set goals
  3. Interpret the data
  4. Figure out how to fit Best Practice into Public
    Education

93
Step 1: Gather Data
  • Universal Screening
  • Progress Monitoring

94
Common Screenings in PA
  • DIBELS
  • AIMSweb
  • MBSP
  • 4Sight
  • PSSA

95
Validated Progress Monitoring Tools
  • DIBELS
  • AIMSweb
  • MBSP
  • www.studentprogress.org

96
Step 2: Ground the Data
  • 1) To what will we compare our student growth
    data?
  • 2) How will we set goals?

97
Multiple Ways to Look at Growth
  • Needed Growth
  • Expected Growth & Percent of Expected Growth
  • Fuchs et al. (1993) Table of Realistic and Ambitious Growth
  • Growth Toward Individual Goal
  • Best Practices in Setting Progress Monitoring Goals for Academic Skill Improvement (Shapiro, 2008)

98
Needed Growth
  • Difference between the student's BOY (or MOY) score and the benchmark score at MOY (or EOY).
  • Example: MOY ORF = 10, EOY benchmark = 40, 18 weeks of instruction: (40 - 10) / 18 = 1.67. The student must gain 1.67 WCPM per week to make the EOY benchmark.

99
Expected Growth
  • Difference between two benchmarks.
  • Example: MOY benchmark is 20, EOY benchmark is 40; expected growth = (40 - 20) / 18 weeks of instruction = 1.11 WCPM per week.

100
Looking at Percent of Expected Growth
Percent of Expected Growth   Tier I          Tier II         Tier III
Greater than 150%            --              --              --
Between 110% and 150%        --              --              Possible LD
Between 95% and 110%         --              --              Likely LD
Between 80% and 95%          May Need More   May Need More   Likely LD
Below 80%                    Needs More      Needs More      Likely LD
(One way to code these bands is sketched below.)
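
One possible way to code the comparison (a sketch; the cut points come from the table above, and the band labels are paraphrased):

```python
def percent_of_expected(actual_roi, expected_roi):
    """Actual growth as a percentage of expected growth."""
    return 100.0 * actual_roi / expected_roi

def tier_one_two_band(pct):
    """Decision bands for the Tier I/II columns of the table."""
    if pct < 80.0:
        return "needs more"
    if pct < 95.0:
        return "may need more"
    return "adequate response"

pct = percent_of_expected(actual_roi=1.2, expected_roi=1.11)
print(f"{pct:.0f}% of expected growth -> {tier_one_two_band(pct)}")
```
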
101
Oral Reading Fluency Adequate Response Table
Grade   Realistic Growth   Ambitious Growth
1st     2.0                3.0
2nd     1.5                2.0
3rd     1.0                1.5
4th     0.9                1.1
5th     0.5                0.8
(WCPM per week)
102
Digit Fluency Adequate Response Table
Grade   Realistic Growth   Ambitious Growth
1st     0.3                0.5
2nd     0.3                0.5
3rd     0.3                0.5
4th     0.75               1.2
5th     0.75               1.2
(digits per week)
103
From Where Should Benchmarks/Criteria Come?
  • There appears to be a theoretical convergence on the use of local criteria (what scores do our students need in order to have a high probability of proficiency?) when possible.

104
Test Globally, Benchmark Locally
105
Objectives
  • Rationale for developing Local Benchmarks
  • Fun with Excel!
  • Fun with Algebra!
  • Local Benchmarks in Action

106
Rationale for Developing Local Benchmarks
  • Stage & Jacobsen (2001)
  • Slope in oral reading fluency reliably predicted performance on the Washington Assessment of Student Learning.
  • McGlinchey & Hixson (2004)
  • Results support the use of CBM for determining which students are at risk for reading failure and who will fail state tests.
  • Hintze & Silberglitt (2005)
  • Oral reading fluency is highly connected to state test performance and is accurate at predicting those students who are likely to not meet proficiency.
  • Shapiro et al. (2006)
  • Results of this study show that CBM can be a valuable source for identifying which students are likely to be successful or to fail state tests.
  • Ask Jason Pedersen!

107
Rationale for Developing Local Benchmarks
  • Identify and validate problems
  • Creating ideas for instructional grouping, focus, or intensity
  • Goal setting
  • Determining the focus and frequency of progress monitoring
  • Exiting students or moving students to different levels or tiers of intervention
  • Systems-level resource allocation and evaluation

108
Rationale for Developing Local Benchmarks
  • Silberglitt (2008)
  • "Districts should refrain from simply adopting a set of national target scores, as these scores may or may not be relevant to the high-stakes outcomes for which their students must be adequately prepared." (p. 1871)
  • "By linking local assessments to high-stakes tests, users are able to establish target scores on these local assessments, scores that divide students between those who are likely and those who are unlikely to achieve success on the high-stakes test." (p. 1870)

109
Rationale for Developing Local Benchmarks
  • Discrepancy across states in terms of the percentile ranks, on a nationally administered assessment, necessary to predict successful state test performance (Kingsbury et al., 2004)
  • "Using cut scores based on the probability of success on an upcoming state-mandated assessment might be a useful alternative to normative data for making these decisions." (Silberglitt & Hintze, 2005)
  • Can be used to separate students into groups in an RtII framework (Silberglitt, 2008)

110
Rationale for Developing Local Benchmarks
  • Useful in calculating discrepancy in level (Burns, 2008)
  • Represent the school population where the students are getting their education (Stewart & Silberglitt, 2008)
  • Teachers often use comparisons between students in their classroom; this helps to objectify those decisions (Stewart & Silberglitt, 2008)

111
Rationale for Developing Local Benchmarks
  • How accurately does it predict proficiency level
    in Third Grade?

112
Rationale for Developing Local Benchmarks
  • Percentage of students in Third Grade predicted to be successful on the PSSA who were actually successful

113
Rationale for Developing Local Benchmarks
  • Percentage of Third Grade students predicted to
    be unsuccessful who actually failed to meet
    proficiency on the PSSA

114
Getting Started
  • Collect 3 or more years of student CBM and PSSA data
  • Match the data for each student
  • Use the data extract and data farming features offered through the PSSA / DIBELS / AIMSweb websites
  • Download with student ID numbers
  • If you have a data warehouse…then use your special magic…lucky!

115
Getting Started
  • Reliable and valid data
  • Linear / highly correlated data
  • Gather data with integrity
  • Do not teach to the test
  • All students should be included in the norm group
  • Be cautious of cohort effects

116
Getting Started
  • PSSA Cut Scores
  • http://www.portal.state.pa.us/portal/server.pt/community/cut_scores/7441
  • Use the lower-end score for Proficiency
  • Download the data set from
  • http://sites.google.com/site/rateofimprovement/

117
Wisdom from Teachers (especially from our reading specialists Tina and Kristin!)
  • Children do not equal dots!
  • They are not numbers or data points!
  • Having said that…

118
Fun with Excel!
119
Fun with Algebra!
  • Matt Burns, University of Minnesota
  • X = (Y - a) / b
  • Y = Proficiency Score on the PSSA
  • a = Intercept
  • b = Slope
  • X = Local Benchmark Score
  • (Worked in the sketch below.)
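
A sketch of the formula in Python, using the sample slope and intercept from the "More Fun with Algebra!" slide below and a hypothetical PSSA proficiency cut score of 1235:

```python
def local_benchmark(pssa_cut, intercept, slope):
    """X = (Y - a) / b: the CBM score that predicts PSSA proficiency."""
    return (pssa_cut - intercept) / slope

# Hypothetical values: Y = 1235 (proficiency cut), a = 1108, b = 2.56
print(f"local ORF benchmark: {local_benchmark(1235, 1108, 2.56):.1f} WCPM")
# prints roughly 49.6
```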

120
(No Transcript)
121
(No Transcript)
122
More Fun with Algebra!
  • Predict a student's proficiency score
  • Rearrange the equation:
  • X = (Y - a) / b
  • Y = (X × b) + a
  • Y = Predicted PSSA Score
  • Use with Caution!
  • Student: 93 WCPM in the fall
  • Data sample: Slope = 2.56, Intercept = 1108
  • Y = (93 × 2.56) + 1108
  • Y ≈ 1346

123
Local Benchmark Applications
  • Northern Lebanon School District Local Benchmarks

124
Local Benchmark Applications
  • For those who like the DIBELS graphs

125
Local Benchmark Applications
126
Diagnostic Accuracy
  • Sensitivity
  • Of all the students who failed the PSSA, what percentage were accurately predicted to fail based on their ORF score?
  • Specificity
  • Of all of the students who passed the PSSA, what percentage were accurately predicted to pass based on their ORF score?
  • Negative Predictive Power
  • Percentage of students predicted to be successful on the PSSA who were actually successful
  • Positive Predictive Power
  • Percentage of students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA
  • (Computed in the sketch below.)
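
All four indices fall out of a 2x2 table of predictions versus outcomes. A minimal sketch with made-up counts ("positive" here means predicted to fail):

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Indices from a 2x2 table where 'positive' = predicted to fail.
    tp: predicted fail, did fail    fp: predicted fail, passed
    tn: predicted pass, passed      fn: predicted pass, failed"""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive power": tp / (tp + fp),
        "negative predictive power": tn / (tn + fn),
    }

print(diagnostic_accuracy(tp=40, fp=15, tn=120, fn=10))
```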

127
(No Transcript)
128
(No Transcript)
129
Local Benchmarks - Method 2
  • Fun with SPSS!
  • Logistic Regression & ROC Curves
  • More accurate
  • Helps to balance Sensitivity, Specificity, and Negative & Positive Predictive Power
  • For more information, see Best Practices in Using Technology for Data-Based Decision Making (Silberglitt, 2008)
  • (A rough Python analogue is sketched below.)
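
The deck does this step in SPSS; as a rough analogue only, here is the logistic-regression-plus-ROC idea in Python with scikit-learn, on a tiny fabricated-for-illustration data set (see Silberglitt, 2008, for the actual procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Hypothetical matched records: fall ORF score -> PSSA proficient (1) or not (0)
orf = np.array([[35], [52], [60], [71], [77], [84], [90], [98], [105], [120]])
proficient = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(orf, proficient)
risk_scores = model.predict_proba(orf)[:, 1]

# Each ROC threshold trades sensitivity against specificity;
# Youden's J (tpr - fpr) is one common way to pick a balanced cut.
fpr, tpr, thresholds = roc_curve(proficient, risk_scores)
best = np.argmax(tpr - fpr)
print(f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```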

130
If Local Criteria are Not an Option
  • Use norms that accompany the measure (DIBELS,
    AIMSweb, etc.).
  • Use national norms.

131
Making Decisions: Best Practice
  • Research has yet to establish a blueprint for grounding student RoI data.
  • At this point, teams should consider multiple comparisons when planning and making decisions.

132
Making Decisions: Lessons From the Field
  • When tracking on grade level, consider an RoI that is 100% of expected growth as a minimum requirement; consider an RoI at or above the needed rate as optimal.
  • So, 100% of expected and on par with needed become the limits of the range within which a student should be achieving.

133
Is there an easy way to do all of this?
134
(No Transcript)
135
(No Transcript)
136
Access to Spreadsheet Templates
  • http://sites.google.com/site/rateofimprovement/home
  • Click on Charts and Graphs.
  • Update dates and benchmarks.
  • Enter names and benchmark/progress monitoring
    data.

137
What about Students not on Grade Level?
138
Determining Instructional Level
  • Independent / Instructional / Frustrational
  • Instructional level: often between the 25th percentile and the 40th or 50th percentile.
  • Frustrational level: below the 25th percentile.
  • AIMSweb Survey Level Assessment (SLA).

139
Setting Goals off of Grade Level
  • 100% of expected growth is not enough.
  • Needed growth only gets the student to the instructional-level benchmark, not grade level.
  • Risk of not being ambitious enough.
  • Plenty of ideas, but limited research regarding best practice in goal setting off of grade level.

140
Possible Solution (A)
  • Weekly probe at instructional level; compare to expected and needed growth rates at instructional level.
  • Ambitious goal: 200% of expected RoI

141
(No Transcript)
142
Possible Solution (B)
  • Weekly probe at instructional level as a sensitive indicator of growth.
  • Monthly probes (give 3, not just 1) at grade level to compute RoI.
  • Goal based on grade-level growth (more than 100% of expected).

143
Step 3: Interpreting Growth
144
What do we do when we do not get the growth we
want?
  • When to make a change in instruction and
    intervention?
  • When to consider SLD?

145
When to make a change in instruction and
intervention?
  • Enough data points (6 to 10)?
  • Less than 100% of expected growth.
  • Not on track to make the benchmark (needed growth).
  • Not on track to reach the individual goal.

146
When to consider SLD?
  • Continued inadequate response despite:
  • Fidelity with Tier I instruction and Tier II/III intervention.
  • Multiple attempts at intervention.
  • An individualized problem-solving approach.
  • Evidence of dual discrepancy.

147
(No Transcript)
148
Three Levels of Examples
  • Whole Class
  • Small Group
  • Individual Student
  • - Academic Data
  • - Behavior Data

149
Whole Class Example
150
3rd Grade Math Whole Class
  • Who's responding?
  • Effective math instruction?
  • Who needs more?
  • N = 19
  • 4 with > 100% growth
  • 15 with < 100% growth
  • 9 with negative growth

151
Small Group Example
152
Intervention Group
  • Intervention working for how many?
  • Can we assume fidelity of intervention based on
    results?
  • Who needs more?

153
Individual Kid Example
154
Individual Kid
  • Making growth?
  • How much? (65% of expected growth)
  • Atypical growth across the year (last 3 data points).
  • Continue? Make a change? Need more data?

155
RoI and Behavior?

156
(No Transcript)
157
Step 4: Figure out how to fit Best Practice into Public Education
158
Things to Consider
  • Who is At-Risk and needs progress monitoring?
  • Who will collect, score, and enter the data?
  • Who will monitor student growth, when, and how
    often?
  • What changes should be made to instruction & intervention?
  • What about monitoring off of grade level?

159
Who is At-Risk and needs progress monitoring?
  • Below level on universal screening

Entering 4th Grade Example
Student     DORF (110)   ISIP TRWM (55)   4Sight (1235)   PSSA (1235)
Student A   115          58               1255            1232
Student B   85           48               1216            1126
Student C   72           35               1056            1048
160
Who will collect, score, and enter the data?
  • Using MBSP for math, teachers can administer probes to the whole class.
  • DORF probes must be administered one-on-one, and creativity pays off (train and use art, music, library, etc. specialists).
  • Schedule progress monitoring of math and reading every other week.

161
[Table: two-week rotation showing, for each grade (1st-5th), which week reading is probed and which week math is probed]
162
Who will monitor student growth, when, and how
often?
  • Best Practices in Data-Analysis Teaming (Kovaleski & Pedersen, 2008)
  • Chambersburg Area School District Elementary Response to Intervention Manual (McCrea et al., 2008)
  • Derry Township School District Response to Intervention Model (http://www.hershey.k12.pa.us/56039310111408/lib/56039310111408/_files/Microsoft_Word_-_Response_to_Intervention_Overview_of_Hershey_Elementary_Model.pdf)

163
What changes should be made to instruction & intervention?
  • Ensure treatment fidelity!
  • Increase instructional time (active and engaged)
  • Decrease group size
  • Gather additional diagnostic information
  • Change the intervention

164
Final Exam
  • Student data: 27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51, --, --, 56, 51, 52, --, 57.
  • Benchmark data: BOY = 40, MOY = 68.
  • What is the student's RoI?
  • How does the RoI compare to the expected and needed RoIs?
  • What steps would your team take next?
  • What if the benchmarks were 68 and 90 instead?
  • (A script for checking your answers follows.)
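
For checking answers afterward, a sketch that fits the OLS trendline to the exam data (missed weeks as NaN) and computes expected and needed rates the way the earlier slides did; under these assumptions the RoI comes out to roughly 1.98:

```python
import numpy as np

weeks = np.arange(1, 19, dtype=float)
nan = np.nan
scores = np.array([27, 29, 26, 34, 27, 32, 39, 45, 43,
                   49, 51, nan, nan, 56, 51, 52, nan, 57])

mask = ~np.isnan(scores)
roi = np.polyfit(weeks[mask], scores[mask], deg=1)[0]  # student RoI
expected = (68 - 40) / 18         # benchmark-to-benchmark growth
needed = (68 - scores[0]) / 18    # week-1 score up to the MOY benchmark

print(f"RoI {roi:.2f} | expected {expected:.2f} | needed {needed:.2f}")
```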

165
Questions? Comments!
166
The RoI Web Site
  • http://sites.google.com/site/rateofimprovement/
  • Download PowerPoints, handouts, Excel graphs, charts, articles, etc.
  • Caitlin Flinn: CaitlinFlinn@hotmail.com
  • Andy McCrea: andymccrea70@gmail.com
  • Matt Ferchalk: mferchalk@norleb.k12.pa.us

167
Resources
  • www.interventioncentral.com
  • www.aimsweb.com
  • http://dibels.uoregon.edu
  • www.nasponline.org

168
Resources
  • www.fcrr.org (Florida Center for Reading Research)
  • http://ies.ed.gov/ncee/wwc/ (What Works Clearinghouse)
  • http://www.rti4success.org (National Center on RtI)

169
References
  • Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38(2), 266-283.
  • Ardoin, S. P., & Christ, T. J. (2008). Evaluating curriculum-based measurement slope estimates using triannual universal screenings. School Psychology Review, 37(1), 109-125.

170
References
  • Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35(1), 128-133.
  • Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

171
References
  • Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
  • Flinn, C. S. (2008). Graphing rate of improvement for individual students. InSight, 28(3), 10-12.

172
References
  • Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204-219.
  • Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

173
References
  • Gall, M. D., & Gall, J. P. (2007). Educational research: An introduction (8th ed.). New York: Pearson.
  • Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.

174
References
  • Karwowski, W. (2006). International encyclopedia of ergonomics and human factors. Boca Raton, FL: Taylor & Francis Group, LLC.
  • Shapiro, E. S. (2008). Best practices in setting progress monitoring goals for academic skill improvement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (Vol. 2, pp. 141-157). Bethesda, MD: National Association of School Psychologists.

175
References
  • Vogel, D. R., Dickson, G. W., & Lehman, J. A. (1990). Persuasion and the role of visual presentation support: The UM/3M study. In M. Antonoff (Ed.), Presentations that persuade. Personal Computing, 14.

176
References
  • Burns, M. (2008, October). Data-based problem analysis and interventions within RTI: Isn't that what school psychology is all about? Paper presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA.
  • Ferchalk, M. R., Richardson, F., & Cogan-Ferchalk, J. R. (2010, October). Using oral reading fluency data to create an accurate prediction model for PSSA performance. Poster session presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA.
  • Hintze, J., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34(3), 372-386.
  • McGlinchey, M., & Hixson, M. (2004). Using curriculum-based measurement to predict performance on state assessments in reading. School Psychology Review, 33(2), 193-203.
  • Shapiro, E., Keller, M., Lutz, J., Santoro, L., & Hintze, J. (2006). Curriculum-based measures and performance on state assessment and standardized tests: Reading and math performance in Pennsylvania. Journal of Psychoeducational Assessment, 24(1), 19-35.

177
References
  • Silberglitt, B. (2008). Best practices in using technology for data-based decision making. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.
  • Silberglitt, B., Burns, M., Madyun, N., & Lail, K. (2006). Relationship of reading fluency assessment data with state accountability test scores: A longitudinal comparison of grade levels. Psychology in the Schools, 43(5), 527-535.
  • Stage, S., & Jacobsen, M. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30(3), 407.
  • Stewart, L. H., & Silberglitt, B. (2008). Best practices in developing academic local norms. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.