Progress Monitoring: Overview, Part 1

This chapter looks at the big picture and scope of progress monitoring. The first section examines progress monitoring of student learning in the context of assessing literacy skills, an example of tier 2 (secondary level) instruction. The second section examines tier 3 intensive interventions through the research-based process of data-based individualization (DBI).

Key Terms

  • Progress monitoring – an ongoing series of measures used to gauge how a student is responding to instruction. Is the student making adequate progress? Does the instruction need to be adjusted?
  • Baseline – the student's current knowledge and skill level, measured before instruction or intervention begins.
  • Screeners – assessments used to determine the starting point for instruction and to identify the need for further assessment.
  • Curriculum-based measurement (CBM) – an assessment tool used for periodic progress monitoring, which can range from weekly to monthly to quarterly (e.g., AIMSweb, DIBELS Next).
  • Data points – progress monitoring data collected from a probe or learning activity.
  • Slope – indicates the student's rate of growth (or lack of it). The student's probe scores are graphed and a trend line is calculated over a period of time (see the sketch after this list). See Slope Calculator.
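
A slope of this kind is typically estimated by fitting a trend line to the graphed probe scores. Below is a minimal sketch of that calculation in Python, using an ordinary least-squares fit over equally spaced weekly probes; the scores are hypothetical and stand in for real CBM probe data.

```python
# Minimal sketch: estimating a progress-monitoring slope from weekly probe
# scores with an ordinary least-squares trend line. The scores below are
# hypothetical, not taken from any real student or instrument.

def slope(scores):
    """Return the growth rate (score units per week) for weekly probes."""
    n = len(scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    rise = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    run = sum((x - mean_x) ** 2 for x in weeks)
    return rise / run

weekly_scores = [18, 19, 21, 22, 24, 25]  # hypothetical words correct per minute
print(f"Estimated growth: {slope(weekly_scores):.2f} words per week")
```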

Steps to Success: Crossing the Bridge Between Literacy Research and Practice. Authored by: Kristen A. Munger, Ed. Provided by: Open SUNY Textbooks. Located at: http://textbooks.opensuny.org/steps-to-success/. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike.

Progress-Monitoring Literacy Assessments

To monitor a student’s progress in literacy, assessments are needed that actually measure growth. Rather than just taking a snapshot of the student’s achievement at a single point in time, progress-monitoring assessments provide a baseline (i.e., the starting point) of a student’s achievement, along with periodic reassessment as he or she is progressing toward learning outcomes. Such outcomes might include achieving a benchmark score of correctly reading 52 words per minute on oral reading fluency passages or a goal of learning to “ask and answer key details in a text” (CCSS.ELA-Literacy.RL.1.2) when prompted, with 85% accuracy. The first outcome of correctly reading 52 words per minute would likely be measured using progress-monitoring assessments, such as DIBELS Next and AIMSweb. These screeners are not only designed to measure the extent to which students are at risk for future literacy-related problems at the beginning of the school year, but also to monitor changes in progress over time, sometimes as often as every one or two weeks, depending on individual student factors. The second outcome of being able to “ask and answer key details in a text” could be monitored over time using assessments such as state tests or responses on a qualitative reading inventory. Being able to work with key details in a text could also be informally assessed by observing students engaged in classroom activities where this task is practiced.

Unlike assessments that are completed only one time, progress-monitoring assessments such as DIBELS Next and AIMSweb feature multiple, equivalent versions of the same tasks, such as having 20 oral reading fluency passages that can be used for reassessments. Using different but equivalent passages prevents artificial increases in scores that would result from students rereading the same passage. Progress-monitoring assessments can be contrasted with diagnostic assessments, which are not designed to be administered frequently. Administering the same subtests repeatedly would not be an effective way to monitor progress. Some diagnostic tests have two equivalent versions of subtests to monitor progress infrequently—perhaps on a yearly basis—but they are simply not designed for frequent reassessments. This limitation of diagnostic assessments is one reason why screeners like DIBELS Next and AIMSweb are so useful for determining how students respond to intervention and why diagnostic tests are often reserved for making other educational decisions, such as whether a student may have an educational disability.

Progress-monitoring assessments have transformed how schools determine how a student is responding to intervention. For example, consider the hypothetical example of Jaime’s progress-monitoring assessment results in second grade, shown in Figure 2. Jaime was given oral reading fluency passages from a universal literacy screener, and then his progress was monitored to determine his response to a small group literacy intervention started in mid-October. Data points show the number of words Jaime read correctly on each of the one-minute reading passages. Notice how at the beginning of the school year, his baseline scores were extremely low, and when compared to the beginning of the year second grade benchmark (Dynamic Measurement Group, 2010) of 52 words per minute (Good & Kaminski, 2011), they signaled he was “at risk” of not reaching later benchmarks without receiving intensive intervention. Based on Jaime’s baseline scores, intervention team members decided that he should receive a research-based literacy intervention to help him read words more easily, so that his oral reading fluency would increase at least one word per week. This learning goal is represented by the “target slope” seen in Figure 2. During the intervention phase, progress-monitoring data points show that Jaime began making improvements toward this goal, and the line labeled “slope during intervention” shows that he was gaining at a rate slightly faster than his one word per week goal.

Figure 2. Progress-monitoring graph of response to a reading intervention.

When looking at Jaime’s baseline data, notice how the data points form a plateau. If his progress continued at this same rate, by the end of the school year he would be even farther behind his peers and at even greater risk for future reading problems. When interpreting the graph in Figure 2, it becomes clear that intensive reading intervention was needed. Notice how, after the intervention began, Jaime’s growth began to climb steeply. Although he appeared to be responding positively to intervention, students whose reading ability is progressing adequately should be reading approximately 90 words correctly per minute by the end of second grade (Good & Kaminski, 2011). Based on this information, Jaime is not likely to reach 90 words correctly per minute by the end of second grade and will probably only reach the benchmark expected for a student at the beginning of second grade. These assessment data suggest that Jaime’s intervention should be intensified for the remainder of second grade to accelerate his progress further. It is also likely that Jaime will need to continue receiving intervention into third grade, and progress monitoring can determine, along with other assessment information, when his oral reading fluency improves to the point where intervention may be changed, reduced, or even discontinued. You may wonder how the intervention team would determine whether Jaime is progressing at an adequate pace in third grade. Team members would continue to monitor Jaime’s progress and check that his growth line shows he will meet the benchmark at the end of third grade (i.e., correctly reading approximately 100 words per minute; Good & Kaminski, 2011). If his slope shows a lack of adequate progress, his teachers can revisit the need for intervention to ensure that Jaime does not fall behind again.
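
The projection reasoning above can be made concrete with a short calculation: given a baseline score and an observed slope, will the student reach the end-of-year benchmark? The sketch below uses the 90-words-per-minute benchmark cited above (Good & Kaminski, 2011); the baseline, slope, and weeks-remaining values are hypothetical, not Jaime's actual data.

```python
# Minimal sketch of the projection reasoning described above: given a
# baseline score and an observed growth rate (slope), will the student
# reach the end-of-year benchmark? The 90-words-per-minute benchmark
# comes from the text (Good & Kaminski, 2011); the baseline, slope, and
# weeks-remaining values are hypothetical.

def projected_score(baseline, words_per_week, weeks_remaining):
    """Extend the observed trend line to the end of the school year."""
    return baseline + words_per_week * weeks_remaining

baseline = 20          # hypothetical words correct per minute at baseline
observed_slope = 1.2   # hypothetical: slightly faster than the 1 word/week goal
weeks_left = 28        # hypothetical number of school weeks remaining

projection = projected_score(baseline, observed_slope, weeks_left)
print(f"Projected end-of-year score: {projection:.0f} words per minute")
if projection >= 90:
    print("On track for the 90 words-per-minute benchmark")
else:
    print("Below the benchmark: consider intensifying the intervention")
```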

Some schools monitor their students’ progress using computer-adapted assessments, which involve students responding to test items delivered on a computer. Computer-adapted assessments are designed to deliver specific test items to students and then adapt the number and difficulty of items administered according to how students respond (Mitchell, Truckenmiller, & Petscher, 2015). Computer-adapted assessments are increasing in popularity in schools, in part because they do not require much time or effort to administer and score, although they do require schools to have an adequate technology infrastructure. The reasoning behind using these assessments is similar to that behind other literacy screeners and progress-monitoring assessments: to provide effective instruction and intervention to meet all students’ needs (Mitchell et al., 2015).
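
To illustrate the adaptive idea in the paragraph above, here is a deliberately simplified sketch in which item difficulty moves up after a correct response and down after an incorrect one. Real computer-adapted instruments use far more sophisticated psychometric models (e.g., item response theory); every value here is hypothetical.

```python
# Minimal sketch of the adaptive idea behind computer-adapted assessments:
# item difficulty moves up after a correct response and down after an
# incorrect one. Real instruments use far more sophisticated psychometric
# models (e.g., item response theory); all values here are hypothetical.

def next_difficulty(current, correct, step=1, lowest=1, highest=10):
    """Adjust item difficulty on a simple 1-10 scale after each response."""
    if correct:
        return min(current + step, highest)
    return max(current - step, lowest)

difficulty = 5  # hypothetical starting difficulty
for response in [True, True, False, True]:  # hypothetical student responses
    difficulty = next_difficulty(difficulty, response)
    print(f"Next item difficulty: {difficulty}")
```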

Universal Literacy Screeners: Links to Additional Information
Table 1. Examples of Commonly Used Universal Literacy Screeners
AIMSweb http://www.aimsweb.com/
Dynamic Indicators of Basic Early Literacy Skills—Next https://dibels.uoregon.edu/
STAR Reading http://www.renaissance.com/assess
Phonological Awareness Literacy Screening (PALS) https://pals.virginia.edu/
See the Academic Progress Monitoring Tools Chart from the National Center on Intensive Intervention. The tools chart presents information about progress monitoring tools across three ratings: Performance Level Standards (Reliability), Growth Standards (Validity), and Usability (Bias Analysis Conducted).
See examples of Progress Monitoring Handouts in Reading and Mathematics, from the National Center on Intensive Intervention.

Although many literacy screening and progress-monitoring assessment scores have been shown to be well-correlated with a variety of measures of reading comprehension (see, for example, Goffreda & DiPerna, 2010) and serve as reasonably good indicators of which students are at risk for reading difficulties, a persistent problem with these assessments is that they provide little guidance to teachers about what kind of literacy instruction and/or intervention a student actually needs. A student who scores low at baseline and makes inadequate progress on oral reading fluency tasks may need an intervention designed to increase reading fluency, but there is also a chance that the student lacks the ability to decode words and really needs a decoding intervention (Murray, Munger, & Clonan, 2012). Or it could be that the student does not know the meaning of many vocabulary words and needs to build background knowledge to read fluently (Adams, 2010-2011), which would require the use of different assessment procedures specifically designed to assess and monitor progress related to these skills. Even more vexing is when low oral reading fluency scores are caused by multiple, intermingling factors that need to be identified before intervention begins. When the problem is more complex, more specialized assessments are needed to disentangle the factors contributing to it.

A final note related to progress-monitoring procedures is the emergence of studies suggesting that there may be better ways to measure students’ progress on instruments such as DIBELS Next than using slope (Good, Powell-Smith, & Dewey, 2015), which was depicted in the example using Jaime’s data. In a recent conference presentation, Good (2015) argued that the slope of a student’s progress may be too inconsistent to serve as a basis for monitoring and adjusting instruction, and he suggested a new (and somewhat mathematically complex) alternative using an index called a student growth percentile. A student growth percentile compares the rate at which a student’s achievement is improving with how other students with the same baseline score are improving. For example, a student reading 10 correct words per minute on an oral reading fluency measure whose growth is at the 5th percentile is improving much more slowly than the other children who also started out reading only 10 words correctly per minute. In this case, a growth percentile of five means that the student is progressing only as well as or better than five percent of peers who started at the same score, which also means that the current instruction is not meeting the student’s needs. Preliminary research shows some promise in using growth percentiles to measure progress as an alternative to slope, and teachers should be on the lookout for more research related to improving ways to monitor student progress.
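
The core of the growth-percentile idea can be sketched in a few lines: compare the student's gain with the gains of peers who started at the same baseline score. The peer gains and student gain below are hypothetical, chosen to reproduce the 5th-percentile example above; actual implementations (Good, 2015) are considerably more complex.

```python
# Minimal sketch of a student growth percentile as described above: the
# percentage of peers with the same baseline score whose growth the
# student matched or exceeded. All gains below are hypothetical.

def growth_percentile(student_gain, peer_gains):
    """Percent of same-baseline peers growing no faster than this student."""
    matched_or_exceeded = sum(1 for g in peer_gains if g <= student_gain)
    return 100 * matched_or_exceeded / len(peer_gains)

# Gains of 20 peers who also started at 10 words correct per minute.
peer_gains = [2, 5, 6, 8, 9, 10, 11, 12, 13, 15,
              16, 18, 20, 22, 25, 28, 30, 33, 35, 40]
student_gain = 2  # hypothetical: the student gained 2 words per minute

print(f"Growth percentile: {growth_percentile(student_gain, peer_gains):.0f}")
```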

Linking Assessment to Intervention

How can teachers figure out the details of what a student needs in terms of intervention? They would likely use a variety of informal and formal assessment techniques to determine the student’s strengths and needs. The situation might require the use of diagnostic assessments, a reading or writing inventory, the use of observations to determine whether the student is engaged during instruction, and/or the use of assessments to better understand the student’s problem-solving and other thinking skills. It may be a combination of assessment techniques that are needed to match research-based interventions to the student’s needs.

You may be starting to recognize some overlap among different types of assessments across categories. For example, state tests are usually both formal and summative. Literacy screeners and progress-monitoring assessments are often formal and formative. And some assessments, such as portfolio assessments, have many overlapping qualities across the various assessment categories (e.g., portfolios can be used formatively to guide teaching and used summatively to determine if students met an academic outcome).


Tools to Make Your Own Progress Monitoring Assessments/Probes

Response to Intervention (RTI) resources from Intervention Central: https://www.interventioncentral.org/index.php/cbm-warehouse. Includes tools to make your own progress monitoring assessments, such as the Dolch Wordlist Fluency Generator, Early Math Fluency Generator, Letter Name Fluency Generator, Math Worksheet Generator, Reading Fluency Generator, Writing Probe Generator, and more.


Data-Based Individualization (DBI) (tier 3)

One solution for supporting students with severe and persistent learning difficulties is the research-based process of data-based individualization (DBI).

DBI, an intensive intervention (tier 3 instruction), is provided in a smaller group setting of one to three students. This instruction is provided in addition to primary instruction in the general education classroom.

How can Data-Based Individualization (DBI) help special educators improve outcomes for students with disabilities?

In this video, Amy McKenna, a special educator in the Bristol Warren Regional School District, shares her experience with data-based individualization (DBI). Amy discusses how she learned about DBI, the impact the DBI process had on the students she worked with, and how DBI helped change her practice as a special educator.

National Center on Intensive Intervention. (2018, Dec. 13). How can DBI help special educators improve outcomes for students with disabilities? [Video file]. Retrieved from https://youtu.be/0v-IuZ5KdUw (8:08 minutes)

These two IRIS modules on intensive interventions and data-based individualization were developed in collaboration with the National Center on Intensive Intervention at American Institutes for Research and the CEEDAR Center. They are designed for individuals who will be implementing intensive interventions (e.g., special education teachers, reading specialists, interventionists).
The two modules focus on students who are not responding to tier 2 interventions, which are typically conducted in small groups in addition to general education classroom instruction.

Resource: The IRIS Center. (2015). Intensive intervention (part 1): Using data-based individualization to intensify instruction. Retrieved from https://iris.peabody.vanderbilt.edu/module/dbi1/

Learning Objectives

  • Understand the purpose of providing intensive intervention
  • Be familiar with the data-based individualization process
  • Understand how to intensify and individualize academic interventions
  • Understand the difference between quantitative and qualitative adaptations


Resource: The IRIS Center. (2015). Intensive intervention (part 2): Collecting and analyzing data for data-based individualization. Retrieved from https://iris.peabody.vanderbilt.edu/dbi2/

Learning Objectives

  • Be familiar with the data-based individualization process
  • Understand how to make data-based instructional decisions
  • Be familiar with processes for collecting and analyzing progress monitoring data and diagnostic assessment data
  • Understand how to use these data to make instructional adaptations



Additional Resources

Brown, J., Skow, K., & the IRIS Center. (2009). RTI: Progress monitoring. Retrieved from http://iris.peabody.vanderbilt.edu/wp-content/uploads/pdf_case_studies/ics_rtipm.pdf

Common Core State Standards (CCSS) IEP Goal Probes and online assessment tools

National Center on Intensive Intervention. (n.d.). Progress monitor. Retrieved from https://intensiveintervention.org/intensive-intervention/progress-monitor

National Center on Response to Intervention. (n.d.). Using progress monitoring data for decision making. Retrieved from https://rti4success.org/sites/default/files/Transcript%20Progress%20Monitoring%20for%20DBDM_2.pdf

The IRIS Center. (2019). Progress monitoring: Mathematics. Retrieved from https://iris.peabody.vanderbilt.edu/module/pmm/

The IRIS Center. (2005, Rev. 2019). Progress monitoring: Reading. Retrieved from https://iris.peabody.vanderbilt.edu/module/pmr/