Evaluating the Effectiveness of Interventions

Fundamentals of Intervention Evaluation

An important part of an MTSS framework is providing interventions to students who struggle. Our imperative is to close those gaps so that students become successful readers (and mathematicians, and so on). To do that, however, the interventions must be effective. There are two ways we need to attend to this: 

First, determine whether the intervention is working for the individual child, which means confirming that the gap is narrowing and the student is on track to meet or exceed the benchmarks used to identify risk. Interventions need to match the student’s learning needs. Progress monitoring data are used to keep a close eye on the intervention so that mid-course corrections can be made if it isn’t having the desired effect. See also: Review an Individual Student Intervention and Review Group Interventions

Second, periodically look at system-level data and make judgments about the general effectiveness of interventions across multiple students. This helps identify strong interventions that get results and should be used more, as well as interventions that aren’t working well and may need to be adjusted or replaced with more effective options. While essential, this part of the process can be a sensitive subject and has the potential to make the intervention provider feel criticized, which is not the intent at all. It’s important to approach the topic with a team mindset and stress the urgency of helping students close their gaps and become successful readers. Ask, “Is it possible that this intervention isn’t the best fit?” These considerations are similar to the Learner Steps in the Intervention System Guide. Additional questions to consider and related tools can be found in the current version of the Intervention System Guide (5.1).

Using Progress Monitoring Data to Evaluate Interventions

Progress monitoring is all about monitoring progress. That may seem a foolish statement, but collecting the weekly data is not the important thing; it’s USING that data to keep an eye on the student’s growth that makes a difference. The person responsible for delivering the intervention or monitoring its delivery should check the data regularly, focusing on the student’s rate of progress relative to the goal line. Is the gap closing? Is it closing fast enough to reach the benchmark by the end of the year? Is the gap constant, with the student growing but not quickly enough? Or, worst-case scenario, is the gap widening over time? Researchers don’t fully agree on how many data points a graph needs before a trend can be trusted, but they do agree that when a student’s trend is not heading in the right direction, it’s a powerful sign that something needs to change. If the trend shows the gap widening, and there are at least 4-6 progress monitoring (PM) data points showing that trend, that’s a strong indication of a problem with the current approach.
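To make the question of whether the gap is closing fast enough more concrete, here is a minimal sketch in Python (not part of FastBridge or Student Success; the scores, weeks, and thresholds are illustrative assumptions) that compares the slope of a student’s weekly PM scores with the slope of the goal line:

```python
# Illustrative sketch only: compare a student's observed weekly growth with
# the growth needed to reach the end-of-year benchmark. All numbers are made up.

def slope(weeks, scores):
    """Ordinary least-squares slope: points gained per week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

def classify_trend(weeks, scores, start_score, eoy_benchmark, total_weeks):
    goal_slope = (eoy_benchmark - start_score) / total_weeks  # growth needed per week
    if len(scores) < 4:           # wait for roughly 4-6 data points before judging
        return "too few data points to judge"
    observed = slope(weeks, scores)
    if observed >= goal_slope:
        return "on track: gap is closing fast enough"
    if observed > 0:
        return "improving, but not fast enough: gap constant or closing too slowly"
    return "losing ground: gap is widening"

# Example: start score 20, end-of-year benchmark 70, 36-week year, six weekly scores
print(classify_trend([1, 2, 3, 4, 5, 6], [21, 22, 22, 24, 23, 25], 20, 70, 36))
```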

Below is an example of a year-long progress monitoring graph for a student. The dotted line represents the goal line from the start score to the end-of-year benchmark, the blue line represents the student’s trend over time, and the vertical purple lines mark the interventions. In this case, while the student is improving, the rate of improvement is far from what is needed to reach the end-of-year goal. As best practice, the team monitoring this student’s intervention would have altered or replaced the intervention around January, when the trend was flattening out. 

In the next example, the student is making strong progress and nearly meeting the goal line. A new intervention was entered around the end of January and may be contributing to the higher scores observed more recently. The student’s trend line is likely to come very close to intersecting the goal by the end of the year. 

In the example below, a Kindergarten student was monitored on letter sounds from early in the year, and the trend shows the student clearly falling farther behind. In best practice, the person doing the monitoring would have acted on the early evidence that the child wasn’t making growth and implemented an intervention. In this example, that didn’t happen until the winter screening required it, and three months of potential intervention time were lost. In addition, the continued monitoring data show that the child is still not making the growth needed to be successful. 

We’ve all heard the old saw that a definition of insanity is continuing to do the same thing and expecting different results. The example above demonstrates why staying the course for months with an intervention that is clearly not improving a child’s skills is not the way to help that child succeed.

The Importance of Accurate Goals and Start Scores

It is important to have goals and start scores set correctly. An incorrect start score or goal in FastBridge will cause inaccurate data to populate the year-long progress monitoring and intervention graph in Student Success. If the start score defaults to zero, go to Edit Group and enter the first PM result as the start score. The updated start score will transfer from FAST into Student Success overnight. We also recommend editing the end-of-year goal to match the end-of-year benchmark. This provides an accurate display of the gain the student needs to narrow the gap. See: How to Ensure Intervention and PM Graph Accuracy
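The arithmetic behind the gain needed to narrow the gap is simple division; the short sketch below uses made-up scores and week counts, not actual FastBridge values:

```python
# Illustrative arithmetic only; the scores and week count are made-up examples.
start_score = 18      # first PM result, entered as the start score
eoy_benchmark = 70    # end-of-year benchmark, used as the goal
weeks_remaining = 30  # instructional weeks left in the school year

needed_gain = eoy_benchmark - start_score    # total points the student must gain
needed_rate = needed_gain / weeks_remaining  # points per week needed to close the gap

print(f"Needed gain: {needed_gain} points, about {needed_rate:.1f} points per week")
```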

Evaluating Intervention Effectiveness at the System Level

If progress monitoring goals are set correctly, data are regularly collected, and teachers use interventions set up in the district library, there are reporting tools that can help identify which interventions get better results and which may be less effective. 

The intervention summary view on the Students tab gathers data for all students in each intervention and aggregates each child’s rate of improvement (On track) and whether the most recent progress update falls within the expected monitoring frequency on the plan (Up to date). In the example shown here, looking only at In-Progress interventions, 34% of plans are on track, while only 5% are up to date. 

Expand the monitoring view by clicking the <> arrows in the box showing the percentage of plans up to date. The expanded details show that 72% of students have updates due, meaning there is not yet a monitoring score for the current week, and another 23% are well overdue (more than one week behind in collecting PM data). The more overdue the progress monitoring, the less meaningful the on-track statistics will be, so use caution when interpreting results. See the Weekly Progress Monitoring Appears Low article to explore issues with progress monitoring completion.
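For clarity, the sketch below shows one way the up-to-date, update-due, and well-overdue categories described above could be computed from the date of the most recent PM score; the weekly cutoffs and function names are assumptions, not the platform’s actual logic:

```python
# Assumes weekly monitoring; cutoffs are illustrative, not the platform's actual rules.
from datetime import date

def monitoring_status(last_pm_date: date, today: date) -> str:
    days_since = (today - last_pm_date).days
    if days_since <= 7:
        return "up to date"     # a monitoring score exists within the current week
    if days_since <= 14:
        return "update due"     # no monitoring score yet for the current week
    return "well overdue"       # more than one week behind in collecting PM data

print(monitoring_status(date(2024, 1, 5), date(2024, 1, 26)))  # -> well overdue
```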

Next, open the Tier 2 segment and expand the On track view to see the details. Overall, among all the interventions in place, 34% are on track, meaning the progress monitoring shows the student is on track to meet or exceed the end-of-year benchmark (assuming correctly set start scores and goals). Another 27% are making progress, but not enough to meet the goal. A further 30% are losing ground against the goal, meaning the gap is widening. No status is available for 10%, which likely means no measurable progress monitoring assessment is tied to the intervention, or none has been collected yet.

Looking at the individual interventions, the intervention getting the best results based on the data is Read Naturally, while the SIPPS intervention is not having the desired effect for most students. Eight students have custom interventions, and most of those are not effective; these are interventions that were created individually rather than chosen from the district’s preset library. We can also see that Wilson Book 1 has no status, likely because no progress monitoring data are connected to these interventions, or a measure is defined but not being collected by the teacher. 
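A similar comparison can be reproduced from exported plan data. The sketch below groups hypothetical per-student trend statuses by intervention name and reports the share of students on track for each; the data are illustrative, and this is not how Student Success computes its figures:

```python
# Group hypothetical per-student statuses by intervention and report the share
# on track for each; the plan data here are illustrative only.
from collections import defaultdict

plans = [
    ("Read Naturally", "on track"), ("Read Naturally", "on track"),
    ("Read Naturally", "insufficient progress"),
    ("SIPPS", "losing ground"), ("SIPPS", "insufficient progress"),
    ("SIPPS", "on track"),
    ("Custom", "losing ground"), ("Custom", "no status"),
]

by_intervention = defaultdict(list)
for name, status in plans:
    by_intervention[name].append(status)

for name, statuses in sorted(by_intervention.items()):
    on_track = statuses.count("on track")
    share = 100 * on_track / len(statuses)
    print(f"{name}: {on_track}/{len(statuses)} students on track ({share:.0f}%)")
```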

Another feature of this report is the ability to look at completed interventions. In this display, the summarized information is goals met or not met, based on the status recorded when the intervention is marked complete by a member of the intervention team. Accurate reporting of the goal status (including accurate start scores and goals) is important for the results displayed here to be meaningful. The display shown above suggests that just over half of completed interventions resulted in the student’s goal being met.

What to Do with the Results of Your Evaluation

A logical next step is to ask why some interventions appear to be working while others are not. Perhaps an intervention is not being delivered as it should be (e.g., incorrect content or methods, not enough time), or perhaps it simply isn’t a match for the students’ needs. To dig into this question, consider Learner Step 4: Data Analysis in the Intervention System Guide to determine next steps. This includes consideration of implementation fidelity and intensification of interventions, assuming the interventions were well matched to students’ needs initially. The Intervention System Guide includes additional details and related tools to support data analysis and next steps.