First, just in case you have come to this article for a quick answer… there is no ding. That's right. No ding. The idea of getting "dinged" on the Healthy Indicators seems to have taken hold in people's minds, even though it isn't really a thing. We've tried hard to eliminate the idea, but we haven't been successful in dispelling the worry or confusion.
A common misconception is that the indicators are part of an identification formula for ESSA. They are not. Another is that the percentage goals shown on the indicators are mandates. They are not. Instead, treat them as indicators that help with data interpretation. Here's an explanation that should help frame the healthy indicator reports in a more useful way.
Generally speaking, the healthy indicators were built as tools to help school teams evaluate the effectiveness of their MTSS system as part of school improvement. Each report explores data related to a question about one part of that system, and each part of the system is about improving outcomes for students. The reports below are ordered according to the question they answer, not the report number.
- Healthy Indicator #1 answers the question: Does our universal screening include most/all of our students or are we missing groups of students?
- Healthy Indicator #3 answers the question: Does our screening indicate that universal instruction is meeting the needs of the majority of our students?
- Healthy Indicator #2 answers the question: Are we collecting progress monitoring data for students who need monitoring?
- Healthy Indicator #5 answers the question: Are we providing interventions for students who need them?
- Healthy Indicator #4 answers the question: Are students who are not at risk remaining above benchmark? This is an indicator of the effectiveness of universal instruction.
- Healthy Indicator #6 answers the question: Are students who were below benchmark improving and now scoring above benchmark?
We screen because we want to systematically find students at risk for reading difficulty and provide early intervention to get them back on track before the gaps grow larger. We intervene with high-quality, effective interventions because we want to make a difference early so children don't fall farther behind. We monitor progress frequently because we want regular feedback on whether the intervention is working, and we change the intervention when the monitoring tells us it is not closing the gap. We also use that feedback to spot when universal instruction (the core) is not effective and needs further attention. And we check periodically to be sure that the combination of universal instruction and interventions is keeping successful students on track and moving more struggling students back on track.
It is true that screening, intervention, and progress monitoring ARE requirements for ELI, but there are no explicit consequences tied to those requirements. The real dings are on the students:

- Failing to screen students leaves gaps where they can fall through the cracks and miss the support they need to become successful readers.
- Failing to provide high-quality, intensive interventions means students are not getting the targeted help needed to close the gap.
- Failing to collect consistent monitoring data means there is no way to know whether the interventions are working.
- Failing to change instruction and interventions when the progress monitoring data show that the current instruction is not closing the gap harms the students schools have been entrusted to teach, and wastes teacher and student time that could go toward more effective instruction.
- Finally, failing to screen essentially all students means the healthy indicator reports will not represent the entire student population, which skews the summary statistics and undermines systems-level data analysis.

The real driver, and the real "ding," is going through the motions to comply without using the data to ensure students become successful readers.