The short answer is no: there isn't a way to remove or exclude students on Alternate Assessment from the FastBridge or Student Success systems or from the Healthy Indicator reports. It's unlikely that this will ever become an option, both because of the fuzzy nature of defining criteria around which students to exclude and why, and because picking and choosing which data to use is a slippery slope. The "why" of this is perhaps less important than how to think about and shape conversations around why it matters, although we'll address a little of both in this article. While students on Alternate Assessment are the main group referenced here, the same reasoning applies to other students who may not be screened for one reason or another.
It's important to note that Healthy Indicator report #1, which displays the percentage of students screened, is the only place where students who were not screened with one of the approved assessments are counted. The other Healthy Indicators are based on students with a screening result, so they are not affected. This often causes confusion because the reports are nuanced and answer different questions.
Part of the reason for including these students in the Healthy Indicators, specifically #1, is that the indicator measures the completeness of the screening safety net. In a typical school, fewer than 5% of students are on alternate assessment; the typical estimate is closer to 2-3%. Even if none of those students are screened, the building would still show a 97-98% screening rate, comfortably within the 95% target. If a school hosts programs that draw extremely low-performing students from a larger area, its percentage would be lower than this, but staff in a building of that type generally understand the nuances of their population and can account for that.
Re-framing conversations about Healthy Indicators and Alternate Assessments
That's a bit of the "why" of it, but people often ask how to explain this to others or how to have conversations about the inclusion of Alternate Assessment data. Here are some high points to help:
- Let's be crystal clear about one thing: there is no accountability formula tied to the Healthy Indicators, and there is definitely no expectation of screening 100% of students. If people are operating under this misunderstanding or unrealistic expectation, it's a good idea to explore and correct those assumptions. Perhaps suggest that they show you where it says, in writing, that this is part of any accountability decision. They won't find it anywhere: it's not part of the report card, it's not part of ESSA, and it's not part of any accountability formula. Their fears come from an incomplete understanding of the systems and accountability formulas. See: Don't Fear the "Ding"... How to think about Healthy Indicators for more on this topic.
- The indicators are intended to help the district reflect on practices that might need to be improved. If the building understands why some students are legitimately not tested, that's all that matters. However, if they realize that there are a bunch of kids who were missed, that's a place to improve. That's the sole function of the HI#1 report. The other HI reports provide data on the outcomes for the students who were screened. None of those displays include students who were not screened.
- Avoid falling into the trap of focusing on how this affects adults while ignoring how it affects kids. It's human nature to worry about "looking bad." It may be helpful to explore the real fears prompting this mindset and try to dispel them. Perhaps this could be an admin team topic so that the district's top administration can set the record straight and help refocus on more meaningful issues. We have found that misplaced motivators like fearing an accountability "gotcha" can sometimes lead to ill-advised actions that don't actually help improve student outcomes, and to resentment of the entire process instead of embracing it as an effective practice for improving outcomes.
- If there is an expectation coming from somewhere to test 100% of students, that expectation should be surfaced and corrected. We have run into this with a few admin teams, where a poorly informed superintendent or curriculum director says, "anything less than 100% is unacceptable." There are good reasons to encourage thorough screening, but there are also reasons why some students simply cannot be screened in a way that yields valid, useful data.
In some cases where people still resist accepting this, it may be beneficial to have a more direct (perhaps even blunt) conversation that asks, "Why are we obsessing over the tiny percentage of students on Alternate Assessment (or HSAP, dual enrolled, etc.) being included in our numbers when a large percentage of our students are not meeting benchmark?" The same question applies when only 30% of students below benchmark in the last screening window have an intervention, or to any other data point that shows less-than-desirable outcomes for students. Open conversations about whether teams are spending time and energy on the wrong things can help refocus on the bigger issues. Changing HI #1 to hide students who were not screened because they are on the alternate assessment will not change outcomes for a single student. Instead, use the information to figure out what about core instruction is causing students to perform below benchmark, and examine whether the interventions being delivered are effective in closing the gap. Those are the things that will actually make a difference. Redirecting that energy toward concern about what the data say about student performance is critical, especially because the fear of adult consequences is totally unfounded... that is, unless you consider the consequences for adults who fail to attend to the achievement needs of the students within their responsibility.
As a final thought, keep in mind that the team may not actually know what to do differently for reading instruction, or may feel they lack sufficient resources. Sometimes it feels safer to cast doubt on the validity of the data than to acknowledge a limit to current skills, or to the capacity to grow those skills to address the issues. (Think of it as arguing with the bathroom scale.) It may also be that they are simply not focused on the right things and need to change instructional and intervention practices that are ineffective. There's a certain amount of vulnerability involved in these conversations, so a little finesse is needed to keep them productive and aimed at the real needs of the students.