The education world seems to be up in arms recently about the new federal report which seems to show, on the surface, that RtI doesn’t work. The report used an interesting statistical model to compare students who did and did not receive RtI services, and found that students who received services either did not benefit or ended up doing worse. That narrow finding appears to be true. To generalize it as evidence that RtI “doesn’t work,” however, simply doesn’t work. Here’s why:
First, the study only compared students right around the cutoff score for receiving services. It did not examine whether RtI was effective for the many students who scored well below the cutoff and also received services; there’s simply no data on that question. Second, the study found that 40% of students who received RtI services were in schools in which all students received RtI services – not just struggling students – which muddies any comparison between “served” and “unserved” groups.
More importantly, this report does NOT suggest that RtI failed as a model. It failed, for the specific students in the study, as implemented. Granted, the researchers focused on schools that appeared to implement RtI with fidelity – but fidelity to what? Looking at the practices of some of those schools (e.g., giving all students RtI services, using a single assessment score as the criterion for inclusion), it’s questionable whether RtI was implemented with fidelity to best practice.
So, what this report does say is that – for the students it examined – RtI didn’t work. It doesn’t say RtI didn’t work for the rest of the students served, nor does it suggest that RtI can’t work. It suggests we’ve got more work to do.