In education, we’re fans of magic bullets. Single-gender classrooms, personalized learning, standardized/state testing, etc. – the list goes on. Educating is hard, so when we come across something that works, our optimism kicks in and we want to think it will work really well – so well, in fact, that said strategy will transform education. What inevitably happens is that the strategy in question does not magically fix all of education, so critics come in and start to argue that it didn’t work, we’ve been wrong all along, and if we just now do this, all will be well. Of course, that doesn’t work either, and so the cycle continues.

So, this may be obvious – we all know the problems with expecting something to be a cure-all. What we don’t catch as often is that we do essentially the same thing in a lot of more specific research scenarios: we expect something to singlehandedly effect change, and we conclude that it didn’t work if it didn’t.

Case in point: Head Start. Head Start, in the research community, has an infamous reputation as something that doesn’t work. Any initial gains, researchers report, fade quickly over time, with no real difference between kids who did or did not participate past a certain point. The central point of my blog post today is that I believe this is a faulty conclusion: Just because an independent variable is not powerful enough to affect the dependent variable by itself does not mean the strategy either 1) did not work, or 2) is not necessary.

I’m a fan of car analogies, so let’s go down that path for an example: If a car’s engine is broken and the tires are flat, you have problems. If you fix only the engine, the car will still not function correctly. However, it would be incorrect to conclude that fixing the engine is unimportant or unnecessary – it just wasn’t enough. Head Start could very well fit into this category – Head Start may not be enough to transform the educational career of a child from an at-risk background, but that form of early educational intervention may very well be necessary.

To be sure, I’m not making the case for Head Start here (nor am I making a case against it). My point is simply that discovering something wasn’t enough by itself doesn’t mean we should throw it out.

Let me take a more specific, and slightly more contemporary, example. In this study, Marcus Winters & Jay Greene discovered that certain educational placement strategies had an initial effect, but that it faded over time. They were studying retention based on state tests, along with assignment to a “more effective” teacher, summer supports, etc. In one intervention condition, they noted that the effect of the condition faded over time. Their conclusion? That the intervention condition didn’t work. I disagree, at least with drawing that conclusion based solely on the knowledge that the intervention’s effect didn’t sustain.

In reality, that educational placement could have been huge – it could have provided the child exactly what he or she needed at the time, and been the difference between failure and success at that point in time.

Let me take a step back and give another example, this time from medicine. Let’s say you take a medication that lowers blood pressure, and for 5 years it helps keep your blood pressure down, preventing a heart attack. Then, in year 6, you stop taking that medication and have a heart attack. Could you reasonably conclude that the medication didn’t work in Year 1 because it didn’t provide a benefit in Year 6?

In short, I think we need to be reasonable about what contributions we expect an intervention to make, and not rule it out as a failure simply because it isn’t a panacea, or because it doesn’t live up to some other expectation we might have. If we aren’t going to expect an oil change to last 5 years, and we aren’t going to expect blood pressure medication to keep working 5 years after we stop taking it, why would we expect the effects of Head Start to last 5 years? Just because we want them to?