Bobby Caples (Education)

Education & Youth Development Consultant



Strength-based approach – not just a “feel good” or “look good” approach anymore

When I was in graduate school, which was not that long ago but long enough to actually reference a time in the past, there was a sense, at least to me, that strength-based approaches to education/psychology were something you did to make sure people felt good and bought into the intervention plan. You listed a bunch of positives or strengths of the child in an FBA not because it was actually integral to the intervention plan, but because it was just sort of the right thing to do. I’m oversimplifying here a bit, and there have always been folks who have agreed with what I’m about to say, but I think there’s been a bit of a tipping point in positive psychology.

Let me start off with the theory, very simply put: Preventing bad things, or going around and metaphorically “picking up pieces,” is just a lot more difficult and cumbersome than getting a ball rolling in a certain direction and watching it take off on its own through momentum.

To give credit where credit is due, I remember the concept of “replacement behaviors” in grad school – the idea that it’s easier to build positives than it is to just get rid of negatives, with no alternative. However, that tended to be more of a tit-for-tat approach – every replacement behavior was tied to a very specific undesirable target behavior. There wasn’t really a broad, generative focus on building strengths & resiliency. That was seen as more of a layman’s task – something parents, teachers, and maybe a PBS kids show did.

My argument, very rudimentarily put, is that we should remove the limitation on technical intervention plans that all behavioral targets/goals be based on a deficit-reduction model of psychology. Rather than “addressing referral concerns,” let’s address issues of behavior problems by zooming out well past the scope of the acute behavioral problem, and examine the child’s life as a whole. Here’s something not novel: When kids have what they want and need, they tend to not seek those needs through less desirable means. So, if we zoom out to the “whole person” level of analysis, rather than just seeing the behavioral targets of “non-compliance” and “reactive verbal aggression,” we run the risk of helping kids meet their needs. The small behavior problems, then, tend to go away. And, and this is a BIG and, we take care of a bunch of other stuff at the same time, even stuff that may not have come up yet.

In short, when we focus on target behaviors, even target replacement behaviors, rarely are we addressing underlying cause. Even when we zoom out and address underlying cause, we often aren’t developing a support plan that addresses the child’s whole system. And, just like families, schools, etc. kids are systems – emotional, cognitive, social, etc. components and subcomponents that all work together. When one little thing appears broken, chances are there is a larger issue at play.

So, how does this relate to a strengths-based approach? After all, couldn’t you “zoom out” and take a “systems approach” by still focusing on deficit reduction? Yes. However, the more complicated and multi-dimensional the problem is, the more the problem tends to be solved by relatively fundamental solutions. Strength-based approaches tend to be more fundamental and self-organizing, similar to the idea of keystone behaviors, whereby an intervention has cascading effects on subsequent behaviors that are “downstream” from the source.

From a practical perspective, one way to go about this strength-based approach is to focus not just on things the child is already good at, but on things that the child is NOT yet good at, but needs to be. I realize that this is sort of just an inverse of a deficit-reduction approach, and still focuses on what the child lacks, but the focus is on building resiliency & strength rather than just eliminating deficits.

So, next time you’re designing an intervention plan, consider widening your scope.

New RtI Study: Not What It Seems

The education world seems to be up in arms recently about the new federal report which seems to indicate that, on the surface, RtI doesn’t work. The report used an interesting statistical model to compare students who received vs did not receive RtI services, and found that students who received RtI services either did not benefit, or ended up doing worse. This, on the surface, appears to be a true claim. To generalize this finding as support that RtI “doesn’t work,” however, simply doesn’t work. Here’s why:

First, the study only compared students right around the cutoff score for receiving services. It did not examine whether RtI was an effective model for the rest of students receiving RtI. There’s simply no data to support this. Second, the study found that 40% of students who received RtI services were in schools in which all students received RtI services – not just struggling students. 

More importantly, this report does NOT suggest that RtI failed as a model. It failed, for the specific students in the study, as implemented. Granted, the researchers did focus on schools that appeared to implement RtI with fidelity, but even if it had been implemented with fidelity, fidelity to what? Looking at the practices of some of the schools (e.g., all students receiving RtI services, schools using single assessment scores as criteria for inclusion, etc.), it’s questionable that RtI was implemented with fidelity to best practice.

So, what this report does say is that – for the students served – RtI didn’t work. It doesn’t say RtI didn’t work for the rest of students served, nor does it suggest that RtI can’t work. It suggests we’ve got more work to do.

When research gets in the way

First, a “disclaimer” – research is great, and I’m not going to go into the obvious reasons why. I’m also not going to go into the obvious reasons why, sometimes, it’s not helpful. However, when reading this article by Sarah Sparks, which reports more broadly on retention, it struck me that sometimes research doesn’t lead us in the right or wrong direction – it just befuddles things a bit and obscures decision-making.

Here’s a brief synopsis of the research in question in the Sparks article: It was postulated that, because kids’ brains don’t make some sort of dramatic shift in processing information related to reading when entering 4th grade, we should continue to value beginning reading instruction for kids who need it.

This all sounds fine, and I’m on board with the conclusion – we should keep teaching beginning reading for kids who need it after 3rd grade. However, what if the research had said something else – for example, that kids’ brains had undergone some sort of shift in terms of processing reading-related information? Would we then have drawn a different conclusion and used that information to support terminating beginning reading instruction at 4th grade? I doubt the various authors and scholars reported on by Sparks in her article would have drawn that conclusion, but here’s my point – why even consider brain research at all? Why is it not enough to simply understand that beginning reading instruction for 4th & 5th graders works just because it does?

I doubt, again, that the folks referenced in the Sparks article meant what I’m about to say, but I think sometimes in education we just like to throw around research because it makes us look good. Somehow if we quote brain research we must be right, right? In fact, no – even more important than doing research is applying it meaningfully in actual settings.

I wanted to bring this point up with this article in particular because I actually agree with the conclusions. I didn’t want any readers to assume I was dismissing research because it was contrary to my view. I disagree with retention, and agree with the conclusions drawn by Sparks and the folks she references, including Tim Shanahan. But the brain research cited doesn’t really help us understand those points any more than what we already knew.

Necessity & sufficiency with educational interventions: The myth of heightened expectation

In education, we’re fans of magic bullets. Single-gender classrooms, personalized learning, standardized/state testing, etc. – the list goes on. Educating is hard, so when we come across something that works, our optimism kicks in and we want to think that it will work really well – so well, in fact, that said strategy will transform education. What inevitably happens is that the strategy in question does not magically fix all of education, so critics come in and start to argue that it didn’t work, we’ve been wrong all along, and if we just now do this, all will be well. Of course, that doesn’t work, and so the cycle continues.

So, this may be obvious – we all know the problems with expecting something to be a cure-all. What we don’t catch as often is that we essentially do the same thing in a lot of more specific research scenarios by expecting something to singularly effect change, and drawing the conclusion that it didn’t work if it didn’t.

Case in point: Head Start. Head Start, in the research community, has an infamous reputation as something that doesn’t work. Any initial gains, researchers note, fade quickly over time, with no real difference between kids who did or did not participate past a certain point. The central point of my blog post today is that I believe this is a faulty conclusion: Just because an independent variable is not powerful enough to affect the dependent variable by itself does not mean the strategy either 1) did not work, or 2) is not necessary.

I’m a fan of car analogies, so let’s go down that path for an example: If a car’s engine is broken and the tires are flat, you have problems. If you fix the engine, the car will still not function correctly. However, it would be incorrect to assume that fixing the engine is unimportant or unnecessary – it just wasn’t enough. Head Start could very well fit into this category – Head Start may not be enough to transform the educational career of a child from an at-risk background, but that form of early educational intervention may very well be necessary.

To be sure, I’m not making the case for Head Start here (nor am I making a case against it, though). My point is simply that discovering something wasn’t enough by itself doesn’t mean we should throw it out.

Let me take a more specific, and slightly more contemporary, example. In this study, Marcus Winters & Jay Greene discovered that certain educational placement strategies had an initial effect that faded over time. They were studying retention based on state tests, along with assignment to a “more effective” teacher, summer supports, etc. In one intervention condition, they noted that the effect of the condition faded over time. Their conclusion? That the intervention condition didn’t work. I disagree, at least with drawing that conclusion based solely on the knowledge that the intervention condition didn’t sustain.

In reality, that educational placement could have been huge – it could have provided the child exactly what s/he needed at the time, and been the difference between failure, at that point in time, and success.

Let me take a step back and give another example, this time in medicine. Let’s say you take a medication that lowers blood pressure, and for 5 years this helps keep your blood pressure down, preventing a heart attack. Then, in year 6, you stop taking that medication and have a heart attack. Could you reasonably conclude that the medication didn’t work in Year 1 because it didn’t provide a benefit in Year 6?

In short, I think we need to be reasonable with what contributions we expect an intervention to make, and not rule them out as a failure simply because they aren’t a panacea, or because they don’t live up to some other expectation we might have. If we aren’t going to expect an oil change to last 5 years, and we aren’t going to expect blood pressure medication to last 5 years, why would we expect Head Start to last 5 years? Just because we want it to?

“Pruning” Systems

I love to modify programs. I love to change them, tweak them, grow them, add procedures, add forms, add spreadsheets, etc. This often frustrates (to no end) those I work with, even though what I do is in the spirit of continuous improvement. Procedures & systems, after all, are just as much about efficiency as they are about efficacy. To the extent that they change constantly – even for the better – they hinder efficiency because the people that use and rely on those systems have to keep re-learning those systems, and lose fluidity with how they move throughout the system.

So, if systems and procedures are to change, I think they need to change effectively – in a certain way – and not just because the outcome may be better (i.e., the system is technically better). Rather, the way in which they change for the better is vital if you’re going to keep staff happy, or keep them at all.

First, a disclaimer since I’m throwing around the term “systems” here: “Systems change” often refers to large scale (and accompanying small scale) structural changes to a system (e.g., a new special education model in an elementary school). What I’m talking about in the article may be related, but it’s not the same – I’m talking about procedural systems – not what’s happening, but the way in which they’re happening – the forms, routines, expectations, and protocol used by users to accomplish the same (or similar) tasks in the same large-scale “system” that was in place before. What I’m referring to here could be anything from how users navigate the internal web system to how special education processes are documented.

Back to effective procedural systems change, then: There are better and worse ways to do it, and I’d like to spend a few minutes in this article highlighting one particularly helpful element to include in procedural change: pruning.

Do you remember TPS reports from the movie Office Space? If you haven’t seen it, see it – worth it beyond just understanding this analogy. The issues with TPS reports, and all of the office procedures the concept was satirizing, were duplication, redundancy, and over-documentation. The interesting thing is that TPS reports (using the term metaphorically here) don’t actually (always) start off bad. Generally, some need is noticed that requires fixing, so a procedure is developed to address it. The problem is that the procedure is often just tacked on to the existing procedural structure without considering the overall context in which the TPS report is placed. There may already be other forms that address the same issue, for example.

So, over time, there come to be tons and tons of TPS reports. Teachers know them well – many call them “assessments.” All sarcasm aside, assessments are crucial to education, but many teachers have experienced that assessments upon assessments are tacked on in a seemingly haphazard manner, leading to duplication of assessments, over-testing, etc. On a smaller scale, teachers are asked to document so many things in so many ways that documentation becomes almost a major part of the job. (Side note: If you teachers think you have it bad, talk to someone who has to bill Medicaid).

Needless to say, when too many TPS reports build up, something needs to happen. One such thing that could happen is pruning. If you’ve studied neurobiology (which I haven’t really), you’re probably already familiar with the analogy of “pruning.” I’m not a neurologist, so pardon my paraphrasing here, but the idea is that the brain grows tons of connections over the first years of life, then eventually starts to “prune” away less used or needed connections to promote efficiency.

The same kind of pruning is a vital task within systems. The problem, though, is that pruning isn’t automatic like it is in your brain. There is no natural system of checks and balances within organizations that automatically triggers pruning once a certain number of TPS reports build up. The leaders/change agents need to do this manually. That being said, good systems managers tend to do this naturally – they tend to remove forms, merge forms, simplify procedures, streamline processes, etc. Sometimes it’s simple, sometimes it’s complex – in terms of execution. But, the good news is that – at least conceptually – the process is relatively straightforward.

In short, the idea of pruning is simple: Find the easiest, shortest, and most efficient path from Point A to Point B. Period. Here’s an example:

A few years back, I was working for an organization in which we needed to track details of kids’ behavior across various contexts. We wanted to know who kids behaved well for, when they behaved well, in which activities, etc., and exactly which behaviors were happening – in what frequency – each day of the week. We then wanted a way to analyze these data strategically and flexibly, for example being able to run reports to answer specific problem-solving questions generated in our behavioral assessment process. What we ended up doing during the first year was creating an elaborate electronic data collection system in which 5-10 data points were entered for each behavioral incident, then aggregated in Microsoft Access and analyzed via Crystal Reports. The system was, in short, beautiful. It was unlike anything I had ever seen or worked with, and we created it. We were proud, and could give you all kinds of information about all kinds of behavior – crazy levels of detail about behavior you’d never previously been able to have. The problem? It was cumbersome. It took staff forever to enter data, and encouraged them to focus more on data-entry (and problem behavior, since that’s what we were recording) than their actual interactions with kids. In short, we had created a system and procedural structure that was amazingly powerful, but needed pruning.
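To make the shape of that system concrete, here’s a minimal sketch in Python of the kind of per-incident record and problem-solving query described above. The field names and data are invented for illustration – they’re not the actual system’s schema, which lived in Microsoft Access and was reported via Crystal Reports:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical reconstruction of the per-incident record described in the
# post (5-10 data points per behavioral incident). Field names are
# illustrative assumptions, not the real system's schema.
@dataclass
class Incident:
    child: str      # which kid
    staff: str      # who the kid was with
    activity: str   # what was happening at the time
    behavior: str   # which behavior occurred
    day: str        # day of week

incidents = [
    Incident("A.", "Ms. Lee", "free play", "verbal aggression", "Mon"),
    Incident("A.", "Mr. Cruz", "homework", "non-compliance", "Mon"),
    Incident("A.", "Ms. Lee", "free play", "verbal aggression", "Wed"),
]

# One example of a "problem-solving question" a report might answer:
# in which activities do incidents cluster?
by_activity = Counter(i.activity for i in incidents)
print(by_activity.most_common())  # most incident-prone activities first
```

Even this toy version hints at the trade-off the post describes: every extra field makes the reports richer, but also makes each data-entry moment longer – which is exactly what eventually got pruned.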

Over the next few months, then, we pruned. We ended up moving to a manual entry system (rather than digital) because we found that our fancy iPods were actually slowing us down. We collected less information about each behavioral incident, because we found that – while pretty cool – we weren’t really using some data as much as we thought we would. We also restructured data entry at the end of the day, as well as data aggregation and reporting afterward. In short, we pruned. We started off building up an elaborate system of procedures that were pretty effective, but not really efficient. We then went through the necessary pruning stages to reduce inefficiencies and duplications, and wound up with something that was not only effective, but usable. Don’t get me wrong – it was still a lot of work, but the work at least seemed commensurate with the result – worth our time.

Our organization was, undoubtedly, still change-prone and unstable (in a good way) because we were focused on continuous improvement. However, our pruning – in my opinion and experience – provided some level of counter-balance against the added systems and procedures we developed over time as well. This led to a net result of changed procedures, not additional ones, and a system that was ultimately usable.

Must Read for All Youth Workers: New Report from University of Chicago

Great new report from the University of Chicago that comprehensively integrates youth development theory, broad educational objectives, and youth-oriented public policy. My review:

Teachers: Summer Camp. Do it!

I’m about to say something that’s probably going to be fairly unpopular to teachers out there who are used to being asked to do more for less: This summer, give even more. Volunteer at a summer camp for a week and get paid absolutely nothing to do it.

Now that I’ve gotten all of the unpopular and borderline offensive phrases and commands out of the way, here’s what I mean:

First, a bit about my backstory. I got my start in youth work at summer camps back in college. If for no other reason than primacy (it happened first), it’s really influenced the way I see my professional world. It’s the reason I’m cool with long hours, and see going above and beyond as the norm, rather than a district initiative to cheat me out of pay. It’s the reason I’ve chosen the discipline approach that I have, and it informs the way I approach staff development & training.

The simplest, and first, argument I’ll start with is that summer camp serves as a great complement to what you do during the school year, and can serve as a great “reset button” for your approach with kids. If you’re anything like me, sometimes you get frustrated and tired of the shenanigans that kids pull throughout the year. Sadly, over the course of a few years, that point of fatigue & exhaustion most teachers feel throughout the year creeps earlier and earlier in the school year. If you find yourself so very ready for the last day of school before you hit the 100-day celebration, this may be a sign you know what I’m talking about.

So, given that, why in the world would I argue for doing more, and for free, during the summer? The simple reason is that I don’t believe fatigue is the mere result of effort exerted. I believe it has more to do with our perspective. We aren’t fatigued because we’ve done too much – we’re fatigued because we feel we’ve done too much. Sure, there is such a thing as actually getting tired, but we work long and hard at the beginning of the school year too and somehow feel recharged and refreshed in the morning when we come back.

So, summer camp – it’s a reset button that helps us reset our perspective about how and why we work with kids. It helps us connect with them as people again – not as sponges that need to soak up our lessons. We can start to reconnect with their goals and dreams, rather than our needs and pass-through district directives. At summer camp, the clock moves differently, and the entire goal structure is different. Have you ever gone out for a happy hour drink with colleagues and felt those work relationships rejuvenate? Sure, you just spent more time with that particular co-worker you’ve been detesting, but you shared a few laughs, personal stories, & margaritas, and things seem to be a bit lighter come Monday morning. It’s sort of the same with summer camp & kids – it’s the kid version of your Friday afternoon happy hour.

Beyond just being able to tolerate kids, I think it really helps us improve as educators. So much of teaching is about relationships, from being able to challenge kids without them shutting down, to being able to deal with kids who have already shut down because of peer conflict. Let’s face it, as much as we want to maintain strong relationships with kids throughout the school year, increasing demands on teachers make it increasingly more difficult to do so. We still do maintain relationships, but it becomes more difficult, and relationships more strained – at least for me. Summer camp gives us the opportunity to reconnect with kids and focus just on relationships – not lessons, learning, etc.

I guess I can’t really ignore why I’m advocating for the free part of things. First, I’d say that I don’t think it’s absolutely essential. All of the things I mentioned above are certainly possible with paid positions, but there’s something about volunteering that reaffirms exactly the reasons we do things as educators – for reasons other than the money. It’s almost like renewing our vows in a way – when you volunteer, you are completely and utterly choosing that interaction, and that makes it different than when you’re getting paid to do it.

So, summer camp folks – do it. For free. Or not, but I will!

Merit Pay

Merit pay is one of those strategies dreamt up by the corporate world – by folks who don’t really understand why teachers would be willing to do well at their jobs. In this article, I dare to agree with Diane Ravitch and post a few more thoughts on the topic.

Behavioral Momentum

One of the fundamental behavioral principles used with kids with depression is something called behavioral momentum, which basically means getting the ball rolling. Turns out, not surprisingly, it’s easier to keep a ball rolling once it already is.

As an example, take academic success. Kids who struggle academically tend to know it, and as a result learn to hate academics. Getting the ball rolling is tough. You’ll often spend hours each week fighting against just getting them to engage in the material, much less actually learn anything.

If you can actually get them past the initial hurdle of actually starting, though, the whole process can be a whole lot more smooth. For example, give them something that they immediately sense will be very easy. Given the choice of completing the really easy task or getting in a fight over it, they may be more likely to roll with you. Then, you can gradually increase the difficulty over time – sneaking in the tough part after the ball is already rolling.

Just like Mary Poppins, sometimes the medicine goes down a bit smoother with something sweet. In this case, the ordering of the something sweet is fairly important!

Authentic Choice

Note: This blog post has been cross-published on

Most of the time when we give kids a choice, it’s pretty clear to them what we want them to choose. Sure, you’re giving them the choice between finishing their work and missing recess, and they can choose, but my experience has been that giving such choice is only step 1 of building authentic choice. Step 2 is, not shockingly, respecting that choice and communicating such respect.

Easier said than done, right? Too often, when a child makes the choice we don’t want them to, we get angry or otherwise try to convince them of the better option. Here’s the problem – in a discipline model in which the end goal is building accountability to self rather than accountability to authority, your voiced disapproval actually hurts rather than helps. By coercing them into your preferred choice, you’re actually taking away from their ability to make their choice, thereby moving agency/authority from their sphere into yours. You’re teaching them to behave for you, and make choices that you like, not ones that actually benefit them.

Again, back to “easier said than done.” So, one thing that makes it “easier done” is doing a bit of “under the hood” work in your own mind with expectations. The first step is to stop wanting them to make your choice, and to start seeing wrong choices as potentially right ones. What does this mean? It means that really, truly letting kids make any choice presented to them, then experience the results, builds a stronger perceived connection between their own agency in decision-making and the end result. In short, they start to see themselves as the ones responsible for the good or bad results of their choices. They start to remove you as the middle man, and see themselves as the ones responsible for their actions.

© 2018 Bobby Caples (Education). All rights reserved.
