Cross-posted from Education Week
The American “system” for improving the performance of our schools is bizarre and, judging by the results, very ineffective.
Anyone who has an idea about how to improve schools is free to try it out as long as they can get funding for it in our funding bazaar. Very little of that funding comes from the states, which are responsible for running our schools.
The result is a teeming assortment of initiatives, unrelated to each other and unrelated to any overall analysis of the needs of the system endorsed by the people responsible for our schools. Many of the interventions that get funded operate at cross-purposes with one another. Any harmony among them is entirely accidental. The effectiveness of these interventions is often not evaluated at all, and the evaluations that are done rarely employ a rigorous methodology. Few interventions are systematically implemented and very, very few get to scale.
To the extent that there are competent evaluations of these interventions, they typically show very small effects. That is mainly because the interventions are placed into a system the parts and pieces of which were never designed to function in harmony. The result is that any given intervention is implemented in an environment in which the other features of the system might support it, might not support it, or, just as likely, might actively work to undermine it.
The reality is that most interventions, whatever their merits, will only work well if they are used in systems the elements of which are designed to support and complement that intervention. But few if any interventions enjoy that kind of support in the United States. The single biggest difference between education in the United States and education in the countries with the most effective education systems is that the latter actually function like systems and ours does not. The American style of education research, by focusing on the intervention and not the system, is part of the problem, not the solution.
After many years during which education research was roundly criticized for producing ideologically motivated findings using weak methodology, leaders in the research community have embraced the best research methods from the health sciences, specifically the random assignment of subjects to treatments, using, where possible, double-blind experimental methods. This is certainly an advance over what preceded it, but these methods, too, are part of the problem.
I submit that the most serious impediment to running a first-class education system is our seeming inability to focus on the design of the system itself. The gold standard education research methods are singularly unsuited to the task. It is not possible to randomly assign state populations to state education systems. Education systems, it turns out, cannot be studied in the same way that most health treatments can.
But there is something deeper that is wrong here. The ideal for the education researcher is to show that a particular treatment works better than the status quo or particular alternatives. The aim is to fully specify a particular superior treatment in the same way that one can specify the formula for a drug or the details of a medical procedure, and then show that it works better than all other available drugs or procedures. But education doesn’t work that way, once we get beyond a few areas like the teaching of reading. No capable superintendent of schools is going to replicate exactly the way another district does much of anything. She is much more likely to do what the best corporations do: find out what her best competitors do and put together an amalgam of best practices that fit the aims, values, strengths and weaknesses of her own community, adding a bit of her own secret sauce. The replication model may fit the formulation of a drug or the procedures for an open-heart operation, but it does not work so well when the goal is improving low-performing schools or raising middle school science performance.
The gold standard method of education research should be used where it is appropriate, but it is not the universal solution it is cracked up to be. We do need effective practices, but we have an even greater need for effective systems. Many of the top-performing nations spend less on research than we do and yet get much better results. I don’t think these facts are unrelated, and it is not because they produce superior research. What happens in those countries is that improvement initiatives are not randomly initiated by anyone who is lucky enough to get someone to provide money for them. They are generated by the people who run the system in a systematic effort to shore up its weaknesses. Whether seeking significant changes or incremental refinements, the interventions are designed to fit into the system, not to upend it, circumvent it or ignore it. The people responsible for this function typically start off by doing a systematic review of the best practices in the world, not with a view to picking one best practice and replicating it, but rather with the intention of putting together their own unique design, based on an analysis of their own needs and context, the best of what they have seen and their own ideas. Once the design is formulated, they find some very good teachers to try it out, modify it until it works the way they want it to, and then implement it at scale.
The American system for education improvement strikes me as full of random motion. It is terribly wasteful and designed in such a way as to make it very unlikely that we will make much progress on our greatest weakness, which is our inability to create effective education systems.
Our education researchers are as good as or better than those in the countries that are outperforming the United States. The problem is not lack of capacity, but rather the way we think about the task. We have, I think, a chance to rethink the nature of the challenge in education research. I am confident that, once we get that right, the leaders in our field will think afresh and productively about the best methods for addressing that challenge.