In my last blog, I pointed to what I take to be a fatal flaw in the prevailing paradigm of education research. While people with many disciplinary backgrounds do education research, the paradigm on which it is largely based is that of clinical medical research. In that model, a disease-free and trauma-free human is by definition a healthy human and needs no treatment. The point of the research is to identify the causes of symptoms and to establish the effectiveness of treatments that can reliably address those symptoms and relieve the trauma. The dominant research method is to establish conditions under which a return to health can be reliably attributed to a particular treatment by controlling for the effects of everything except that treatment, usually with statistical methods.
This whole approach—or paradigm, if you will—assumes that the aim is to restore the normal operation of the human body, which, considered as a biological system, is the most successful such system ever evolved.
But suppose it wasn’t. Suppose it had once been such a system, but the environment had changed to the point that the system was now so poorly adapted to it that its survival was in doubt unless it was redesigned to adapt successfully to the new conditions.
That, in my view, is precisely the condition of American education. The proof, as I pointed out in my last blog, lies in the statistics comparing the performance of our education system with that of countries whose systems were designed more recently and are therefore much better adapted to modern conditions. While, half a century ago, the U.S. had, by common consent, the best education system in the world and the best-educated workforce, our students today are far behind those in the countries with the top-performing systems, and our workers now tie for last place in the global rankings of workforce quality among the industrialized nations. We have a very high-cost system that produces mediocre performance.
In my last blog, I argued that we won’t be able to fix this until we refocus American education research on the determinants of effective education systems. It is time for us to recognize the limitations of the clinical model of medical research when it comes to public education and to adopt a perspective on education research that is based on systems design, a perspective that comes more naturally to engineering than to medicine.
Does this mean that I am rejecting all the research that has been done so far and urging a massive shift of resources to research on education system design as such?
No. Surely, we need to know as much as possible about how to teach reading and mathematics effectively, what effects different methods of allocating funds for schooling will have on the achievement of different groups of students, how best to help students living in concentrated poverty who routinely experience trauma much like the kind that produces post-traumatic stress disorder (PTSD) in soldiers returning from war zones, and how the latest developments in brain research can be used to improve student learning.
But I am urging that we spend much more on research on high-performance education systems than we have up to now. This would not be hard, because we have spent almost nothing on such research. Twenty-five years after the two most successful redesigns of state education systems in the U.S.—those in Kentucky in 1990 and in Massachusetts in 1994—there is not one serious scholarly study of either set of reforms. Congress should appropriate a substantial sum to support this kind of research, both in this country and in the countries with the top-performing systems. It is by systematically comparing typical U.S. education systems with the highest-performing systems in our nation and the world that we will learn how to design our own high-performing systems, based on evidence from all over the world but adapted to our own values, goals, history and political context.
Ideally, a well-conceived program of research on high-performance education systems would be coupled with a new federal program providing support to states interested in following in the footsteps of Kentucky, Massachusetts and, more recently, Maryland: redesigning their whole education systems for high performance, using evidence from the study of such systems worldwide, and then implementing those designs. The goal of the comparative research program would be threefold: first, to create a mechanism that would enable our states to continuously identify and monitor the highest-performing systems nationwide and worldwide, establishing the global performance benchmarks as they evolve; second, to analyze the factors that account for their success; and, third, to analyze and report on the performance of U.S. state education systems in relation to those benchmarks.
It might be helpful to think of these functions in relation to the National Assessment of Educational Progress—or NAEP. NAEP functions as a mechanism that enables states to compare their performance in core areas of the curriculum over time using common measures. The policy of expanding NAEP to require that each state be separately sampled and reported was based on the assumption that these comparisons would stimulate the poor performers to do what was needed to improve. But two things were missing from this design. First, the real competition was not within the United States; with the exception of Massachusetts, it came from other countries. And, second, there was no organized effort in the research community to uncover the secrets of high-performing systems, and therefore no reliable guidance for states trying to do a better job of building effective systems.
The only continuing guidance on system design and state system performance came from Education Week’s annual rankings of the states. When our organization was asked to provide research, analysis and recommendations for the Maryland Commission on Innovation and Excellence in Education, the Commission members started from the premise that Maryland already had one of the best education systems in the United States, a premise based on a string of top placements in the Education Week rankings. Commission members were stunned when we presented the data on Maryland students’ performance on NAEP, which put the state right in the middle of the national rankings during the very same period in which Education Week had ranked it at the top. How could this be?
The answer was that Education Week had assembled leading education researchers to tell the journal what the research said about the most effective features of education systems. The criteria Education Week used to make its annual judgments were based on advice from American experts who had been looking at the American education system. The indicators covered issues like access to and affordability of early childhood education, academic achievement, and post-secondary enrollment and completion, but neglected several critical elements of high-performing systems. Because these experts had not studied the world’s high-performing systems, they did not investigate which specific features of such systems, and of their most important subsystems, contribute the most to superior results. Nor did they consider how the subsystems needed to be woven together to achieve those results. They had studied only the parts and pieces. In fact—and this is the fatal error—they had studied only the relatively successful parts and pieces of what were nonetheless dysfunctional systems.
There was a second problem. The experts Education Week assembled had created criteria that could be used to judge the state policies and practices that were summed to produce the rankings. That, of course, was not a problem by itself. The problem was that the question of whether, in any given state, those policies and practices added up to a coherent strategy for raising student performance was, apparently, never asked. A powerful practice or policy, unsupported by others that provide the right sort of prerequisite experience or follow-on for students and the right kind of incentives for their teachers to implement it as intended, will produce weak results at best. Education Week’s approach was in no way unusual in the United States.
It turns out that studying the parts and pieces of our system in isolation from one another, and within a dysfunctional system to boot, tells us almost nothing about what it takes to construct a high-performance system. The only way to conduct research that will enable government to build and maintain high-performance systems is to study high-performance systems.
The reason this is so is not obvious. Years ago, a series of famous research studies appeared to establish the effectiveness of high-quality early childhood education programs, especially for improving outcomes for children from low-income families. Then, years later, researchers debunked these studies by showing that the effects of these investments often wore off before the students involved graduated from high school. Treating early childhood education as a stand-alone fix is, it turns out, a uniquely American way of thinking about education reform. Policymakers in the top-performing countries never imagined that high-quality early childhood education by itself would equalize performance outcomes for low-income children. It would make an important difference only if it were combined with other supports for low-income families, if something were done to raise teachers’ expectations for children from low-income and minority families, if school funding were adjusted to provide more support once these students were in school, and so on. In other words, they saw high-quality early childhood education as a key element in a closely woven tapestry of interventions, not as a silver bullet that would change everything.
Or take teacher quality. Many Democratic candidates for President are talking about substantial raises for teachers. It is certainly true that teachers need raises. All the top performers pay their teachers well and put a heavy emphasis on teacher quality as a building block of effective education systems. But they also know that professional-level compensation is a necessary but not sufficient condition for attracting into teaching the high school graduates and college students who could go into the high-prestige professions. To get a high-quality teaching force, a country or state needs to greatly raise the rigor of teacher education programs, ramp up the selectivity of those programs, raise licensure standards, strengthen the clinical component of those programs and, most important, reorganize the way work is done in schools to create a true profession of teaching. If you increase teacher compensation without doing these other things, you will greatly increase the cost of the public schools while producing only a marginal increase in teacher quality and student achievement.
One more illustration of the point. A few decades ago, many researchers came to believe that the key to improving the performance of inner-city students was the quality of school leadership. Consistent with the still-prevailing paradigm, they set out to identify urban schools that were outperforming other inner-city schools with similar student bodies, and then to identify the characteristics and leadership styles of the leaders of those schools. They did just that, in the expectation that system leaders would use the research to identify the right people to lead such schools and train them to do what these outstanding leaders had done.
But it turns out that such schools are run by driven mavericks who don’t care what the system tells them to do, who work 24/7, who inspire their staffs to greatness and who eventually burn out, leaving their schools to drift back to the level of performance that prevailed before they took over.
This, of course, is not an argument against effective leadership training, but against an approach to research—and training—that accepts the dysfunctional system as context and ends up describing a way to cope with it that is appropriate only for a handful of exceptional people whose efforts in the end produce only modest results.
There are two big points here. The first is that top performance at scale is never the result of the kinds of silver-bullet initiatives that characterize American education reform. It is the result of the careful construction over time of well-conceived collections of initiatives designed to work in harmony with one another to produce the desired result. Systems like this do not well up from the bottom. They are designed by the leaders of the system, but they work only when they are designed and implemented in close partnership with people at all the other levels of the system.
The second big point has two sides. The things that work in dysfunctional systems—like driven principals who are willing and able to ignore all the incentives provided by a dysfunctional system—may not work at all in a well-designed system. And the converse is also true: initiatives that don’t work very well, or don’t work at all, in a dysfunctional system—like high standards for students from low-income and minority families—may work very well in a well-designed system.
The lesson here is not that we don’t need education research that looks at narrow-gauge issues. It is that, in addition to much more research on high-performance systems as such, we need education research that focuses on all the narrow-gauge issues within high-performance systems: how you finance them, staff them, organize them, create curriculum for them, organize learning in them, and so on. But that research must be conducted in high-performance systems, not dysfunctional ones, or you will wind up with findings that produce only marginal gains in dysfunctional systems and, in many cases, no gains at all in high-performance systems, which is precisely the situation we are in now.
The short form of this message: U.S. education research needs a stem-to-stern makeover if it is going to make a major contribution to improving high school student performance in a country where that performance has not improved in half a century.