The Brown Center on Education Policy, part of the nonprofit research organization the Brookings Institution, just released the 2009 edition of its annual Report on American Education, 'How Well Are Students Learning?' Drawing on data going back as far as 1971, the analysis takes a long-term view of several hot topics in today's climate of education reform: student assessment, school turnarounds, and conversion charter schools. The study's author, Tom Loveless, hopes that by providing a historical perspective, the report can offer useful, in-depth information on subjects that are often obscured by controversy and short-term political agendas.
The report is split into three parts, all of which show what we can learn from student test scores. Part I looks at decades of national test data from the National Assessment of Educational Progress (NAEP), exploring long-term trends in student test scores and what they can tell us about shifts in student achievement. Parts II and III both draw from test scores in California, the state with the longest history of regular testing. Part II uses student test data to examine the effectiveness of attempts to turn around failing schools, and part III looks at the strengths and weaknesses of conversion charter schools through the filter of student achievement.
From the Brown Center's 2009 Report on American Education, page 9.
The Real State of Student Achievement
The first section of the 2009 Report on American Education responds to the 'hand-wringing' over the 2009 NAEP scores. Often called the 'nation's report card,' the NAEP primarily assesses the nationwide mathematics and reading achievement of fourth and eighth graders every two years. Math scores were released in October 2009 (reading scores are due this spring), and they got a lot of attention. After years of steady improvement, the latest data showed that fourth grade math scores had 'stagnated' between 2007 and 2009. Many experts fretted over the flat scores, so the Brown Center decided to look at the 19-year history of the main NAEP to see if the latest data is truly cause for concern.
As the graph above shows, fourth graders' NAEP scores in mathematics climbed steadily from 213 in 1990 to 240 in 2007 and 2009. A gain of 10-11 points is typically considered one year's worth of learning, so the 27-point gain since 1990 suggests that fourth graders' math skills have improved by more than two and a half grade levels. That's an almost miraculous gain. To call the lack of a gain between 2007 and 2009, a single test cycle, cause for alarm is premature at best. A flattening or decreasing trend would be worrisome, but without several more cycles of test data all we have is a fluctuation. In fact, the report points out that eighth grade scores climbed a small but statistically significant two points, consistent with the average gain (one point per year) registered for that age group since 1990. So if there's been a problem with math education since 2007, why hasn't it affected older students?
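The report's rule of thumb, that 10-11 NAEP scale-score points equal roughly one year of learning, makes the grade-level claim easy to check. Here is a quick sketch; the scale scores come from the report, but the function name and the 10.5-point midpoint divisor are illustrative choices, not part of the report's methodology.

```python
# Back-of-the-envelope check of the report's grade-level arithmetic.
# The scale scores (213 in 1990, 240 in 2009) are from the report; the
# 10.5-point divisor is the midpoint of its 10-11 points-per-grade rule.

def grade_levels_gained(start_score, end_score, points_per_grade=10.5):
    """Convert a NAEP scale-score gain into approximate grade levels."""
    return (end_score - start_score) / points_per_grade

gain = grade_levels_gained(213, 240)  # fourth-grade math, 1990 -> 2009
print(f"{240 - 213} points, about {gain:.1f} grade levels")  # about 2.6
```

Even with the conservative 11-point divisor, the gain still works out to roughly 2.5 grade levels, matching the report's 'more than two and a half grade levels' characterization.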
The report suggests an alternate explanation for the flattened scores: the main NAEP is simply 'coming back to Earth.' There are three tests regularly employed to monitor American students' math achievement: the main NAEP, the long-term trend (LTT) NAEP and the Trends in International Mathematics and Science Study (TIMSS). The Center's analysis shows that the main NAEP has posted gains two to seven times larger than the other tests in comparable age categories. The difference is due to the content of the main NAEP. Because it's designed to reflect changes in national math curricula, the items on the main NAEP are more frequently taught in the classroom. The Center proposes that perhaps what changed between 2007 and 2009 is the test and the curricula it's evaluating, not the quality of math instruction.
The Brown Center's report did identify another trend, however: The shrinking of the 90-10 achievement gap. The '90-10 gap' is a relative measure that refers to the difference between students scoring in the 90th percentile (the top 10 percent) and those scoring in the 10th percentile (the bottom 10 percent). Continuing a previous study, Loveless looked at math and reading scores from both the main NAEP and the long-term NAEP throughout the history of the tests. He found that both tests are showing that the 90-10 achievement gap has been shrinking since the late 1990s as the bottom 10 percent's improvements have outpaced the top 10 percent's.
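To make the 90-10 gap concrete, the sketch below computes one on synthetic scores; the mean, spread, and sample size are invented for illustration and are not NAEP data.

```python
import random
import statistics

# Illustrative computation of a 90-10 gap on synthetic scores.
# These are NOT real NAEP results; the mean and spread are made up.
random.seed(0)
scores = [random.gauss(240, 30) for _ in range(10_000)]

# quantiles(n=10) returns the nine cut points at the 10th, 20th, ..., 90th percentiles
deciles = statistics.quantiles(scores, n=10)
p10, p90 = deciles[0], deciles[-1]
print(f"90-10 gap: {p90 - p10:.0f} points")
```

A shrinking gap means the 10th-percentile cut point is rising faster than the 90th-percentile cut point, which is exactly the pattern Loveless found in both NAEP series.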
The report cautions that the NAEP data doesn't allow for causal connections, but Loveless does offer a hypothesis for the shrinking gap: the widespread adoption of accountability systems. These systems typically focus on fixing low-performing schools and offering incentives for improving test scores. Opponents of programs like the No Child Left Behind (NCLB) Act contend that such trends are simply evidence of 'teaching to the test,' not genuine improvements in student achievement. While the scope of this report doesn't extend to the efficacy of the tests themselves, it does seem to show that whatever the tests are measuring is, for low-achieving students, improving.
From the Brown Center's 2009 Report on American Education, page 22.
Challenges Ahead for Failing Schools
The next trendy education reform topic that the report tackles is school turnarounds. The Obama administration has placed a high premium on fixing failing schools, making turnaround plans a key element in states' applications for Race to the Top funds. So the Brown Center set out to determine the odds that failing schools can actually change.
The report looks at eighth grade student test scores from 1989 and 2009 in 1,156 California public schools. The analysis focused on California because the state has a remarkably long and consistent history of testing and has tried almost every form of school reform imaginable, from a slew of accountability systems to incorporating 'reform mathematics' into its curriculum. The research showed that schools do change. The graph above compares composite scores for schools in 1989 and 2009. The general trend along the 45-degree line indicates that scores are related, but if the relationship were perfect, all the dots would fall on that line. Put in numbers, about half the schools shifted by roughly 12 percentile points or less, and the rest changed more.
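The 'about half shifted 12 points or less' statistic is essentially a median of absolute changes in percentile rank. A minimal sketch of that calculation, using invented rank changes rather than the actual California data:

```python
import statistics

# Median absolute change in percentile rank between 1989 and 2009.
# The values below are hypothetical; the report uses 1,156 California schools.
rank_changes = [3, -15, 8, -2, 30, -12, 5, 18, -7, 11]
median_shift = statistics.median(abs(change) for change in rank_changes)
print(f"Median absolute shift: {median_shift} percentile points")
```

Taking absolute values first matters: a school that fell 12 points moved just as much as one that rose 12, and the report's stability question is about magnitude of movement, not direction.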
The real issue for school reformers, however, is whether under-performing schools show improvement, that is, a statistically meaningful positive change. On this question, the Center's findings were disappointing. Of the 115 schools that scored at the 10th percentile or below in 1989, only four (about 3.5%) scored at or above the state average in 2009. Loveless is quick to point out that their data supports what many school reformers already knew: turning around failing schools is extremely difficult, but not impossible. The fact that only 3.5% of the sample showed significant improvement should be a caution to overzealous policy wonks looking for a magic solution to school turnarounds. But it should not discourage the efforts of dedicated reformers. The report suggests that more research into why schools' test scores aren't improving might yet yield insight into how failing schools can be turned around.
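The 3.5% figure follows directly from the counts the report gives; the variable names below are just for readability.

```python
# Turnaround success rate from the report's counts: 4 of the 115 schools
# at or below the 10th percentile in 1989 reached the state average by 2009.
low_performers_1989 = 115
reached_average_2009 = 4

success_rate = reached_average_2009 / low_performers_1989 * 100
print(f"Turnaround success rate: {success_rate:.1f}%")  # 3.5%
```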
From the Brown Center's 2009 Report on American Education, page 28.
Conversion charter schools, which make up about 10% of charter schools nationwide, are one of the more popular turnaround solutions. The Obama administration favors this method as a strategy for restructuring failing schools. The idea behind a conversion charter school is that if you shut down a failing traditional public school, fire or reassign the staff, and then reopen it as a charter with new teachers and administrators, the school gets a new lease on life.
Part III of the 2009 Report on American Education also takes all of its data from California, which has the largest number of conversion charters. The report looked at test scores from 2004 at 49 schools and from 2008 at 60 schools. Because test score data from 1986 was also available for all the schools, they were able to compare scores before and after the schools were converted. The author cautions that their findings are purely descriptive because there's not enough information to draw causal connections from the data.
Ultimately, they found that test scores looked pretty similar before and after the conversions. As the table above shows, the 2004 group gained two to three percentage points as charters, but the 2008 group showed a slight (under two point) decline. In fact, they found that overall, conversion charter schools tend to look more like traditional public schools than start-up charters. They typically have larger student enrollments, more black and Hispanic students and tend to be more concentrated in urban areas. Teachers at conversions are also likely to have more experience and to hold teaching certificates, especially in bilingual education. The report concludes that the institutional differences between conversions and start-up charters will make it necessary to distinguish the two as research into charter schools continues.