preview

Five things to know about Tennessee’s 2015 test scores, out today

PHOTO: Tennessee Department of Education
Gov. Bill Haslam announces the release of state TCAP scores in 2014.

Tennessee officials’ annual test-score announcement on Thursday will mark the end of an era.

This year’s scores are the last for the multiple-choice tests known as TCAP that the state has administered for more than two decades. Next year, students are set to take a new exam that officials say will be a better measure of students’ skills.

The impending test switch doesn’t mean this year’s results aren’t important. Indeed, the scores will be used to evaluate students, teachers, schools, and districts alike.

Here’s what you need to know about the new test scores.

1. The state is coming off of years of gains — and exultance about them.

For the past four years, students’ TCAP scores improved in most subjects. A major question in this year’s scores will be whether and to what degree that trend continues.

Another question is how top officials talk about the scores.

Last year, Gov. Bill Haslam and then-commissioner Kevin Huffman credited the recent gains to a slew of education policy changes triggered by a 2010 state law called “First to the Top,” which included adopting new standards, mandating the use of test scores to evaluate teachers, and targeting resources to the neediest schools. But Huffman resigned in January amid sharp criticism about the way he rolled out those initiatives, and Haslam appointed Lipscomb University dean Candice McQueen to replace him.

So far, the new commissioner has stayed the course when it comes to teacher evaluations and other policies instituted by her predecessor. McQueen’s first test score announcement could hint at whether that will continue to be the case, or if she’ll call for new changes to influence next year’s scores.

2. This year’s test was out of step with what was supposed to happen in classrooms.

When Tennessee adopted the Common Core, it also planned for students to start taking an exam that was tied to the standards this year. But lawmakers — concerned about the fact that the standards and exam, known as PARCC, had been developed outside the state — mandated in 2014 that the state hold on to TCAP for another year, then switch to another Tennessee-only exam.

That means this year’s exam was not designed to test what students are expected to know. TCAP was never updated to reflect the standards, only culled to remove questions that explicitly contradict the Common Core.

Officials say the TCAP is still a fair measure of student learning. But they’ve also acknowledged the discrepancy between what the test asks students to know and what teachers are asked to teach.

“We are teaching standards that are challenging students’ higher order thinking skills, and we have a test that’s still a bubble test,” Erin O’Hara, then assistant commissioner for data and research, said last summer. “Until we transition to assessments that are based more fully on the Common Core, we’ll continue to see people struggle on how to adjust.”

That transition begins next year, when students are set to take a new exam known as TNReady that is costing the state $108 million to roll out. That test will be aligned with the Common Core for at least two years, until Tennessee adopts new standards in 2017 after a review that Haslam initiated last year.

“The new TNReady assessment is going to be significantly more meaningful, especially for students and parents, but also for teachers,” said Teresa Wasson, communications director of the advocacy group State Collaborative On Reforming Education, or SCORE. “It’s going to provide a fair opportunity for students to show skills that they’ve learned — real world skills like critical thinking and problem solving, rather than test-taking tricks.”

In other states, the switch to Common Core tests has been accompanied by a drop-off in scores. Wasson said that while that could happen in Tennessee, she was hopeful that the state’s strong showing in 2013 on a national exam that tests skills similar to those called for under the Common Core meant that it would not.

3. For the first time in years, students’ grades held no clues about test scores.

The state provided information about this year’s test scores to teachers so they could factor them into students’ end-of-course grades, as the law has required them to since 2010. But those “quick scores” did not offer indications of students’ TCAP performance the way they have in the past.

That’s because the state quietly changed the way it calculates quick scores between last year and this one. The scores that educators received last month were higher than many expected, given their students’ past test performance and current skill level.

Officials quickly clarified that because of a policy change that they had not communicated publicly, the higher quick scores did not necessarily represent higher proficiency rates. For example, a fourth-grader with a quick score of 88 — previously a suggestion of a “proficient” TCAP score — might still be considered “basic” on this year’s test.

As a result, educators have less information about test scores than they might have had in the past. And the confusion around quick scores means that state officials might have a harder time than in the past convincing Tennesseans that they are accurately describing changes in students’ skills.

4. The big picture is likely to show significant achievement gaps — and potentially to reflect efforts to close them.

As is true across the country, broad statewide trends tend to mask widely disparate performance among different groups of students.

Last year, the state’s achievement gaps between white students and non-white students narrowed slightly. But the performance gap between low-income students and other students did not shrink, and the gap between students with disabilities and their peers actually grew in a majority of subjects.

This year, the state rolled out a new program, Response to Instruction and Intervention, to target the lowest-performing students in hopes of closing those gaps. The new scores will offer insight into that program’s progress.

5. Lots of important information won’t come out until later.

Unusually, Tennessee releases test scores in three waves each year. The first data dump shows only statewide numbers, which are useful for assessing broad trends but not for answering more detailed questions about local change.

District- and school-level results will be released in the coming weeks. Those will allow for a closer analysis of how individual teachers and students performed, and of how local school improvement efforts, such as the Innovation Zone in Memphis and the state-run Achievement School District, are going.

And an update about how Tennessee students are faring compared to students in other states won’t arrive until this fall, when the latest results of a test known as the nation’s report card are released. That exam, the National Assessment of Educational Progress or NAEP, is given to students in all 50 states and has been the only way to compare students in an era of state-specific annual tests. The last time scores came out, in 2013, Tennessee students had made the biggest gains in the country, although students’ absolute scores were still low. Whether Tennessee continues to set the pace now that many other states have begun testing students on the Common Core standards, which more closely reflect what NAEP assesses, is a big question.

Update: The scores are now available. Read about them here. 

What are you looking for in this year’s statewide scores? Let us know in the comments.

failing grade

Why one Harvard professor calls American schools’ focus on testing a ‘charade’


Harvard professor Daniel Koretz is on a mission: to convince policymakers that standardized tests have been widely misused.

In his new book, “The Testing Charade,” Koretz argues that federal education policy over the last couple of decades — starting with No Child Left Behind, and continuing with the Obama administration’s push to evaluate teachers in part by test scores — has been a barely mitigated disaster.

The focus on testing in particular has hurt schools and students, Koretz argues. Meanwhile, Koretz says the tests are of little help for accurately identifying which schools are struggling because excessive test prep inflates students’ scores.

“Neither good intentions nor the value of well-used tests justifies continuing to ignore the absurdities and failures of the current system and the real harms it is causing,” Koretz writes in the book’s first chapter.

Daniel Koretz, Harvard Graduate School of Education

His skepticism will be welcome to families of students who have opted out of state tests across the country and others who have led a testing backlash in recent years. That sentiment helped shape the new federal education law, ESSA.

Koretz has another set of allies in some conservative charter and voucher advocates, including — to an extent — Secretary of Education Betsy DeVos, who criticized No Child Left Behind in a recent speech. “As states and districts scrambled to avoid the law’s sanctions and maintain their federal funding, some resorted to focusing specifically on math and reading at the expense of other subjects,” she said. “Others simply inflated scores or lowered standards.”

But national civil rights groups and some Democratic politicians have made a different case: that it’s the government’s responsibility to continue to use test scores to hold schools accountable for serving their students, especially students of color, poor students, and students with disabilities. (ESSA continues to require testing in grades three through eight and requires states to identify their lowest-performing schools, largely using test scores.)

We talked to Koretz about his book and asked him to explain how he reached his conclusions and what to make of research that paints a more positive picture of tests and No Child Left Behind.

The interview has been edited for clarity and length.

Do you want to walk me through the central thesis of your book?

The reason I wrote the book is really the subtitle: we’re “pretending to make schools better.”

Most of the bad news that’s in this book is old news. We’ve been collecting evidence of various kinds about the impact of the very heavy handed, high-stakes testing that we use in this country for a long time. I lost patience with people pretending that these facts aren’t present. So I decided it would be worth writing a book that summarizes the evidence both good and bad about the effects of test-based accountability. When you do that, you end up with an awful lot on the bad side and not very much on the good side.

Can you talk about some of the bad effects?

There are a few that are particularly important. One is absolutely rampant bad test prep. It’s just everywhere. One of the consequences of that is that test scores are often very badly inflated.

There aren’t all that many studies of this because it’s not really a welcome suggestion. When you go to the superintendent and say, “Gee, I’d like to see whether your scores are inflated,” they rarely say, “Boy, we’ve been waiting for you to show up.” There aren’t that many studies, but they’re very consistent. The inflation that does show up is sometimes absolutely massive. Worse, there is growing evidence that that problem is more severe for disadvantaged kids, creating the illusion of improved equity.

Another is increasingly widespread cheating. We, of course, will never know just how widespread because there aren’t resources to examine the data from 13,000 school districts. Everyone knows about Atlanta, a few people know about El Paso, but that’s just the tip of the iceberg.

There’s obviously also — and perhaps this should be on the same par — enormous amounts of stress for teachers, for kids, and for parents. That’s the bad side.

I want to ask a little more about test score inflation. What is the strongest evidence for inflation? And let me give you two pieces that to me seem like potentially countervailing evidence. One piece is when I’m looking at research on school turnaround — like the most recent School Improvement Grant program and also turnaround efforts in New York City — these schools have been under intensive pressure to raise test scores. And yet their test score gains on high-stakes tests have been pretty modest at best. The other example is the Smarter Balanced exam. The scores on the Smarter Balanced exam don’t seem to be going up. If anything, they’re going down.

The main issue is that score inflation doesn’t occur in the same amount everywhere. You’ve come up with two examples where there is apparently very little. There are other examples that are much worse than the aggregate data suggest.

In the case of Smarter Balanced, I would wait and see. Score inflation can only occur when people become sufficiently aware of predictable patterns in the test. You can’t game a test when you don’t know what irrelevant things are going to recur, and that just may take some time.

I’m wondering about your take on why some of the strongest advocates for test-based accountability have been national civil rights groups.

One of the rationales for some of the most draconian test-based accountability programs we’ve had has been to improve equity. If you go back to the enactment of NCLB, you had [then-Massachusetts Sen.] Teddy Kennedy and [then-California Rep.] George Miller actively lobbying their colleagues in support of a Republican bill. George Miller summed that up in one sentence in a meeting I went to. He said, “It will shed some light in the corners.” He said that schools had been getting away with giving lousy services to disadvantaged kids by showing good performance among advantaged kids, and this would make it in theory impossible to do that.

Even going back before NCLB, I think that’s why there was so much support in the disability community for including disabled kids in test-based accountability in the 1990s — so they couldn’t be hidden away in the basement anymore. I think that’s absolutely laudable. It’s the thing I praise the most strongly about NCLB.

It just didn’t work. That’s really clear from the evidence.

I think the intention was laudable and I think the intention was why high-stakes testing has gotten so much support in the minority community, but it just has failed.

You mention in your book probably the most widely cited study on the achievement effects of No Child Left Behind, showing that there were big gains in fourth grade math and some gains in eighth grade math, but there wasn’t anything good or bad in reading.

Pretty much. There was a little bit of improvement in some years in reading but nothing to write home about.

So the math gains — and that was on the low-stakes federal NAEP test — they’re just not worth it in your view?

I think the gains are real. But there are some reasons not to be terribly excited about these. One is that they don’t persist. They decline a little bit by eighth grade, they disappear by the time kids are out of high school. We don’t have good data about kids as they graduate from high school, but what we do have doesn’t show any improvement.

The biggest reason I’m not as excited as some people are about those gains is we’ve had evidence going back to the 1980s that one of the responses that teachers have had to test-based accountability is to take time out of untested subjects and to put it into math and reading. We don’t know how much of that gain in math is because people are teaching math better and how much is because kids aren’t learning about civics.

That’s, in my view, not enough to justify all of the stuff on the other side of the ledger.

When I’ve looked at some studies on the impact of NCLB on students’ social-emotional skills, the impact on teachers’ attitudes in the classrooms, and the impact on voluntary teacher turnover, they haven’t found any negative effects. They also haven’t found positive effects in most cases. But that would seem to at least in one sense undermine the argument that NCLB had big harmful effects on these other outcomes.

I haven’t seen those studies, but I don’t think what you describe does undermine it. What I would like to see is an analysis of long-term trends not just on teacher attrition but on teacher selection. A lot of what I have heard has really been, frankly, anecdotal. I was once a public school teacher and teaching now is utterly unlike what it was when I taught. It seems unlikely that that had no effect on who opts in and who opts out of teaching.

I don’t have evidence of this but I suspect that to some extent different types of people are selecting into teaching now than were teaching 30 years ago.

Can you talk about what you see as good versus bad test prep?

Something that Audrey Qualls at the University of Iowa said was, “A student has only mastered something if she can do it when confronted with unfamiliar particulars.”

Think about training pilots — you would never train pilots by putting them in a simulator and then always running exactly the same set of conditions because next time you were in the plane and the conditions were different you’d die. What you want to know is that the pilot has enough understanding and a good enough command of the physical motions and whatnot that he or she can respond to whatever happens to you while you’re up there. That’s not all that distant an analogy from testing.

Bad test prep is test prep that is designed to raise scores on the particular test rather than give kids the underlying knowledge and skills that the test is supposed to capture. It’s absolutely endemic. In fact, districts and states peddle this stuff themselves.

I take it it’s very hard to quantify this test prep phenomenon, though?

It is extremely hard, and there’s a big hole in the research in this area.

Let’s turn from a backward-looking to a forward-looking discussion. What is your take on ESSA? Do you think it’s a step in the right direction?

This may be a little bit simplistic, but I think of ESSA as giving states back a portion of the flexibility they had before No Child Left Behind. It doesn’t give them as much flexibility as they had in 2000.  

It has the potential to substantially reduce pressure, but it doesn’t seem to be changing the basic logic of the system, which is that the thing that will drive school improvement is pushing people to improve test scores. So I’m not optimistic.

One of the things that I argue very strongly at the end of the book is that we need to look at a far broader range of, not just outcomes, but aspects of schooling to create an accountability system that will generate more of what we want. ESSA takes one tiny step in that direction: it says you have to have one measure beyond testing and graduation rates. But if you read the statute it almost doesn’t matter what that measure is. The one mandate is that it can’t count as much as test scores — that’s written in the statute. The notion that it means the same thing to monitor the quality of practice or to monitor attendance rates is just absurd.

As I’m sure you know, research — including from some of your colleagues at Harvard — has shown that so-called “no-excuses” charter schools in places like Boston, Chicago, and New York City have led to substantial test score gains and in some cases improvements in four-year college enrollment. Are you skeptical that those gains are the result of genuine learning?

It depends on which test you’re talking about. Some of the no-excuses charter schools drill kids on the state test, so I don’t trust the state test scores for some of those schools. I think it’s entirely plausible that some of those schools are going to affect long-term outcomes because they’re in some cases replacing a very disorderly environment with a very orderly one. In fact, I would say too orderly by quite a margin.

But those reforms are much bigger than just test-based accountability or just the control structure we call charters. It’s a whole host of different things that are going on: different disciplinary policies, different kinds of teacher selection, different kinds of behavioral requirements, all sorts of things.

A lot of the discussion around accountability, including in your book, is about the measures we should be using to identify schools. I’m interested in your take on what happens when a school is identified by whatever system — perhaps by the holistic system you described in the book — as low performing.

The first step is to figure out why is it bad. I would use scores as an opening to a better evaluation of schools. If scores on a good test are low, something is wrong, but we don’t know what. Before we intervene we ought to find out what’s wrong.

This is the Dutch model: school inspections are concentrated on schools that show signs of having problems, because that’s where the payoff is. I would want to know what’s wrong and then you can design an alternative. In some cases, it may be the teaching staff is too weak. It may be in some cases the teaching staff needs supports they don’t have. It may be, as in the case of Baltimore, that they need to turn the heat on. Who knows? But I don’t think we can design sensible interventions until we know what the problems are.

Testing reboot

ACT do-overs pay off for 40 percent of Tennessee high school seniors who tried


Tennessee’s $2 million investment in helping high school seniors retake the ACT test appears to be paying off for a second year in a row.

Almost three-fourths of the class of 2018 took the national college entrance test last fall for a second time, doubling the participation rate in Tennessee’s ACT Senior Retake Day for public schools. State officials announced Wednesday that 40 percent of the do-overs resulted in a higher overall score.

Of the 52,000 students who participated in the initiative’s second year, 2,333 raised their average composite to a 21 or higher, making them eligible for HOPE Scholarship funds of up to $16,000 for tuition. That’s potentially $37 million in state-funded scholarships.
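(A rough back-of-the-envelope check of that figure, assuming every one of those 2,333 students ultimately claims the full $16,000 award: \(2{,}333 \times \$16{,}000 \approx \$37.3\text{ million}\), which rounds to the roughly $37 million the state cites.)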

In addition, Tennessee students are expected to save almost $8 million in remedial course costs — and a lot of time — since more of them hit college-readiness benchmarks that allow direct enrollment into credit-bearing coursework.

Beyond the benefits to students, the early results also suggest that Tennessee is inching closer to raising its ACT average to the national average of 21 by 2020, one of four goals in Tennessee’s five-year strategic plan.

After years of mostly stagnant scores, the state finally cracked 20 last year when the class of 2017 scored an average of 20.1, buoyed in part by the senior retake strategy.

(The ACT testing organization will release its annual report of state-by-state scores in August, based on the most recent test taken. Tennessee will release its own report based on the highest score, which is what colleges use.)

Tennessee is one of 13 states that require juniors to take the ACT or SAT and, in an effort to boost scores, became the first in 2016 to pay for public school seniors to retake the ACT. Only a third of that class took advantage of the opportunity, but enough students scored higher to make it worth expanding the voluntary program in its second year.

Last fall, the state worked with local districts to make it easier for seniors to participate. The retake happened during the school day in students’ own schools, instead of on a Saturday morning at an ACT testing site.

Education Commissioner Candice McQueen said the expanded access has paid off tenfold. “Now, more Tennessee students are able to access scholarship funding, gain admission to colleges and universities, and earn credit for their work from day one,” she said.

Of the state’s four urban districts, Metropolitan Nashville Public Schools, which serves Davidson County, increased its average composite score the most (up .5 to 18.4), followed by Hamilton County (up .3 to 19.4) and Shelby County Schools (up .2 to 17.1). Knox County Schools and the state-run Achievement School District, which operates high schools in Memphis, saw slight drops from their retakes and will retain the higher average scores from their earlier test date.

Statewide, 10 school systems logged a half point or more of growth from their junior test day to the senior retake:

  • Anderson County, up .6 to 19.3
  • Arlington City, up .6 to 22.5
  • Collierville City, up .6 to 24.3
  • Davidson County, up .5 to 18.4
  • Franklin County, up .6 to 20.1
  • Haywood County, up .5 to 17.5
  • Henderson County, up .5 to 21.2
  • Humboldt City, up .8 to 17.4
  • Maryville City, up .5 to 22.1
  • Williamson County, up .6 to 24.1

Tennessee set aside up to $2.5 million to pay for its 2017 Retake Day, and Gov. Bill Haslam is expected to fund the initiative in the upcoming year as well. The state already pays for the first ACT testing day statewide, which it’s done since 2009.

Correction: January 17, 2018: This story has been corrected to show that, while the state set aside $2.5 million for its ACT retake initiative, it spent only $2 million on the program this fiscal year.