Board members favor counting test scores more than growth

Indiana education officials took a step forward by deciding in 2015 to count growth as equal to proficiency when using test scores to calculate A-to-F school grades. Now it sounds like members of the State Board of Education want to turn back the clock.

At least five of the 11 members said last week that they favor giving more weight to proficiency – the number of students who pass state-mandated tests – than to year-to-year growth.

“I think we reached some consensus on some core values. Proficiency is more important than growth,” board member David Freitas said, according to a story in the Indianapolis Star.

“Growth, to me, is much less important than proficiency,” added B.J. Watts, another board member. Members Tony Walker, Byron Ernest and Kathleen Mote agreed, according to the Star.

Freitas and Watts made the same argument but didn’t prevail when the board approved the current A-to-F formula. Mote and Ernest weren’t on the board at the time. Walker missed the meeting.

Superintendent of Public Instruction Jennifer McCormick favors keeping the equal weight for growth and proficiency, said Adam Baker, spokesman for the Indiana Department of Education. But she would probably agree to a formula that gave a little more weight to proficiency than to growth, he said.

Until 2014-15, Indiana relied heavily on test-score proficiency in determining grades; growth wasn’t a factor. The result was what you’d expect: Low-poverty schools were reliably rewarded with As, while high-poverty schools struggled to avoid getting Fs. In effect, schools with poor students were labeled failing schools.

No comparing school grades to previous years

Indiana school grades for 2015-16 were released this week, marking the first time the state has used a new grading system designed to count test-score growth as much as proficiency.

First, let’s note that comparing the new grades to grades from the previous year is meaningless. For one thing, we’re using a new system: It’s supposed to produce different results. Comparing the newly released grades to the previous year’s grades is comparing apples to oranges.

But more to the point, the previous year’s grades were largely bogus. They would have been a lot worse, but lawmakers passed “hold-harmless” legislation that said no school could get a lower grade in 2014-15 than it did in 2013-14.

Remember that Indiana adopted new, more rigorous academic standards in 2014-15, so the ISTEP exams got a lot tougher. Before the hold-harmless legislation passed, state officials said more than half of all schools could receive D’s or F’s. The Indiana Department of Education refused to make public the grades that schools actually would have received last year, even though the state public access counselor said it should.

So if you see that a certain school’s grade dropped from an A to a B this year … well, technically that may be correct. But there’s a good chance the school earned a D or F in 2014-15 but had its grade boosted by the legislature.

The debate on tying test scores to evaluations: teachers vs. policy analysts

The New York Times did a “room for debate” feature this week on the growing practice of using student test scores to evaluate teachers. And it included a couple of teachers among the eight people selected to debate the issue – something that seems to almost never happen when education policy is discussed.

Not surprisingly, the teachers weren’t exactly crazy about the idea.

“This testing-students-to-grade-teachers initiative is not coming out of what people who actually work with children in schools know,” writes New York City teacher Francesa Burns. “It is not even research-based … Instead, the plans are based on politics and sound bites, corporate sleight of hand … and high talk. In short: nothing.”

Molly Putnam, a high-school teacher in Brooklyn, says the money going to develop more tests should instead “be spent on methods that have been proven to improve teacher quality and retention rates — like intensive student teaching and training in lesson planning, instruction and classroom management. A culture change would also mean having principals and senior teachers become even more engaged in mentoring and guiding younger teachers.”

A couple of policy analysts, Kevin Carey of Education Sector and Marcus Winters of the Manhattan Institute, argue that it only makes sense to use test scores to gauge how well teachers do their job. Carey approvingly cites a recent National Education Association policy statement that allows for using “valid, reliable, high quality standardized tests” to help evaluate teachers.

But another analyst, Michael Petrilli of the Thomas B. Fordham Institute, says using centralized, bureaucratic tests to evaluate teachers is like “attacking a fly with a sledgehammer.” His alternative: Give principals the power to fire bad teachers and increase pay for good teachers. “If we can’t trust school leaders to identify their best and worst teachers, then the whole project of school reform is sunk,” he says.

Indiana will use test results as a “significant” part of teacher evaluations beginning with the 2012-13 school year. But the state has a long way to go to figure out how this will work. To that end, the Department of Education chose six school districts – Bloomfield, Greensburg, Fort Wayne, Beech Grove, Bremen and Warren Township (Indianapolis) – to test out the new evaluation systems in 2011-12.

“Things are changing in Indiana in education,” Bloomfield superintendent Dan Sichting tells Bethany Nolan of the Bloomington Herald-Times. “Some people would argue it’s changing for the better, others not for the better. But if we sit back and don’t participate, we’re not going to have any kind of input in what the final product will be.”

Note to journalists: It could make an interesting project to follow one of those districts over the next year and track the results, good or bad, of changing how teachers are evaluated.

‘Value-added’ evaluation of teachers: A flawed model?

A recent report from the Economic Policy Institute raises questions about the current push to closely tie decisions about teacher evaluation, discipline and pay to the gains that students make in standardized test scores – and, secondarily, about the value of making teacher effectiveness scores public.

The report, titled Problems with the Use of Student Test Scores to Evaluate Teachers, takes aim at “value-added” models, which rely on measures of test-score improvement from year to year and make allowances for the students’ socio-economic status and other factors.

The Indiana Department of Education’s “growth model” for measuring student and teacher performance appears to be a poor cousin of a value-added model. It compares a student’s one-year growth in test scores with that of other students who started at the same place, but it doesn’t adjust for non-classroom factors that might influence how well kids perform.

The authors of the EPI report are a crew of heavy hitters in the world of education policy and research. They include Linda Darling-Hammond, a well-known education researcher at Stanford; Diane Ravitch, who was an assistant secretary of education in the first Bush Administration; and the institute’s Richard Rothstein, a former national education columnist with the New York Times and the author of several books on student achievement.

Citing studies by the National Research Council, Educational Testing Service and others, they argue that value-added modeling produces results that are too unstable and inconsistent for high-stakes decisions about whether teachers will be fired or promoted. Teachers who are effective in one year, according to value-added growth data, may appear to be ineffective the next year, and vice versa.