Following up on last week’s post, here are some articles and studies about the pros and cons of using test-score data to measure the effectiveness of teachers.
The topic is timely, because Indiana Gov. Mitch Daniels and Superintendent of Public Instruction Tony Bennett want to make such data a major part of teacher evaluations. Evaluations that rely on student test scores, they say, should be used “to inform decisions about hiring, firing, professional development, compensation, placement, transfers and reductions in force.”
This is a national issue, and much is being written about assessing teacher effectiveness with “value-added” measures, which employ sophisticated statistical techniques to estimate how much individual teachers improve their students’ test scores. (Indiana will apparently use a “growth model,” a less complex measure than value-added, to gauge teacher effectiveness.)
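In rough outline, a value-added estimate compares each student’s actual score with a score predicted from prior performance and credits the teacher with the average surplus or deficit. The sketch below is a deliberately stripped-down illustration of that idea, not the model Indiana or any researcher actually uses; the function names and the single prior-year-score predictor are assumptions made for the example.

```python
# Illustrative sketch only: real value-added models use many more
# covariates, multiple years of data, and statistical shrinkage.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def value_added(students):
    """students: list of (teacher, prior_score, current_score) tuples.

    Predict each current score from the prior score, then average the
    prediction errors (residuals) over each teacher's students.
    """
    a, b = fit_line([s[1] for s in students], [s[2] for s in students])
    residuals = {}
    for teacher, prior, current in students:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(r) / len(r) for t, r in residuals.items()}
```

A teacher whose students consistently beat their predicted scores gets a positive rating; one whose students fall short gets a negative one. Everything rides on how well the prediction captures what the students would have scored anyway, which is exactly where the criticisms below come in.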
An article in District Administration magazine provides an overview. It connects value-added analysis with issues such as merit pay and teacher retention and examines how the approach has been used in New York, Houston and Winston-Salem, N.C.
A New York Times story reveals problems with a teacher ranking system in New York City, where the school district is caught in a battle between the news media and the teachers’ union over whether value-added rankings for individual teachers should be made public.
One concern is that value-added rankings of teachers vary considerably from year to year, calling their reliability into question. Typically, only about one-third of the teachers who rank in the top 25 percent in one year will again rank in the top quarter the next year, the Times reports.
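That kind of churn is what a noisy measurement layered on top of stable underlying differences would produce. As a purely illustrative sketch (the teacher “effects” and noise levels here are invented, not drawn from the Times data), the following simulates two years of noisy ratings and counts how many top-quartile teachers stay in the top quartile:

```python
import random

def quartile_persistence(true_effects, noise_sd, seed=0):
    """Fraction of teachers in the top quartile of noisy year-1 ratings
    who also land in the top quartile of independently noisy year-2 ratings.
    """
    rng = random.Random(seed)
    n = len(true_effects)
    # Each observed rating = stable true effect + fresh measurement noise.
    year1 = [t + rng.gauss(0, noise_sd) for t in true_effects]
    year2 = [t + rng.gauss(0, noise_sd) for t in true_effects]
    k = n // 4
    top1 = set(sorted(range(n), key=lambda i: year1[i], reverse=True)[:k])
    top2 = set(sorted(range(n), key=lambda i: year2[i], reverse=True)[:k])
    return len(top1 & top2) / k
```

With no noise, persistence is 100 percent; as the noise grows relative to the true differences between teachers, persistence falls toward the 25 percent you would get from pure chance, so a one-third figure implies ratings dominated by noise.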
A report from the Brookings Institution looks at the positive side. Sure, value-added is flawed, several Brookings education scholars argue, but no more flawed than most other methods of evaluating employees. They say it provides useful information, even if it sometimes leads to false conclusions about individual teachers.
On the other hand, a study by NYU professor Sean Corcoran for the Annenberg Institute for School Reform is more critical. Corcoran writes that value-added assessments “are, at best, a crude indicator of the contribution that teachers make to their students’ academic outcomes.”
Corcoran points out that most state tests – such as Indiana’s ISTEP+ – are designed to measure whether students meet grade-level standards, not whether they improve along a consistent scale, so it can be misleading to use them to judge year-to-year gains by students. Furthermore, annual standardized tests are typically given in grades 3-8 for English and math and provide no information about, for example, most early elementary, art, music, science and social studies teachers.
Finally, here’s a lengthy report of preliminary findings from the Bill & Melinda Gates Foundation’s Measures of Effective Teaching Project, which is premised partly on the idea that teacher evaluations should include student achievement gains “when feasible.”
The report concludes that using value-added measures to evaluate teachers, while imperfect, is better than current evaluations that don’t distinguish between effective and ineffective teachers. “Better information will lead to fewer mistakes, not more,” the authors write. “Better information will also allow schools to make decisions which will lead to higher student achievement.”