‘Value-added’ evaluation of teachers: A flawed model?

A recent report from the Economic Policy Institute raises questions about the current push to closely tie decisions about teacher evaluation, discipline and pay to the gains that students make in standardized test scores – and, secondarily, about the value of making teacher effectiveness scores public.

The report, titled Problems with the Use of Student Test Scores to Evaluate Teachers, takes aim at “value-added” models, which measure students’ test-score improvement from year to year while adjusting for socio-economic status and other factors.
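
To make the distinction concrete, here is a minimal sketch of the value-added idea, using synthetic data and hypothetical variable names; it is not EPI’s, Indiana’s, or any district’s actual model. Current scores are predicted from prior scores plus a socio-economic control, and a teacher’s “value added” is estimated as the average amount by which that teacher’s students beat the prediction.

```python
# Minimal sketch of the value-added idea, not any district's actual model.
# A student's current score is predicted from the prior-year score plus
# background controls; a teacher's "value added" is the average amount by
# which that teacher's students beat the prediction.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: prior score, a socio-economic indicator, teacher id.
n = 300
prior = rng.normal(500, 50, n)          # last year's scaled score
ses = rng.normal(0, 1, n)               # socio-economic control (synthetic)
teacher = rng.integers(0, 10, n)        # 10 teachers, ~30 students each
current = 0.8 * prior + 5 * ses + rng.normal(100, 20, n)

# Fit current ~ prior + ses with ordinary least squares.
X = np.column_stack([np.ones(n), prior, ses])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta           # what the controls can't explain

# Value-added estimate: mean residual of each teacher's students.
for t in range(10):
    print(f"teacher {t}: value-added ≈ {residual[teacher == t].mean():+.1f}")
```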

The Indiana Department of Education’s “growth model” for measuring student and teacher performance is something of a poor cousin to a value-added model. It compares a student’s one-year growth in test scores with that of other students who started at the same place, but it doesn’t adjust for non-classroom factors that might influence how well kids perform.
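
A growth model of the kind just described can be sketched as a simple peer comparison: rank each student’s one-year gain against students who started from a similar score, with no background controls at all. The data and the peer band below are illustrative, and Indiana’s actual computation is more elaborate.

```python
# Rough sketch of the "growth model" idea described above: rank each
# student's one-year gain against peers who started at the same place,
# with no adjustment for out-of-classroom factors. The band width and
# data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
prior = rng.normal(500, 50, 1000)
current = prior + rng.normal(20, 15, 1000)

def growth_percentile(i, band=10):
    """Percentile of student i's gain among peers with a similar prior score."""
    peers = np.abs(prior - prior[i]) <= band
    gains = current[peers] - prior[peers]
    my_gain = current[i] - prior[i]
    return 100.0 * (gains < my_gain).mean()

print(f"student 0 grew more than {growth_percentile(0):.0f}% of similar peers")
```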

The authors of the EPI report are a crew of heavy hitters in the world of education policy and research. They include Linda Darling-Hammond, a well-known education researcher at Stanford; Diane Ravitch, who was an assistant secretary of education in the first Bush Administration; and the institute’s Richard Rothstein, a former national education columnist with the New York Times and the author of several books on student achievement.

Citing studies by the National Research Council, Educational Testing Service and others, they argue that value-added modeling produces results that are too unstable and inconsistent for high-stakes decisions about whether teachers will be fired or promoted. Teachers who are effective in one year, according to value-added growth data, may appear to be ineffective the next year, and vice versa.

“One study found that across five large urban districts, among teachers who were ranked in the top 20 percent of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40 percent,” the report says.

The authors point to several problems with value-added models. The number of students in a classroom is too small to produce statistically reliable data, they say. Multiple factors, some of them beyond a classroom teacher’s control, influence what students learn. State tests aren’t designed to measure growth at the top and bottom of the performance scale. And students are not randomly assigned to teachers – in some schools, one grade-level teacher may get all the “difficult” kids and would have a hard time adding value through improved test scores across the board.
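
The small-numbers point is easy to illustrate with a simulation. In the sketch below, every teacher’s true effect is held fixed across two years, yet measurement noise from a roughly 25-student class reshuffles the rankings on its own, producing churn of the kind the report describes. All of the numbers are invented and are not calibrated to any real test.

```python
# Sketch of the small-sample instability argument: even when teachers'
# true effects never change, noise from ~25-student classes reshuffles
# the rankings from one year to the next. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_teachers, class_size = 100, 25
true_effect = rng.normal(0, 3, n_teachers)       # stable across years

def observed_effect():
    # Each teacher's measured effect = true effect + mean of student noise.
    noise = rng.normal(0, 15, (n_teachers, class_size)).mean(axis=1)
    return true_effect + noise

year1, year2 = observed_effect(), observed_effect()
top1 = year1 >= np.quantile(year1, 0.8)          # top 20% in year 1
top2 = year2 >= np.quantile(year2, 0.8)
stay = (top1 & top2).sum() / top1.sum()
print(f"of year-1 top-20% teachers, {stay:.0%} are top-20% again in year 2")
```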

The report says teacher evaluations certainly need to be improved, and there is a place for using test-score improvement to assess how teachers are doing. But it argues that relying too heavily on such data would produce “perverse and unintended consequences.” For example, teachers could find themselves competing for performance-based raises, rather than collaborating for the benefit of all students.

“Some states are now considering plans that would give as much as 50 percent of the weight in teacher evaluation and compensation decisions to scores on existing tests of basic skills in math and reading. Based on the evidence, we consider this unwise,” the report says.

Indiana is one of those states. The state Department of Education’s “Fast Forward” plan says teacher evaluations should be at least 51 percent based on performance data derived from student test scores. The Obama Administration has pushed the approach through its Race to the Top competition. Its “blueprint” for reauthorizing the Elementary and Secondary Education Act calls for measures of teacher effectiveness “that are based in significant part on student growth and also include other measures, such as classroom observations of practice.”

The Economic Policy Institute report doesn’t directly address the controversy about a recent Los Angeles Times investigative reporting project that used test-score data and a value-added model to rate the effectiveness of teachers in the Los Angeles Unified School District. The paper created a searchable database that lets parents check the scores of their children’s teachers.

The LA teachers’ union was upset, but others, including Education Secretary Arne Duncan, praised the project. For a discussion of the disclosure issue, see Rob Manwaring’s blog at the Education Sector website. He supports disclosure of value-added data for schools but questions its usefulness for rating individual teachers, citing the same issues raised in the Economic Policy Institute report.
