‘Value-added’ evaluation of teachers: A flawed model?

A recent report from the Economic Policy Institute raises questions about the current push to closely tie decisions about teacher evaluation, discipline and pay to the gains that students make in standardized test scores – and, secondarily, about the value of making teacher effectiveness scores public.

The report, titled Problems with the Use of Student Test Scores to Evaluate Teachers, takes aim at “value-added” models, which rely on measures of test-score improvement from year to year and make allowances for the students’ socio-economic status and other factors.

The Indiana Department of Education’s “growth model” for measuring student and teacher performance appears to be a poor cousin of a value-added model. It compares a student’s one-year growth in test scores with that of other students who started at the same place, but it doesn’t adjust for non-classroom factors that might influence how well kids perform.

The authors of the EPI report are a crew of heavy hitters in the world of education policy and research. They include Linda Darling-Hammond, a well-known education researcher at Stanford; Diane Ravitch, who was an assistant secretary of education in the first Bush Administration; and the institute’s Richard Rothstein, a former national education columnist with the New York Times and the author of several books on student achievement.

Citing studies by the National Research Council, the Educational Testing Service and others, they argue that value-added modeling produces results that are too unstable and inconsistent for high-stakes decisions about whether teachers will be fired or promoted. Teachers who are effective in one year, according to value-added growth data, may appear to be ineffective the next year, and vice versa.