Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains
The study, by Peter Schochet and Hanley Chiang at Mathematica Policy Research (which develops value-added measurement systems for school districts, including the District of Columbia Public Schools, where value-added teacher evaluations figured in the recent firings), has some interesting findings:
- "Type I and II error rates for comparing a teacher’s performance to the average are likely to be about 25 percent with three years of data and 35 percent with one year of data." [Type I errors are "false positives" -- concluding a teacher's performance differs from the average when it really doesn't; Type II errors are "false negatives" -- concluding it doesn't differ when it really does.]
- "These results strongly support the notion that policymakers must carefully consider system error rates in designing and implementing teacher performance measurement systems based on value-added models, especially when using these estimates to make high-stakes decisions regarding teachers (such as tenure and firing decisions)."
- And this powerful statement: "Our results are largely driven by findings from the literature and new analyses that more than 90 percent of the variation in student gain scores is due to the variation in student-level factors that are not under the control of the teacher."
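The intuition behind those error rates can be checked with a back-of-the-envelope simulation. The sketch below is illustrative only, not the paper's actual model: it assumes (hypothetically) that teacher effects and student-level noise are normally distributed, with the noise variance set so that about 90 percent of the variation in gain scores comes from student-level factors, and it counts how often a truly above-average teacher's observed mean gain nonetheless falls at or below average. The class sizes (25 students per year) and standard deviations are invented for the example.

```python
import random
import statistics

random.seed(0)

def misclassification_rate(n_students, true_effect_sd=0.1, noise_sd=0.3,
                           n_teachers=10_000):
    """Fraction of truly above-average teachers whose observed mean
    student gain comes out at or below the average (zero).

    With these illustrative numbers, noise variance (0.09) is 90% of
    total gain-score variance (0.09 + 0.01), echoing the study's claim.
    """
    errors = 0
    above_average = 0
    for _ in range(n_teachers):
        effect = random.gauss(0, true_effect_sd)  # teacher's true effect
        if effect <= 0:
            continue  # only score teachers who are truly above average
        above_average += 1
        # Observed mean gain across the teacher's students.
        mean_gain = statistics.fmean(
            random.gauss(effect, noise_sd) for _ in range(n_students))
        if mean_gain <= 0:
            errors += 1
    return errors / above_average

# More student-years of data should shrink the misclassification rate.
rate_1yr = misclassification_rate(n_students=25)   # roughly one class-year
rate_3yr = misclassification_rate(n_students=75)   # three class-years
print(rate_1yr, rate_3yr)
```

Even with three years of data, a nontrivial share of genuinely above-average teachers gets misclassified in this toy setup, which is the basic point the study makes with far more care.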
jd
Some related links on the report:
Study: Error rates high when student test scores used to evaluate teachers from The Answer Sheet blog (very good blog!)
Rolling Dice: If I roll a "6" you're fired! from the School Finance 101 blog.