In education policy discussions, there is little disagreement that teachers matter. When the conversation shifts to measuring teacher performance, however, consensus is harder to find.
Despite the lack of definitive research on how best to assess teachers, officials in states across the nation have relied on classroom observation data and complex statistical models designed to quantify a teacher’s impact on student achievement.
Pennsylvania’s approach to teacher evaluation is now fully defined and applies to every school building in the state. It draws on multiple measures of teacher effectiveness, including classroom observations, building-level student performance and attendance data, and data selected by schools from a list of approved measurement options.
The system also includes value-added measures (VAM), a particular point of contention in policy debates. When used to evaluate teachers, VAM uses students’ previous assessment scores to predict their performance on future assessments; a teacher’s “value added” is the difference between students’ actual and predicted results. Many have expressed concern about relying on high-stakes student testing to assess teacher effectiveness.
RFA’s latest issue brief details some of the factors that led to the development of teacher evaluation systems statewide and in Pittsburgh, along with the research and policy considerations facing officials and stakeholders. It builds on an earlier brief on this issue, released in September 2011, when the Pennsylvania Department of Education was piloting its system in one in five public schools across the state.