Assessing the Assessment: NYC's School Survey

Katie Brohawn

Katie Brohawn is TASC's Director of Research.

Research Corner

In an era of high-stakes testing for accountability purposes and a seemingly laser-like focus on math and English, it is easy to overlook other factors that indicate a school’s growth or decline. However, there is a recent trend toward collecting and analyzing non-test-score data to measure school growth. At the individual student level, this is perhaps most apparent in measures of social-emotional learning; at the school level, researchers and policymakers are building interest in school climate.

The New York City Department of Education (NYC DOE) is quite familiar with the topic of school climate. For more than five years, NYC DOE has administered the annual School Survey to teachers, parents and students in grades 6-12. Scores are collected in four main categories: safety, communication, engagement and academic expectations. These survey results are the only non-academic factors on NYC’s widely known Progress Report.

Given the size of the NYC public school system, the survey is one of the largest in the country (second only to the U.S. Census), with nearly 1 million responses annually. Due to the incredible investment of both time and money involved in administering the survey, as well as the heavy weight of accountability attached to results, it was important for the DOE to assess the reliability and validity of the measure.

In their new research brief, Strengthening Assessments of School Climate: Lessons from the NYC School Survey, Dr. Lori Nathanson and colleagues from the Research Alliance for New York City Schools report on an analysis of three years’ worth of NYC School Survey data. Their goal was to assess the degree to which the survey results are generalizable, valid and reliable. They produced four key findings:

  1. Given the high (and ever increasing) response rates by teachers, students and (to a lesser degree) parents, the public can be confident that responses represent opinions of the population at large.
  2. The existing survey items were reliable indicators of the four categories. However, the categories were nearly indistinguishable from one another, supporting the possibility of combining all four into a single "school environment" measure in future surveys.
  3. Given the high correlation among items, the survey could be substantially shorter while still reaching the same conclusions.
  4. Teacher responses were best at distinguishing between schools, thus supporting the possibility of weighting teachers’ opinions more heavily when it comes to using results for accountability purposes.

Highlighting the benefits of research-practice partnerships and data-driven decision making, the brief concludes with a policymaker-perspective response from the DOE’s Director of the School Survey, Lauren Sypek. She explains how these important findings have shaped the latest version of the survey, both improving respondents’ experience (e.g., reducing the number of questions and making the survey more user-friendly) and making the survey a more appropriate tool for assessing schools (e.g., adding new, previously piloted questions to the teacher survey to better measure school performance). The 2013 School Survey results will be released in the coming weeks.