Methodology & Resources
Citywide Education Progress Reports
The 18 cities in this study include Atlanta, Boston, Camden, Chicago, Cleveland, Denver, Houston, Indianapolis, Kansas City, Los Angeles, Memphis, New Orleans, New York City, Oakland, Philadelphia, San Antonio, Tulsa, and Washington, D.C.
We chose to study these cities because they are pursuing an improvement strategy that gives families choice among public school options, both district and charter. In all the cities, district and charter schools are held to high standards of accountability, resulting in intervention or possibly closure if they do not meet those standards. In most of the cities, at least a subset of district schools is also given some degree of decision-making autonomy over staffing, curriculum, and/or budget.
System Reforms Data
Student and School Outcomes Data
For more information about the interviews and parent survey, see the technical report, Scoring and Measures.
- The Edunomics Lab provided analysis for our equitable funding indicator. Researchers used district interviews and a budget analysis to determine the percent of district funds distributed to schools using a student-based allocation formula (SBA). For more information about Edunomics Lab’s SBA metric, please see this report.
- Education Cities and GreatSchools developed the Education Equality Index (EEI), which we used for our indicator that identifies how well low-income students in the city are performing. Researchers built the EEI using state test results for free and reduced-price lunch students from 2010-11 to 2014-15 and National Assessment of Educational Progress (NAEP) results from the same years.
Scoring the System Reforms
We scored each of the 12 system reforms on a 4-point scale. To score each indicator, we used a rubric developed from research and practitioner feedback. For more information about the rubrics, see the technical report, Scoring and Measures.
Each of the 12 system reform indicators received a score of 0, 1, 2, or 3. Because this study addresses how a city is doing overall, we analyzed policy and implementation for both the district and charter sectors, using the lower of the two scores. A city that received a score of 0 on an indicator has little in place in terms of both policy and implementation. A score of 1 means the strategy is in a developing state. A score of 2 means the city has good policy, but the reform falls short of achieving its goal. A score of 3 means the city excels at meeting the indicator’s standards.
We calculated the goal scores by adding the individual indicator scores together to arrive at a raw score for each goal. For example, the goal “the education system is continuously improving” has three indicators, for a total possible raw score of 0 to 9. For each of our goals, we distributed the possible raw scores into four groups to arrive at one of four final goal scores: Little in Place, Developing, Good, or Exemplar. If the total raw score equals 0 or 1, the city has little in place. A raw score of 2, 3, or 4 indicates that the city is developing. If the raw score is 5, 6, or 7, the city has good policies. If the raw score is 8 or 9, the city is an exemplar.
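As an illustration, the raw-to-final mapping for a three-indicator goal can be sketched as follows (the function name and structure are ours, not part of the report's methodology):

```python
def goal_score(indicator_scores):
    """Map a goal's indicator scores (each 0-3) to a final goal label.

    The cut points below are those stated for a three-indicator goal
    (raw scores 0-9); goals with a different number of indicators
    would use their own distribution of raw scores into four groups.
    """
    raw = sum(indicator_scores)
    if raw <= 1:
        return "Little in Place"
    elif raw <= 4:
        return "Developing"
    elif raw <= 7:
        return "Good"
    else:
        return "Exemplar"
```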
Definitions & Sources for Student and School Outcomes
School performance is continuously improving
This indicator demonstrates whether cities made school-level gains in proficiency rates from 2011-12 to 2014-15.
For 14 of the 18 cities, we calculated the average cohort change made in math and reading proficiency over a three- or four-year period.
We estimated a separate linear regression model for each city in our sample, in which the outcome variable is the mean-centered (by state and year) proportion of students in a school scoring at or above “proficient” on each state’s standardized math test (we also did the same for reading).
To show gains, a city’s schools had to improve relative to its state’s performance. School proficiency rates were standardized by state and year to account for differences in state proficiency standards and year-to-year shifts in state assessments or standards. We also adjusted the results by using state enrollment data to account for each school’s student composition.
Only elementary and middle schools were used to calculate proficiency. High schools were excluded from the analysis as these grades are inconsistently included in testing across states.
On the individual city pages, we reported statistically significant gains (p < 0.05) relative to the state, i.e., when gains were clearly different from 0. When a city’s proficiency in math or reading made a statistically significant decline relative to the state (controlling for student population), we reported it as “falling behind relative to the state.” When the city made no statistically significant gains relative to the state, we reported it as “no improvement relative to the state.” When the city made statistically significant gains relative to the state, we reported it as “improvement relative to the state.”
Importantly, this model estimates changes in the proficiency rate from one cohort of students to the next in a school controlling for student demographics and select school characteristics. Cohorts will change from year to year as students enter and exit schools. While these measures reflect trends in the proficiency rate across schools in a city, we cannot attribute these trends to the actions of schools. Therefore, these measures do not indicate whether schools are “getting better” or “getting worse.” These models also can’t speak to how school openings and closings affect overall performance. For example, if a city closed many low-quality schools and replaced them with high-quality schools, the system would be improving in a way that does not show up in these results.
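The reporting rule described above can be sketched as follows. This is a simplified illustration, not the report's actual model: it assumes the proficiency values have already been mean-centered by state and year and adjusted for student composition, and it fits a simple trend rather than the full regression.

```python
from scipy.stats import linregress

def classify_city_trend(years, proficiency, alpha=0.05):
    """Classify a city's proficiency trend relative to its state.

    Hypothetical helper: assumes `proficiency` values are already
    mean-centered by state and year, so a nonzero slope represents
    change relative to the state.
    """
    result = linregress(years, proficiency)
    if result.pvalue >= alpha:           # no clear trend either way
        return "no improvement relative to the state"
    if result.slope > 0:                 # significant positive trend
        return "improvement relative to the state"
    return "falling behind relative to the state"
```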
We did not use this indicator for Memphis, Oakland, and Los Angeles because of problems with the state data. We did not use this indicator for Washington, D.C., because there is no state-level comparison data.
Year range: For all cities except Denver, Kansas City, New York City, Philadelphia, and Tulsa, we used the date range 2011-12 to 2014-15. Because of missing or unusable state data, we used different years of data for some cities. Denver: 2011-12 to 2013-14; New York City and Philadelphia: 2012-13 to 2014-15; Tulsa: 2014-15 to 2016-17. For Kansas City, we used seven years of data, from 2009-10 to 2015-16.
Source: State agency school performance files.
Low-scoring schools do not remain low scoring for several consecutive years
This indicator gives the percent of each city’s elementary and middle schools that scored in the lowest 5 percent of schools statewide in math and reading in the first year of our data (2011-12), and the share of those same schools that remained in the bottom 5 percent for each of four subsequent years ending in 2014-15. (See below for cases where we did not have all four years of data.)
We used statewide, school-level standardized assessment results to identify schools that ranked in the bottom 5 percent of their state in math proficiency in each year (we did the same for reading). We then used each school’s unique identifier to track how many of the schools that started in the bottom 5 percent of the state in year 1 remained in the bottom 5 percent in years 2, 3, and 4.
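A minimal sketch of this tracking step, assuming school-level proficiency rates keyed by each school's unique identifier (the data structure and function name are ours):

```python
def persistence_in_bottom_5pct(state_scores_by_year, city_school_ids):
    """Track how many of a city's schools that start in the state's
    bottom 5 percent remain there in later years.

    state_scores_by_year: list of dicts, one per year, mapping each
    school's unique ID to its statewide proficiency rate.
    city_school_ids: IDs of schools located in the city.
    Returns the year-1 bottom-5% city cohort and its count per year.
    """
    def bottom_5pct(scores):
        cutoff = max(1, int(len(scores) * 0.05))
        ranked = sorted(scores, key=scores.get)  # lowest proficiency first
        return set(ranked[:cutoff])

    cohort = bottom_5pct(state_scores_by_year[0]) & set(city_school_ids)
    counts = [len(cohort & bottom_5pct(year_scores))
              for year_scores in state_scores_by_year]
    return cohort, counts
```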
This measure only identifies the schools that are among the lowest scoring in the state. It is important to acknowledge that the lowest-scoring schools may also be serving high concentrations of the state’s most challenging or most at-risk students. Because we can only look at the percent of students meeting proficiency standards and were not able to measure the value-add of schools, we should be careful not to interpret these schools as the lowest-performing schools in the state. Some low-scoring schools may in fact be posting strong gains by moving students nearer to proficiency but not yet to the level of proficiency. Also, some schools may show strong value added to student performance though they remain low scoring. This measure is also relative: schools in a city may have stayed the same or even gotten worse, but their ranking relative to other schools in the state may have risen, making it appear that their scores improved.
Despite these drawbacks, we believe the measure provides an indication of the dynamism in the school system as a whole. In cities where low-scoring schools do not remain among the bottom 5 percent of the state, the schools may have improved or closed, but it may also reflect changing student demographics or a decline in proficiency rates statewide. But in cities where schools remain among the lowest scoring in the state for several consecutive years, it is safe to conclude that these schools really are stuck, and that the education system lacks appropriate levers for addressing them.
We did not use this indicator for Memphis, Oakland, and Los Angeles because of problems with the state data.
Year range: For all cities except Denver, New York City, Philadelphia, and Tulsa, we used the date range 2011-12 to 2014-15. Because of unusable state data, for Denver we used 2011-12 to 2013-14, for New York City and Philadelphia we used 2012-13 to 2014-15, and for Tulsa we used 2014-15 to 2016-17.
Source: State agency school performance files.
Graduation rates are improving
We used the National Center for Education Statistics’ definition of Adjusted Cohort Graduation Rate (ACGR). State agencies calculate the ACGR by identifying the number of first-time 9th graders in the fall of 2011 (starting cohort) plus students who transferred in, minus students who transferred out, emigrated, or died between 2011 and 2015. The ACGR is then the number of cohort members who earned a regular high school diploma within four years, or by the end of the 2014-15 school year. For more information, see the National Center for Education Statistics.
When the data provided a numerical range for a school’s rate, we used the mid-point of the range provided by EDFacts (e.g., if a school’s rate was given as between 50 and 54 percent, we recoded it as 52 percent). When a school provided a “greater than” or “less than” range (e.g., greater than 80 percent), we used the number rather than the average of the range (e.g., we reported the graduation rate as 80 percent). We calculated the citywide graduation rate by weighting each school’s cohort-adjusted graduation rate by the size of its cohort.
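The recoding rules above can be sketched as a small helper. The string formats used here are illustrative, not the exact codes in the EDFacts files:

```python
def recode_graduation_rate(value):
    """Recode a blurred graduation rate string to a single number.

    A numeric range ("50-54") is recoded to its midpoint; a one-sided
    bound ("GE80" for "greater than or equal to 80", "LE20") is
    recoded to the bound itself. Hypothetical formats for illustration.
    """
    if value.startswith(("GE", "LE")):   # one-sided bound -> the bound
        return float(value[2:])
    if "-" in value:                     # numeric range -> midpoint
        low, high = value.split("-")
        return (float(low) + float(high)) / 2
    return float(value)                  # already a plain number
```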
To arrive at graduation rate differences, we took EDFacts four-year cohort graduation rates and subtracted the change in state rates from the change in city rates to produce a percentage point change. This can be expressed through the following:
Change in the gap = (2015 city rate – 2015 state rate) – (2012 city rate – 2012 state rate).
For example, in Camden, the citywide graduation rate for all district and charter schools in 2011-12 was 56 percent, while the state’s rate was 87 percent. The difference between the two is 31 percentage points. In 2014-15, Camden’s rate was 70 percent, while the state’s rate was 90 percent, for a difference of 20 percentage points. We subtracted the percentage point difference of 20 (2014-15) from 31 (2011-12) to find an 11 percentage-point gain on the state.
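The calculation can be checked against the Camden example:

```python
def change_in_gap(city_start, state_start, city_end, state_end):
    """Percentage-point change in the city-state graduation gap;
    positive values mean the city gained on the state."""
    return (city_end - state_end) - (city_start - state_start)

# Camden: 56 vs. 87 percent in 2011-12, 70 vs. 90 percent in 2014-15.
print(change_in_gap(56, 87, 70, 90))  # → 11
```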
We did not control for income or demographic differences between the city and state, so these gaps reflect, in part, differences in the types of students who enroll in the city versus demographics of the statewide student population. However, we feel that at a minimum, cities should be expected to graduate all students. Therefore, this measure reflects a gap that will be important to close.
Year range: For all cities except Memphis, New Orleans, and Tulsa, we used the date range 2011-12 to 2014-15. For Memphis, we used 2011-12, 2012-13, and 2014-15. For New Orleans, we used 2011-12 to 2013-14, and for Tulsa we used 2012-13 to 2014-15.
Source: The EDFacts Initiative, U.S. Department of Education, Assessment and Adjusted Cohort Graduation Rates (ACGR) Data.
Low-income students in the city are performing better than their peers nationally
To determine the performance of low-income students in relation to their peers, we used the Education Equality Index (EEI), developed by GreatSchools and Education Cities.
The EEI used state data from 2010-11 to 2014-15 to measure the proficiency rate of free and reduced-price lunch (FRL) students in each school, by grade and subject.
The index developers then used NAEP results from the same years to adjust for the difference between NAEP and state proficiency standards, making the results nationally comparable. To account for differences in proficiency based on grade, scores were standardized nationally by grade/subject/year.
Schools and cities were then placed into four categories based on the performance of FRL students compared to all students nationally:
1. Low-income students are performing worse than the national average for low-income students.
2. Low-income students are performing better than the national average for low-income students, but worse than the national average for all students.
3. Low-income students are performing better than the national average for all students, but worse than the national average for non-low-income students.
4. Low-income students are performing better than the national average for non-low-income students.
Final adjustments were made for FRL concentration to neutralize the known correlation between FRL status and performance, and to ensure that they were highlighting schools where low-income students were beating the odds.
EEI scores were created by converting the adjusted scores for each grade and subject into percentiles on a 0 to 100 scale, with 100 being the best. Finally, data were aggregated at the grade/subject, school, district, and city levels, weighted by the number of students tested. The EEI scores were grouped into five categories to provide a snapshot comparison: far below average (1-10), below average (11-30), average (31-69), above average (70-89), and far above average (90-100).
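The five-way grouping of EEI percentile scores can be sketched as follows (the function name is ours):

```python
def eei_category(score):
    """Map a 0-100 EEI percentile score to its snapshot category,
    using the cut points listed in the report."""
    if score <= 10:
        return "far below average"
    elif score <= 30:
        return "below average"
    elif score <= 69:
        return "average"
    elif score <= 89:
        return "above average"
    return "far above average"
```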
We used this indicator for the following cities: Boston, Chicago, Houston, Indianapolis, Kansas City, Los Angeles, Memphis, New Orleans, New York City, Oakland, Philadelphia, and San Antonio.
Source: The Education Equality Index
Student sub-groups are enrolling in the city’s top-scoring schools at similar rates
Because we did not have data for all cities to report the indicator B1a, we used an alternative indicator, B1b. Note that we did not have the data to report either indicator for two cities: Atlanta and Tulsa.
This measure looks at the enrollment of different demographic student groups in a city’s highest-scoring elementary and middle schools. We define a high-scoring school as one with proficiency rates in the top 20 percent of schools citywide in the most recent year of available data, either 2013-14 or 2014-15 (see below for the date range of each city). After using school-level proficiency rates to identify schools in the top 20 percent of each city’s performance distribution, we looked at the share of students citywide enrolled in those schools. As expected, this tended to be around 20 percent. We compared each city’s specific enrollment share in the top-scoring schools with the enrollment rates of different student sub-groups, including free and reduced-price lunch and racial and ethnic minority sub-groups. This tells us two things: (1) Whether student sub-groups were enrolling in the highest-performing schools at similar rates to each other, and (2) Whether student sub-groups were enrolling in the highest-performing schools at similar rates as they were enrolling in middle- or low-performing schools.
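The comparison above can be sketched with school-level proficiency and enrollment records. The field names and data structure here are ours, for illustration only:

```python
def subgroup_share_in_top_schools(schools, subgroup):
    """Compare a sub-group's enrollment share in a city's top-20%
    schools with the citywide enrollment share in those schools.

    `schools` is a list of dicts with keys "proficiency",
    "enrollment", and a per-subgroup enrollment count.
    """
    ranked = sorted(schools, key=lambda s: s["proficiency"], reverse=True)
    top = ranked[:max(1, len(schools) // 5)]   # top 20 percent of schools

    total = sum(s["enrollment"] for s in schools)
    sub_total = sum(s[subgroup] for s in schools)
    overall_share = sum(s["enrollment"] for s in top) / total
    subgroup_share = sum(s[subgroup] for s in top) / sub_total
    return overall_share, subgroup_share
```

A sub-group enrolling in the top-scoring schools at a lower rate than the citywide share would show `subgroup_share` below `overall_share`.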
Although student demographics, student residence, and school performance are all highly correlated, we nevertheless might expect to see variation across the cities due to use of public school choice and the range of students and schools in the cities.
In these analyses, we focus on elementary and middle schools because proficiency data is widely available for 3rd through 8th grades. Because not all state datasets include high school test scores, we can report only the elementary and middle school results.
We used this indicator for the following cities: Camden, Cleveland, Denver, Kansas City, and Washington, D.C.
Year range: For Camden, we used data from 2011-12 to 2014-15. For all other cities using this indicator (Cleveland, Denver, Kansas City, and Washington, D.C.), we used 2011-12 to 2013-14.
Source: State agency school performance and enrollment files.
Students are equitably enrolled in advanced coursework
This indicator focuses on the share of students taking advanced math courses in high school. The data come from the Civil Rights Data Collection (CRDC) survey administered by the U.S. Department of Education’s Office for Civil Rights. The CRDC defines advanced math courses to include topics like analytic geometry and trigonometry.
We calculated the rates of enrollment in these courses by dividing the number of course/test takers in geometry and trigonometry at each high school by the total enrollment at that high school. We calculated sub-group rates by dividing the number of subgroup course/test takers for these subjects at each high school by the total enrollment for that sub-group at that high school. We compared the enrollment rates in math courses with the demographics within the total school population to show whether certain student sub-groups were over- or under-represented in advanced math courses.
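The rate calculation can be sketched for a single high school; the field names are illustrative, not CRDC variable names:

```python
def advanced_math_rates(school):
    """Compute overall and sub-group advanced-math enrollment rates
    for one high school.

    school: dict with total enrollment, advanced-math takers, and the
    same pair of counts for each sub-group (hypothetical structure).
    """
    overall = school["adv_math"] / school["enrollment"]
    by_group = {
        group: counts["adv_math"] / counts["enrollment"]
        for group, counts in school["subgroups"].items()
    }
    # A sub-group is under-represented when its rate falls below the
    # school-wide rate, and over-represented when it exceeds it.
    return overall, by_group
```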
Year range: All cities used 2013-14 data.
Source: U.S. Department of Education’s Office for Civil Rights, Civil Rights Data Collection.
- Stepping Up: How Are American Cities Delivering on the Promise of Public School Choice? (full report)
- Executive Summary
- Scoring and Measures for all system reforms and outcomes
- School Composition by City shows the school composition breakdown for all cities
Downloadable City Reports (June 2018)
Stepping Up 2018
In the introduction to our Stepping Up 2018 report, which examines 18 cities offering public school choice, coauthor Georgia Heyward discusses cities’ educational progress, five distinct trends, and recommendations for district and charter leaders and funders.
1. More cities have information guides and simplified enrollment
In the first finding from our Stepping Up 2018 report, coauthor Georgia Heyward explains that more cities have school information guides and simplified enrollment, but they still need family supports.
2. Parent groups are engaging on better openings and closings
Lead author Christine Campbell discusses the second finding from our Stepping Up 2018 report: Parent-led groups are engaging on school openings and closings, but more cities need strong parent groups.
3. Districts increasingly value autonomy for school improvement
In the third finding from our Stepping Up 2018 report, lead author Christine Campbell explains that districts value autonomy to improve their schools, but they struggle with remissioning and don’t hire with autonomy in mind.
4. Charter schools continue to face barriers
Research analyst Sean Gill describes the fourth finding from our Stepping Up 2018 report: Charter schools continue to face barriers regarding facilities, politics, and state policies, and they need to improve their turnaround and talent strategies.
5. New accountability structures mean cities are collecting plenty of data
In the fifth and final finding from Stepping Up 2018, Georgia Heyward reports that cities are collecting data but are not currently using it to inform accountability or supply decisions, and she lists key recommendations for districts, charters, and funders to spur improvement.