The "Days of Learning" Metric for Education Evaluations
The third National Charter School Study (NCSS III) aimed to test whether charter schools were effective and to highlight outcomes on academic progress. The authors reported that the typical charter school student outperformed similar students in non-charter public schools by 6 days in mathematics and 16 days in reading. This “days of learning” metric was used to claim relatively higher performance in charter schools than in comparable public schools. This paper critiques the logic of that metric and proposes an alternative method of reporting outcomes.
💡 Research Summary
The paper provides a thorough critique of the “days of learning” metric employed in the third National Charter School Study (NCSS III) and proposes a more accurate alternative for reporting educational outcomes. NCSS III, based on a massive dataset covering over 6.5 million student‑year observations from 7,288 charter schools and a matched sample of traditional public schools, reported that charter‑school students outperformed their public‑school peers by six days in mathematics and sixteen days in reading. To translate these differences into a more intuitive unit, the study used a conversion factor of 0.01 standard deviations (SD) ≈ 5.78 days, a figure that the authors trace back to National Assessment of Educational Progress (NAEP) data.
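The conversion described above can be sketched as a simple calculation. This is an illustrative reconstruction of the arithmetic, not the study's own code; the function name and rounding are assumptions.

```python
# NCSS III maps effect sizes in standard deviations (SD) to "days of
# learning" using the factor 0.01 SD ≈ 5.78 days, derived from NAEP data.
NCSS_DAYS_PER_001_SD = 5.78  # days of learning per 0.01 SD

def effect_size_to_days(effect_sd: float,
                        days_per_001_sd: float = NCSS_DAYS_PER_001_SD) -> float:
    """Convert an effect size in SD units to 'days of learning'."""
    return effect_sd / 0.01 * days_per_001_sd

# The reported advantages of ~6 days (math) and ~16 days (reading)
# correspond to effect sizes of roughly 0.010 SD and 0.028 SD.
print(f"math:    {effect_size_to_days(0.010):.1f} days")
print(f"reading: {effect_size_to_days(0.028):.1f} days")
```

Under this factor, effects that are tiny in SD terms (around one-hundredth of a standard deviation) are presented as multi-day learning gains, which is exactly the translation the paper questions.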
The critique is organized around three core problems. First, the conversion assumes that NAEP scores for grades 4 and 8 are on an equivalent scale and that the difference between them can be interpreted as a grade‑level gain. In reality, NAEP scores are not grade‑equivalent; they reflect different test content and difficulty across grades, making any direct mapping to “grade advancement” scientifically unsound. Consequently, the derived 5.78‑day factor rests on a shaky premise.
Second, the study conflates within‑grade growth (the typical fall‑spring gain experienced in a single school year) with between‑grade growth (the cumulative change across multiple grades). Empirical evidence from Oak Park Schools (2015) shows that a 0.89 SD gain occurs over roughly 180 school days in a single grade, implying that 0.01 SD corresponds to about two days, not 5.78. When the authors recalculate the NCSS III results using this within‑grade conversion, the reported six‑day mathematics advantage shrinks to roughly two days, and the sixteen‑day reading advantage drops to about six days. This demonstrates that the original metric substantially overstates the effect size.
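The within-grade recalculation above can be reproduced directly. The sketch below back-calculates the NCSS III effect sizes from the reported day counts, so the intermediate values are approximate; the Oak Park figures (0.89 SD over ~180 school days) are taken from the summary.

```python
# Within-grade conversion: Oak Park Schools (2015) observed a 0.89 SD
# fall-to-spring gain over roughly 180 school days in a single grade.
WITHIN_GRADE_SD = 0.89
SCHOOL_YEAR_DAYS = 180

def within_grade_days(effect_sd: float) -> float:
    """Convert an effect size (SD) to days using within-grade growth."""
    return effect_sd / WITHIN_GRADE_SD * SCHOOL_YEAR_DAYS

# Effect sizes implied by NCSS III's 5.78-days-per-0.01-SD factor:
math_sd = 6 / 5.78 * 0.01      # ≈ 0.0104 SD
reading_sd = 16 / 5.78 * 0.01  # ≈ 0.0277 SD

print(f"math:    {within_grade_days(math_sd):.1f} days")     # ≈ 2.1 days
print(f"reading: {within_grade_days(reading_sd):.1f} days")  # ≈ 5.6 days
```

The same underlying effect sizes thus shrink from six and sixteen days to roughly two and six days once the conversion reflects actual within-grade growth.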
Third, the metric ignores the well‑documented non‑linearity of student growth. Growth decelerates as students progress through grades, and seasonal effects such as summer learning loss produce a saw‑tooth pattern rather than a steady linear increase. Assuming a constant 1 SD per school year across all grades therefore misrepresents the actual learning trajectory and inflates the “days of learning” estimate.
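A toy calculation makes the non-linearity point concrete. The per-grade gains below are hypothetical illustrative values, not figures from the paper; the point is only that when annual growth shrinks across grades, the same 0.01 SD effect maps to very different numbers of instructional days.

```python
# Hypothetical annual within-grade gains (SD) that decelerate with grade.
SCHOOL_YEAR_DAYS = 180
annual_gain_sd = {1: 1.2, 4: 0.75, 8: 0.3}  # illustrative values only

for grade, gain in annual_gain_sd.items():
    days = 0.01 / gain * SCHOOL_YEAR_DAYS
    print(f"grade {grade}: 0.01 SD ≈ {days:.1f} days")
```

A single grade-independent conversion factor therefore cannot be accurate for all grades at once: it understates day-equivalents where growth is fast and overstates them where growth is slow.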
Additional concerns include the use of the difference in z‑scores (z₂ − z₁) across multiple grades and subjects, which masks differences in test content (e.g., basic arithmetic versus fractions) and further undermines interpretability. While the sign of the difference still indicates a positive effect, the magnitude becomes ambiguous when aggregated across heterogeneous assessments.
In response, the authors advocate for a “within‑grade days of learning” metric that aligns with the actual calendar days of instruction experienced by students. This metric is more intuitive for educators, parents, and policymakers and can be paired with conventional effect‑size statistics (Cohen’s d, Hedges g) to convey both practical and statistical significance. They also recommend that any conversion factor be transparently documented, specifying the grade, subject, and growth assumptions underlying it. By presenting both the raw effect size and a context‑specific day‑equivalent, researchers can avoid the misleading simplicity of a single aggregated figure.
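The conventional effect-size statistics mentioned above can be computed as follows. This is a minimal sketch with hypothetical group data, not the paper's analysis; it shows the standard pooled-SD Cohen's d and the Hedges small-sample correction that could accompany any day-equivalent figure.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def hedges_g(d, n1, n2):
    """Apply Hedges' small-sample bias correction to Cohen's d."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Hypothetical z-score means/SDs for two matched samples:
d = cohens_d(mean1=0.02, mean2=0.0, sd1=1.0, sd2=1.0, n1=500, n2=500)
g = hedges_g(d, 500, 500)
print(f"Cohen's d = {d:.3f}, Hedges' g = {g:.3f}")
```

Reporting d or g alongside a context-specific day-equivalent lets readers judge both the statistical and the practical size of an effect, rather than relying on a single aggregated conversion.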
Overall, the paper warns that the current “days of learning” metric, as applied in NCSS III, likely overstates charter‑school advantages and may misguide policy decisions. A shift toward more granular, within‑grade reporting and explicit methodological disclosure would provide a clearer, more reliable picture of educational outcomes across school types.