How well do online schools serve students?

The first post in this series suggested that the online learning policy disputes are shifting, and gave some background. The second post looked at the recent GAO report and suggested that the report findings, and the response to the report, are evidence of this shift. This post looks further at one element of the online charter school disputes—the question of how well these schools serve students.
 
Arguments about how well online schools serve students have taken place in academic reports, the media, and political arenas. Before we get into the details of these arguments, it’s important to note that similar arguments occur over physical charter schools and traditional schools and districts. The fact that such disputes over educational outcomes have long preceded online schools—and show no signs of lessening—suggests two interpretations. First, no new study or set of studies is going to settle the argument. Second, the disputes over online schools—if they are indeed waning—may be settling into the background noise of disagreements over educational policy. By “background noise” I mean that these arguments take place consistently, but they have impact only at the margins.
 
(That’s not to say that the margins are unimportant. About 20 states significantly limit or outright prohibit online schools from operating statewide, and these academic arguments have some impact on whether those states will expand online learning opportunities. But even there, discussions about academic outcomes often take a back seat to issues related to funding and the overall structure of public education in each state.)
 
In broad and simplistic terms, the disputes about academic outcomes in online charter schools have centered on a few themes.

  • Critics point to state proficiency tests and graduation rates, which often show online charter schools performing below state averages.

  • Online school advocates respond that their student population differs from the one reflected in state averages, making such comparisons misleading.

  • Critics up the ante by citing academic studies that compare similar groups of students and still find lower performance on state assessments and, in some cases, graduation rates.

  • Advocates respond that their students are still not being captured accurately because those studies almost never account for student mobility.

Hence the standoff. Saying much more on this topic requires spending many hours studying these reports, which means that most discussions go no deeper than these points.
 
The GAO report detailed in the previous post looks at this issue and provides a valuable synopsis—although without some background knowledge, the significance of its statements may not be clear. From the report:
 
“…together these nine studies consistently find virtual charter students have lower scores on state standardized assessments compared to brick-and-mortar students. All of the studies found a statistically significant effect in math proficiency and most found a statistically significant effect in reading. All of the studies selected controlled for prior student achievement and one study controlled for student mobility. One study examined virtual charter schools across 17 states and the District of Columbia. Seven studies examined virtual charter schools in a single state, including three studies that examined virtual charter students in Ohio. One study used an anonymous state.”
 
I would change just two words in that paragraph. Where the report says “and one study controlled for student mobility,” I would say “but just one study controlled for student mobility.” In other words, in the GAO’s exhaustive research, it found only one study that controlled for the variable that online learning advocates believe is most important.
 
Well, the critics might say, it’s still a study that says the advocates are wrong! Yes, but now the advocates have their own response. A recently published working paper from the University of Arkansas College of Education and Health Professions—published but not yet peer reviewed—makes the case that online school studies fail to account for the negative reasons many students have chosen to switch to an online school. The abstract:
 
“Program evaluations that measure the effects of online charter schools on student achievement will be biased if they fail to account for unobserved differences between online students and students in the comparison group. There are theoretical and empirical reasons to believe that students who enroll in online schools disproportionately face challenges that are not accounted for in administrative data. This paper investigates some of the negative factors that motivate parents to enroll in online schools. We combine data from an online charter school survey—that asked why parents decided to enroll in online schooling—with three years of achievement and demographic data. We find that students whose parents indicated they selected online schools for negative reasons made statistically significantly lower ELA gains, even after controlling for prior achievement, race, gender, free lunch status, and special education status. We conclude that other observational analyses of online charter schools, such as CREDO (2015), will be biased and unreliable if they fail to properly control for reasons students select those schools.”
 
In fact, elsewhere the paper makes clear that the authors aren’t merely saying that studies would be unreliable if they failed to control for these issues; they are saying that studies like CREDO are unreliable for this reason.
 
“the CREDO study cannot control for factors associated with why some students might enroll in an online charter school as opposed to traditional brick-and-mortar district or charter options. There are theoretical and empirical reasons to believe that students drawn to online charter schools are more likely to have pre-existing, unobserved educational challenges. Those who rely on CREDO results may believe controlling for prior achievement accounts for the challenges associated with enrolling in online education. But if students drawn to online schools have systemically lower rates of test score growth, controlling for prior test score levels will be inadequate to parse out the independent effect of online schooling on academic growth.”
 
Is this recent study dispositive? Of course not. Critics will rightly point out that it’s not yet peer reviewed, as a start, and in any case in a field as complex as education one must look at the body of evidence, not a single study. But to my knowledge this is the first university-affiliated study on the topic of online student characteristics, and if the findings hold over time and through additional studies, online charter school advocates will have a stronger argument to add to their contention that online learning critics don’t understand the students that these schools serve.
