The CDC’s Flawed Case for Wearing Masks in School

The agency’s director has said, repeatedly, that schools without mask mandates have triple the risk of COVID outbreaks. That claim is based on very shaky science.


The debate over child masking in schools boiled over again this fall, even above its ongoing high simmer. The approval in late October of COVID-19 vaccines for 5-to-11-year-olds was for many public-health experts an indication that mask mandates could finally be lifted. Yet with cases on the rise in much of the country, along with anxiety regarding the Omicron variant, other experts and some politicians have warned that plans to pull back on the policy should be put on hold.

Scientists generally agree that, according to the research literature, wearing masks can help protect people from the coronavirus, but the precise extent of that protection, particularly in schools, remains unknown—and it might be very small. What data do exist have been interpreted into guidance in many different ways. The World Health Organization, for example, does not recommend masks for children under age 6. The European Centre for Disease Prevention and Control recommends against the use of masks for any children in primary school.

Seen in this context, the CDC has taken an especially aggressive stance, recommending that all kids 2 and older be masked in school. The agency has argued for this policy amid an atmosphere of persistent backlash and skepticism, but on September 26, its director, Rochelle Walensky, marched out a stunning new statistic: Speaking as a guest on CBS’s Face the Nation, she cited a study published two days earlier, which looked at data from about 1,000 public schools in Arizona. The ones that didn’t have mask mandates, she said, were 3.5 times as likely to experience COVID outbreaks as the ones that did.

This estimated effect of mask requirements—far bigger than others in the research literature—would become a crucial talking point in the weeks to come. On September 28, during a White House briefing, Walensky brought up the 3.5 multiplier again; then she tweeted it that afternoon. In mid-October, with the school year in full swing, Walensky brought up the same statistic one more time.

But the Arizona study at the center of the CDC’s back-to-school blitz turns out to have been profoundly misleading. “You can’t learn anything about the effects of school mask mandates from this study,” Jonathan Ketcham, a public-health economist at Arizona State University, told me. His view echoed the assessment of eight other experts who reviewed the research, and with whom I spoke for this article. Masks may well help prevent the spread of COVID, some of these experts told me, and there may well be contexts in which they should be required in schools. But the data being touted by the CDC—which showed a dramatic more-than-tripling of risk for unmasked students—ought to be excluded from this debate. The Arizona study’s lead authors stand by their work, and so does the CDC. But the critics were forthright in their harsh assessments. Noah Haber, an interdisciplinary scientist and a co-author of a systematic review of COVID-19 mitigation policies, called the research “so unreliable that it probably should not have been entered into the public discourse.”

This is not the only study cited by Walensky in support of masking students, but it’s among the most important, having been deployed repeatedly to justify a policy affecting millions of children—and having been widely covered in the press. The agency’s decision to trumpet the study’s dubious findings, and its subsequent lack of transparency, raise questions about its commitment to science-guided policy.


The Arizona study, published in the CDC’s Morbidity and Mortality Weekly Report, looked at school-associated outbreaks in Maricopa and Pima Counties, comparing rates across schools with and without mask mandates for students and staff. “The school year starts very early in Arizona, in mid-July, so we had the advantage of being able to get an early look at data,” one of the lead authors, J. Mac McCullough, told The New York Times. The early look revealed that just 16 outbreaks had occurred among the 210 schools that had a mask mandate in place from the start of classes, versus 113 among the 480 schools that had no mandates at all. According to McCullough and his colleagues, this amounted to a 3.5-fold increase in incidence of outbreaks for the no-mandate schools.
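
For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python, using only the counts quoted above. The unadjusted risk ratio works out to roughly 3.1 and the unadjusted odds ratio to roughly 3.7; the paper’s headline 3.5 figure comes from the authors’ own analysis, which this sketch does not attempt to reproduce.

```python
# Back-of-envelope check using the counts reported above (not the authors' model).
mandate_outbreaks, mandate_schools = 16, 210
no_mandate_outbreaks, no_mandate_schools = 113, 480

rate_mandate = mandate_outbreaks / mandate_schools            # ~0.076 outbreaks per school
rate_no_mandate = no_mandate_outbreaks / no_mandate_schools   # ~0.235 outbreaks per school

risk_ratio = rate_no_mandate / rate_mandate                   # ~3.1
odds_ratio = (no_mandate_outbreaks / (no_mandate_schools - no_mandate_outbreaks)) / (
    mandate_outbreaks / (mandate_schools - mandate_outbreaks)
)                                                             # ~3.7

print(f"Unadjusted risk ratio: {risk_ratio:.1f}")
print(f"Unadjusted odds ratio: {odds_ratio:.1f}")
```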

Yet the study’s methodology and data set appear to have significant flaws. The trouble begins with the opening lines of the paper, where the authors say they evaluated the association between school mask policies and school-associated COVID-19 outbreaks “during July 15–August 31, 2021.” After reviewing school calendars and speaking with several school administrators in Maricopa and Pima Counties, I found that only a small proportion of the schools in the study were open at any point during July. Some didn’t begin class until August 10; others were open from July 19 or July 21. That means students in the latter group of schools had twice as much time—six weeks instead of three weeks—in which to develop a COVID outbreak.
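
To see why unequal observation windows matter, consider a toy model (the outbreak rate below is hypothetical, not drawn from the study) in which masks have no effect at all and every school accrues the same outbreak risk for each week it is open. Schools observed for six weeks still record roughly twice as many outbreaks as schools observed for three.

```python
import math

# Toy illustration, not the study's model: every school faces the same constant
# weekly hazard of crossing the two-case "outbreak" threshold, and masks do nothing.
RATE_PER_WEEK = 0.04  # hypothetical hazard, chosen only for illustration

def prob_outbreak(weeks_open: float) -> float:
    """Probability of recording at least one outbreak under a Poisson model."""
    return 1 - math.exp(-RATE_PER_WEEK * weeks_open)

print(f"Open ~3 weeks: {prob_outbreak(3):.3f}")  # ~0.11
print(f"Open ~6 weeks: {prob_outbreak(6):.3f}")  # ~0.21, roughly double
```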

When I brought this issue to Megan Jehn, the study’s corresponding author and an epidemiologist at Arizona State University, she acknowledged that exposure times varied across schools. The ones without mask mandates were open longer overall, she told me—but the difference was too small to matter. Their median start date was August 3, versus August 5 for the schools that did have mask mandates. In a follow-up correspondence, Jehn and McCullough wrote, “It is highly improbable that this difference alone could explain the strong association observed between mask policies and school outbreaks.”

Yet Ketcham said that a comparison of median start dates is insufficient. “If schools with mask mandates had fewer school days during the study,” he told me, “that alone could explain the difference in outbreaks.”

Ketcham and others also criticized the Arizona study’s use of school-related outbreaks, rather than cases per student per week, as the relevant outcome. The authors defined an outbreak as two or more epidemiologically linked COVID-19 cases among students or staff members at a school within a 14-day period. “The measure of two cases in a school is problematic,” Louise-Anne McNutt, a former Epidemic Intelligence Service officer for the CDC and an epidemiologist at the State University of New York at Albany, told me. “It doesn’t tell us that transmission occurred in school.” She pointed to the fact that, according to Maricopa County guidelines, students are considered “close contacts” of an infected student—and thus subject to potential testing and quarantine—only if they (or that infected student) were unmasked. As a result, students in Maricopa schools with mask mandates may have been less likely than students in schools without mandates to get tested following an initial exposure. This creates what’s known as a detection bias, she said, which could grossly affect the study’s findings. (Jehn and McCullough called it “highly speculative to make the assumption that identified close contacts are more likely to be tested than other students.”) McNutt believes that masks are an important prevention tool in the pandemic, but she maintained that the Arizona study doesn’t answer the specific question it purports to answer: whether mask mandates for students reduce the spread of SARS-CoV-2.
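
A simple simulation illustrates the detection-bias mechanism McNutt describes. All of the numbers below are hypothetical: both groups of schools are given exactly the same chance of a true second linked infection, and they differ only in how likely that second case is to be tested and therefore counted as part of an outbreak.

```python
import random

random.seed(0)

# Hypothetical sketch of detection bias: true transmission is identical in both
# groups; only the probability that a second case gets tested (and thus counted)
# differs, per the close-contact testing rules described above.
TRUE_SECOND_CASE_PROB = 0.15  # same underlying spread in both groups
P_TESTED_MANDATE = 0.3        # masked close contacts less likely to be tested
P_TESTED_NO_MANDATE = 0.8     # unmasked close contacts flagged and tested

def recorded_outbreaks(n_schools: int, p_tested: float) -> int:
    count = 0
    for _ in range(n_schools):
        second_case = random.random() < TRUE_SECOND_CASE_PROB
        detected = random.random() < p_tested
        if second_case and detected:
            count += 1  # an "outbreak" is recorded only if the second case is found
    return count

print(recorded_outbreaks(500, P_TESTED_MANDATE))     # fewer recorded outbreaks
print(recorded_outbreaks(500, P_TESTED_NO_MANDATE))  # more, despite identical spread
```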

There are other issues, too. Jason Abaluck, an economics professor at Yale and the lead investigator on a 340,000-person randomized trial of masking in Bangladesh, called the Arizona study “ridiculous” for failing to control for the vaccination status of staff or students. If more people had been immunized at the schools with mask mandates—or if those schools were more likely to have other mitigation measures in place, such as improved ventilation—then they likely would have seen fewer outbreaks regardless. According to the paper, data on vaccination coverage were unavailable on a per-school basis.

Even basic elements of the data set raise concerns. According to the paper, 782 of the 999 public, non-charter schools included in the study were in Maricopa County. In response to a public-records request, the Arizona Department of Education sent me what it said was the same list of schools that had been provided to the researchers, with 891 relevant entries for Maricopa. But closer inspection revealed that about 40 of them were virtual learning academies, about 20 were preschools, and about 90 were vocational programs associated with otherwise-listed schools. That left at most roughly 740 schools for inclusion in the study, not 782. If dozens of entries were inappropriately included in the final data set, were “outbreaks” counted for them too?
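
The tally behind that estimate, using the approximate counts from the records request:

```python
# Rough tally of the Maricopa entries on the public-records list (approximate counts).
maricopa_entries = 891
virtual_academies = 40    # "about 40"
preschools = 20           # "about 20"
vocational_programs = 90  # "about 90", attached to otherwise-listed schools

plausible_schools = maricopa_entries - virtual_academies - preschools - vocational_programs
print(plausible_schools)  # ~741, versus the 782 Maricopa schools the paper reports
```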

Starting at the end of October, I reached out to Jehn and MMWR about the number of schools, and repeatedly asked for the list of those included in the study. I also asked about the fact that schools with mask mandates and those without mandates opened at different times. Neither the journal nor the study’s authors agreed to share the list of schools, or any other data from the study. The journal replied: “MMWR is committed to quickly correcting errors when they are identified. We reviewed the specific items that you describe below and found no errors.” This week the authors finally shared their narrowed-down list of Maricopa schools as used for the study. Yet it still included at least three schools in Pima County, along with at least one virtual academy, one preschool, and more than 80 entries for vocational programs that are not actual schools. In response to a follow-up inquiry, they acknowledged having included the online school by mistake, while attributing any other potential misclassifications to the Arizona Department of Education.

A media-relations manager from the lead authors’ university told me that “the data used for this study were entirely appropriate for the study’s objectives,” and that “Drs. Jehn and McCullough stand by the methodology and results from the data analyses of the 999 schools included in the study.”


The extent of the benefits of wearing masks for preventing COVID remains uncertain, but it’s wrong to say we don’t know anything at all. “One thing you can extrapolate well is that masks have some effect,” Haber told me. “But the level of effectiveness depends on an enormous array of very important factors, and high-quality direct evidence is difficult to come by, particularly for schools.”

Given its apparent flaws, the Arizona study would seem to bear out Haber’s point, offering little evidence, one way or another, on whether mask mandates “work” in schools, or to what degree. Even taken at face value, though, its findings don’t appear to fit with those from other research. Abaluck’s huge, randomized trial of mask use in rural Bangladeshi villages, for example, estimated just an 11 percent reduction in confirmed symptomatic SARS-CoV-2 infection among adults wearing surgical masks (and relatively little evidence of any effect for cloth masks).

Another, more similar study, published in MMWR in May, looked at case rates among more than 90,000 students in Georgia, comparing those at schools with and without mask mandates. It found that the incidence of COVID was 37 percent lower in schools where staff were required to wear masks, and 21 percent lower in schools where that rule applied to kids. (The latter difference was not statistically significant, and the authors noted that the data “cannot be used to infer causal relationships.”) Now compare those numbers with the headline finding from the Arizona study, touted repeatedly by Walensky: that a lack of school masking mandates more than tripled the risk of outbreaks.
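
One rough way to put these findings on a common scale is to express each relative risk as an implied percent reduction. If the Arizona figure were a simple risk ratio (an approximation, since the paper counts outbreaks rather than case rates), it would imply something like a 71 percent reduction for schools with mandates, several times larger than the Georgia or Bangladesh estimates.

```python
# Rough, illustrative conversion: a relative risk R for the unprotected group
# implies a (1 - 1/R) reduction for the protected group, if R were a simple risk ratio.
def implied_reduction(relative_risk: float) -> float:
    return (1 - 1 / relative_risk) * 100

print(f"Arizona, 3.5x outbreak figure: ~{implied_reduction(3.5):.0f}% implied reduction")
print("Georgia, staff mask rule:       37% lower case incidence (as reported)")
print("Georgia, student mask rule:     21% lower case incidence (as reported)")
print("Bangladesh trial (adults):      11% reduction (as reported)")
```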

A number of the experts interviewed for this article said the size of the effect should have caused everyone involved in preparing, publishing, and publicizing the paper to tap the brakes. Instead, they hit the gas. Given that data were collected through August 31, the authors had just a few weeks to complete their analysis and finalize their manuscript before MMWR put it out on September 24. Walensky tweeted out the research four days later.

As the CDC’s outlet for scientific reports, MMWR has long been crucial for assessing and documenting outbreaks of disease, up to and including this pandemic. Yet it’s also been a source of steady controversy. As Politico reported in September 2020, officials in the Trump administration tried to influence MMWR releases so that its messaging on COVID would align with the president’s. Career staffers expended “great effort” to resist this influence and uphold MMWR’s scientific integrity, a former official later told the House Select Subcommittee on the Coronavirus Crisis.

Yet under the Biden administration, the agency has not always been apolitical. In May, it was revealed that the American Federation of Teachers, the nation’s second-largest teachers’ union, had private exchanges with CDC officials prior to new school guidance being issued under Walensky’s tenure, and some of the union’s suggestions were added nearly verbatim. In September, on the same day as the Arizona study’s publication, Walensky overruled her agency’s advisory committee by endorsing the use of COVID-vaccine booster shots for teachers and other workers deemed at high risk of exposure, thereby aligning the CDC more closely with President Joe Biden’s position.

Still, the publication and agency endorsement of the Arizona study are especially demoralizing. How did research with so many obvious flaws make its way through all the layers of internal technical review? And why was it promoted so aggressively by the agency’s director? I reached out to Walensky’s office to ask about the study, noting its evident limitations and outlier result. How, if at all, does this research figure into the agency’s continuing guidance for schools around the country? The CDC did not respond to my inquiries.

With Biden in the White House, the CDC has promised to “follow the science” in its COVID policies. Yet the circumstances around the Arizona study seem to show the opposite. Dubious research has been cited after the fact, without transparency, in support of existing agency guidance. “Research requires trust and the ability to verify work,” Ketcham, the ASU public-health economist, told me. “That’s the heart of science. The saddest part of this is the erosion of trust.”

The Atlantic’s COVID-19 coverage is supported by grants from the Chan Zuckerberg Initiative and the Robert Wood Johnson Foundation.

David Zweig is a writer based in New York. He is the author of the nonfiction book Invisibles and the novel Swimming Inside the Sun.