The danger of Truman's one-handed economists
A retrospective on the use of data in the school closure debate
In this edition of The Civic Pulse I want to talk about something close to my heart: the debate about expertise, empiricism, and their role in public decision making. I'll explore this topic through the public debate about school closures and COVID-19, because it serves as an excellent example of our current capacity (or lack thereof) to integrate data into fast-moving and hugely consequential policy questions. And it's a so-called wicked problem that most people have an opinion on. I think the way it has unfolded highlights an important challenge for all of us as scholars, funders, publishers, and consumers of information: how do we evaluate expertise?
In this issue you'll find:
An essay on decisiveness, data, and who gets to be empirical
What weāre reading this snowy winter
As always, thank you for reading. On to this month's main event.
A retrospective on the use of data in the school closure debate
I recently had the chance to visit Harry Truman's Little White House in Key West, Florida, and see where he spent about 10% of his presidency. Truman, of course, is famous among social scientists for having said:
"Give me a one-handed economist. All my economists say 'on the one hand...', then 'but on the other...'" Whether he actually said it is an interesting read in and of itself, as is so often the case with quips like these.
The term "one-handed economist" was actually part of a widely circulating turn of phrase of the period, in which "one-armed" or "one-handed" described someone decisive, and it was applied to all kinds of members of the professional class: lawyers, accountants, and even presidents themselves. In an age of great uncertainty and of economic, geopolitical, and domestic crises, there was a demand for decisiveness.
Sound familiar?
It struck me, as the debate over school closures, isolation timelines, and lockdowns flared up with omicron, that we are awash in one-handed economists, and I'm not sure President Truman would take much comfort in it.
I don't want to debate the merits of decisiveness itself, but I do want to unpack how we evaluate the sources of authority for the voices that get to be decisive in society today. Tressie McMillan Cottom's recent essay on the expectation that we all become "experts," and why that is not good (hell, it's exhausting), is a good starting point.
From ugly leggings to a global financial crisis and back, all of these examples have the same social DNA: failing institutions, aspirational people and nefarious actors willing to exploit both....
It is no wonder that so many of us think that we can parse vaccine trial data, compare personal protective equipment, write school policy and call career scientists idiots on Facebook. We are know-it-alls because we are responsible for knowing everything. And God forbid we should not know something and get scammed. If that happens, it is definitely our fault.
Institutional failures, including our sometimes earned and sometimes unearned mistrust of letting others be experts on our behalf, open the door for new actors with hidden agendas. One of those actors is a very old type (the oracle) wrapped up in fancy new clothes (empirical data analysis).
My objection is not with public figures making recommendations but with the claims of authority these new oracles derive from data. The modern version of the one-handed economist is probably most popularly represented in the contemporary hot-takeism of Nate Silver, something covered far better than I could in the best couple of paragraphs I have read in 2022, from Lyz Lenz's Men Yell At Me newsletter:
The problem is, we have a society that cleaves to an idea that it is possible to have a dispassionate and purely rational view of anything. We foolishly believe that if only we could look at numbers, charts, graphs, we could rid ourselves of all this pesky other business like sexism, racism, misogyny, homophobia, transphobia, fat phobia... But those numbers, those questions, those polls, those charts are compiled by flawed human beings with biases and prejudices... this analysis is actually entirely ruled by passion and emotion that is unchecked, unfiltered, welded [sic] as a cudgel and hides behind the veneer of dispassionate analysis...
[T]he world is a mess. And it's so hard. Anyone providing easy answers is selling you snake oil dressed up in the cheap language of scientific analysis.
But, while it is fun to dunk on Nate Silver, it's important to see this as part of a broader problem: the problem of the data aesthetic used as a shortcut to authority. Which brings us to school closures, COVID, and Dr. Emily Oster, an economist at Brown University.
Dr. Oster is a bestselling author, and many people seem to find her recommendations useful on a range of topics far afield from economics (pregnancy, parenting, childcare, education, etc.). Dr. Oster's brand, if you will, is making recommendations for her audience based on objective, clear-headed analysis of the facts.
Oster has published prominent op-eds weighing in on debates about school closures, school masking, and the risks of COVID. Much of this comes from a research effort Dr. Oster leads called "COVID Explained," which has this tagline:
COVID-19 is confusing. We are here with facts about the virus. How does it spread? How is it treated? Who does it affect most? Unbiased information to help you make good decisions.
If you read The Civic Pulse, that tagline likely sounds appealing to you. Yes, I want facts and I want to make good decisions; sign me up! But how do we distinguish "clear-headed analysis of the facts" from the data aesthetic, which dresses up easy answers in "the cheap language of scientific analysis"? A good place to start is to examine the evidence supporting those facts, and then look at who or what might be missing from the presentation.
Fortunately, Rachel Cohen published a piece in The American Prospect in fall 2020 [wow, that was a long time ago] that checked the source of many of the "facts" supporting Oster's op-eds, facts that were being circulated to news outlets like The Washington Post, The New York Times, and The Atlantic.
Cohen's article highlights one example op-ed with the headline "School Infection Rates Are Low Even in Areas With High COVID-19 Rates, According to Data on 550,000 US Students." This example shows how a headline can be simultaneously accurate... and misleading. In the data on 550,000 students, the reported infection rate is low. That is a fact. But implied in the headline, and in the op-ed, is that we can generalize from this population to the broader U.S. (otherwise, why write the op-ed?). And it is in this slippery space between a true fact and a broad conclusion that the aesthetic of data grants power to what is otherwise just another hot take.
I'll let Cohen's reporting take it from here:
Here are some additional facts about the COVID-19 School Response Dashboard dataset, as it existed on October 25 [2020]. There were 1,322 schools voluntarily reporting data, or about 1 percent of all K-12 schools in the country. It did not say how many school districts were reporting, though Oster told me last week they currently have "about 150." There are 13,500 school districts across the United States.
A more detailed look brought its glaring gaps into sharper relief. There was not a single urban traditional public school reporting data across 27 states in her dataset, including from Florida, Texas, New York, Minnesota, Arkansas, and Mississippi. Some states, like Colorado, Kansas, Ohio, and Alabama, had just one urban traditional public school reporting data. Oster told me they're working to make their dataset "more representative" and conceded that those who opted to voluntarily report tended to be a "higher-income sample, and more suburban."
(emphasis original, and heartily endorsed by me as well)
One imagines that, if a student in one of Dr. Oster's courses at Brown University used a non-random, voluntary survey sample of less than 1% of the population to draw a broad conclusion with wide-reaching policy impacts, they'd receive a polite note reminding them to review their textbook.
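The sampling problem is easy to see with a toy calculation. The numbers below are entirely hypothetical (they are not from the dashboard or any real dataset); they just illustrate how a voluntary sample that overrepresents one group can understate a population-wide rate:

```python
# Toy illustration with made-up numbers: why a voluntary, non-random
# sample can understate a population-wide infection rate.

# Hypothetical strata: (share of all students, infection rate per 1,000)
population = {
    "suburban": (0.60, 2.0),
    "urban":    (0.40, 8.0),
}

# A hypothetical voluntary dashboard that skews heavily suburban
sample_shares = {"suburban": 0.95, "urban": 0.05}

# Population rate weights each stratum by its true share of students
true_rate = sum(share * rate for share, rate in population.values())

# Sample rate weights each stratum by its share of volunteers instead
sample_rate = sum(sample_shares[k] * population[k][1] for k in population)

print(f"population rate: {true_rate:.1f} per 1,000")   # 4.4
print(f"sample rate:     {sample_rate:.1f} per 1,000")  # 2.3
```

The same arithmetic that makes the sample rate "a fact" also makes it a poor estimate of the population rate whenever the volunteers do not look like the population, which is exactly the gap Cohen's reporting documents.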
But it's not just about the sample or the data collection (though a trained economist knows better); it's about the context given to the facts, about what is and is not said with the authority won through the combination of the academic credential and the data collection. The tool initially published by Dr. Oster (and, as these things go, the version that got almost all of the attention) did not include any way to disaggregate data on those 550,000 students by race, ethnicity, or school Title I status, a step practically as natural to a researcher like Oster as breathing air. Again, from Cohen:
Oster's story also said nothing about race. "It's myopic. The piece of analysis that's missing both in her Atlantic article and the dashboard is that Black and brown communities have been disproportionately hit hard by COVID," said Constance Lindsay, an education policy researcher at UNC. Black and Latino communities have been contracting COVID and dying of it at higher rates, and while Oster targets Chicago, Los Angeles, and Houston specifically for not reopening schools, there was no mention that these cities have higher concentrations of Black and Latino families...
Lindsay adds that she'd feel a little better if Oster's database broke out information about Title I schools and the percentage of Black and brown students at each school. Oster told me "that's something we can certainly do, and I think those kinds of covariates are on our to-do list."
So what is going on here? Dr. Oster and team filled a gap left by institutions that could not keep up, but they staked their claim of legitimacy on the data they had gathered, not on the theory or context of the collection. The theory is left implicit, and all too often the people doing the research and the people reporting on it share the same theory.
But the world is complicated, as Andrew Gelman, one of the best reads on quantitative analysis in the social sciences, recently discussed:
The real world is complicated. Take a generally true theory, gather lots of data, and you'll see some patterns that appear to contradict the theory. Or, take a completely wrong theory, gather lots of data, and you'll see some patterns that are consistent with the theory. That's just the way things go... Statistics has prestige, so it makes sense that people will try to use statistics as a convincer.
Think for a moment. Would a Black-led research team have de-prioritized looking at the data by race? Would someone who lived in a multi-generational household, whose child had severe asthma, or whose child attended a public school with predominantly students of color have collected, analyzed, and reported on the data in the same way? (We don't know which, if any, of these apply to Dr. Oster, only that her children attend private school, as she did.)
I am, by the way, a big fan of Dr. Constance Lindsay's scholarship and Twitter account (and I really appreciate the uncompensated labor she has put into responding to this kind of work).
Again, the problem isn't the study; the problem is that this work leans on the credentials of the scientific method without subjecting itself to the process of scientific inquiry and peer review. And, in so doing, it takes the oxygen out of the room for alternative voices, it shapes policy with prominent op-eds, and it entrenches a specific set of implicit theories through the use of data as evidence. Science is an act of consensus, not fiat. The data aesthetic as deployed here is an attempt to shortcut the messy process of accumulating scientific evidence in a race to make a point.
Unfortunately, we are the losers in all this. When researchers step outside the consensus-building process of scientific peer review, we are forced to step in. To paraphrase Dr. Tressie McMillan Cottom, peer review has been outsourced to us, which is, unfortunately, exhausting.
But when and where we can, we have to demand better standards of evidence before granting researchers authority on a particular topic, no matter their credentials. And we should demand that our public discourse look a little further than the nearest Ivy League economics department to find expert consensus on important topics.
As a data analyst, I take this stuff to heart. Good data work is careful, deliberate, and subjective, and it requires a lot of deliberation and consultation. I think a good start would be to pop up this quote from Deborah Stone, from her book Counting, every time a quantitative researcher opens Excel, Stata, or R:
Above all, we shouldn't use numbers to cover over our deep conflicts and ethical dilemmas.
Good empirical science doesn't cover these things over; it states these conflicts and notes its limitations up front. In other words, "on the one hand... but on the other hand..."
What we're reading and thinking about:
White scholars colonizing health disparities research – STAT News
Continuing this newsletter's theme of who gets to be an expert: this is a must-read story about what is happening in the field of health disparities research (and, I imagine, what is also happening in research on equity and racial disparities in many other fields). Here's a good quote:
But a STAT investigation shows a disturbing trend: a gold rush mentality where researchers with little or no background or training in health equity research, often white and already well-funded, are rushing in to scoop up grants and publish papers. STAT has documented dozens of cases where white researchers are building on the work of, or picking the brains of, Black and brown researchers without citing them or offering to include them on grants or as co-authors.
Census Mapper – Big Local News
Lots of people are putting out tools to understand the 2020 Census, but I wanted to give a particular shoutout to the folks at Big Local News. I'm a big fan of their whole approach: providing data analysis and story development tools to local journalists through collaboration and open analysis. It's no surprise that their Census Mapper delivers.
Congressional Trading in 2021 – Unusual Whales
Looking at a different kind of data from a different direction: I cannot stop staring at how well members of the U.S. House and Senate did investing in 2021, as visualized in this report by Unusual Whales.
It's not exactly reading, but I wanted to give a special shoutout to the new ABC show "Abbott Elementary." In a TV lineup full of cops, doctors, and even cop doctors (or doctor cops?), it's good to see teachers making their way to on-screen representation.
It's a funny, charming, and kind-hearted show about a public elementary school in Philadelphia. Two thumbs up.
Chicago's Deep Tunnel – Slate
Finally, we've been reading a lot about water infrastructure lately as we gear up for a #rescuetherescuefunds campaign. Infrastructure, especially climate change mitigation infrastructure, is another wicked problem, and this article on Chicago's flood mitigation investments is a good in-depth look at how a narrow set of experts who all agree can get it wrong.
We encourage you to follow along using the hashtag #rescuetherescuefunds. We'll have more to say on Twitter and in an upcoming newsletter, so if you're not already, follow us on Twitter:
@civilytics
As always, we appreciate your help spreading the word about this newsletter. If you haven't already, please subscribe. If you're already a subscriber, send it to a friend or share it on social media.
With gratitude,
Jared