The idea of a personal "carbon footprint" is an oil company psyop. About 20 years ago, British Petroleum launched an ad campaign popularizing the notion and put out a website letting you calculate your "carbon footprint". They're still at it.

It's an idea with remarkable memetic power, both for individuals and brands. Displaying your concern about your personal carbon footprint lets you show off your prosociality and marks you out as someone virtuous, someone who takes personal responsibility. The idea also feeds into people's narcissistic tendencies, reassuring them that they're actually important and that their actions matter in the world.

Marketers love the concept, and any company trying to appeal to the nature-loving demographic can use and abuse it: "Outdoor Brands Get Serious About the Carbon Footprint of Adventure: The North Face and Protect Our Winters unveil an activism-oriented CO2 calculator."

The problem with all of this, and the reason BP pushed the idea in the first place, is that your personal carbon footprint doesn't matter. You're one of about 7.6 billion people on Earth, so your effect is about 1/7,600,000,000 ≈ 0.000000013%. Your personal carbon footprint is completely irrelevant to climate change. Global warming is a collective issue that requires collective solutions; framing it as a problem that individuals can tackle (and that individuals are responsible for) distracts from the public policy changes that are necessary. Environmentalist signaling is complete nonsense but also deadly serious.

The Parallel

Caring about individual bad studies is a bit like caring about your individual carbon footprint.

People occasionally send me shitty papers, and a year or two ago I would have cared a lot, enjoying the shameful thrill of getting Mad Online about some fraud, or having fun picking apart yet another terrible study. It's an attractive activity, and performing it in public shows how much you care about Good Science. Picking out a single paper to replicate operates at the same level. But what's the impact of all this?

In the idealized version of science, a replication failure would raise serious doubts about the veracity of the original study and have all sorts of downstream effects. In the real version of social science, none of that matters. You have to go on active memetic warfare if you want to have any effect, and even then there's no guarantee you'll succeed. As Tal Yarkoni puts it, "the dominant response is some tut-tutting and mild expression of indignation, and then everything reverts to status quo until the next case". People keep citing retracted articles. Brian fucking Wansink has been getting over 7 citations per day in 2021. So what exactly do you think a replication is going to achieve?


A couple years ago Alexey Guzey wrote "Matthew Walker's 'Why We Sleep' Is Riddled with Scientific and Factual Errors", finding not only errors but even egregious data manipulation in Walker's book. Guzey later collaborated with Andrew Gelman on "Statistics as Squid Ink: How Prominent Researchers Can Get Away with Misrepresenting Data".

What was the effect of all this? Nothing.

Guzey explains on twitter:

my piece on the book has gotten >250k views by now and still not a single neuroscientist or sleep scientist commented meaningfully on the merits of my accusations. [...] According to UC Berkeley, "there were some minor errors in the book, which Walker intends to correct". The case is closed.

The feedback loops that are supposed to reward people who seek truth and to punish charlatans are just completely broken.

...but a prominent neuroscientist did write to him in private to express his agreement.

Implicit Bias

It is so much harder to get rid of bullshit than it is to prevent its publication in the first place. Let's take a look at some of the literature on implicit bias.

Oswald et al. (2013) meta-analyze the relation between the IAT and discrimination: "IATs were poor predictors of every criterion category other than brain activity, and the IATs performed no better than simple explicit measures." Carlsson & Agerström (2016) refine the Oswald et al. paper, and find that "the overall effect was close to zero and highly inconsistent across studies [...] little evidence that the IAT can meaningfully predict discrimination".

Meissner et al. (2019) review the IAT literature and find that the "predictive value for behavioral criteria is weak and their incremental validity over and above self-report measures is negligible".

Forscher et al. (2019) meta-analyze the effect of procedures designed to change implicit measures, and find that these procedures "generally produced trivial changes in behavior [...] changes in implicit measures did not mediate changes in explicit measures or behavior". Figure 8 from their paper shows the effect of changing implicit measures on actual behavior.

What was the effect of all this? Nothing.

Just within the last few days, the New Jersey Supreme Court announced implicit bias training for all employees of state courts, "Dean Health Plan in Wisconsin implemented new strategies to address health equity in maternal health, including implicit bias training for employees", the Auburn Human Rights Commission has "offered implicit bias training to supervisory personnel in Auburn city government, the Cayuga County Sheriff's Office, public schools and other local organizations", and California's Attorney General is making sure that healthcare facilities are complying with a law requiring anti-implicit bias training.

You can debunk and (fail to) replicate all you want, but it don't mean a thing. Mitchell & Tetlock (2017) write:

once employers, health care providers, police forces, and policy-makers seek to develop real solutions to real problems and then monitor the costs and benefits of these proposed solutions, the shortcomings of implicit prejudice research will likely become apparent

But it didn't turn out that way, did it? Just as with the personal carbon footprint, the ultimate outcome is a secondary consideration at best.


Yarkoni (the British Petroleum of social science) says "it's not the incentives, it's you" but, really, it's the incentives. Before you can run, you must walk. Before you replicate, you must have a scientific ecosystem with reliable self-correction mechanisms.1 And before you build that, it's a good idea to limit the publication of false positives and low-quality research in general.

One of the key insights of longtermism is that if humanity survives in the long term, the vast majority of humans will live in the future, so even a small improvement to their welfare can have a huge effect. We might make a similar argument about longtermism in social science: the vast majority of papers lie in the future. If we can do something today to improve them even by a little bit, the cumulative impact would be enormous. On the other hand, defeating one of the 10,000 bad papers that will be published this year is not going to do much at all. Effective scientific altruism is systematically improving the future by 0.01% rather than putting your energy into deboonking a single study. Every dollar wasted on replication is a dollar that could've been invested in fixing the underlying collective problems instead. The past is not going to change, but the future is still malleable.
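The arithmetic behind this argument can be made explicit. Here's a toy back-of-the-envelope comparison, where every number is an illustrative assumption (the 10,000 bad papers per year is the figure from above; the horizon and improvement rate are made up for the sketch), not a measurement:

```python
# Toy comparison of two interventions, measured in "paper-equivalents"
# of harm removed. All parameters are illustrative assumptions.

BAD_PAPERS_PER_YEAR = 10_000   # assumed rate of bad papers (from the text)
YEARS_AHEAD = 100              # assumed horizon over which science continues
SYSTEMIC_IMPROVEMENT = 0.0001  # a 0.01% improvement applied to future papers

# Debunking a single bad study removes (at most) one paper's worth of harm,
# and as argued above, often far less, since debunked papers keep being cited.
debunk_impact = 1

# A tiny systemic improvement compounds across every future paper.
systemic_impact = BAD_PAPERS_PER_YEAR * YEARS_AHEAD * SYSTEMIC_IMPROVEMENT

print(debunk_impact)    # 1
print(systemic_impact)  # 100.0
```

Under these (made-up) numbers, the 0.01% systemic fix is worth a hundred debunkings, and the gap only grows with the time horizon.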

Ideally we'd proclaim the beginning of a new era, ban citations to any pre-2022 works, and start from scratch (except actually do things properly this time). Realistically that won't happen, so the second-best approach is probably a Hirschmanian Exit into parallel institutions.

  1. I don't want to overstate the case here—some disciplines work pretty well, so it's not entirely hopeless. I would certainly hope that medical researchers still try to replicate the effects of drugs, and physicists replicate their particle experiments. But in the social sciences things are different.