August 29, 2023

Anthropic bias is a delusion

Originally I planned for this article to be much longer, but I changed my mind. So here is my hastily abridged diatribe on Nick Bostrom.

Bostrom is an Oxford professor who is best known for Superintelligence, a best-selling book of speculation about artificial intelligence. This post is not about that book. Instead I’m going to review the bulk of Bostrom’s academic work—specifically, that part relating to the anthropic principle. My claim is that Bostrom’s academic work is worthless—harmful, even—because it’s built on a delusion he calls “anthropic bias.”

The idea behind anthropic bias is this: Since human beings have been around for a really long time, it seems unlikely that a planetary catastrophe, such as an asteroid strike, will occur in the near future. But we would all be dead if such a catastrophe had occurred. So, Bostrom claims, we can’t learn anything from the absence of catastrophes in our past, because of selection bias.

Bostrom is simply wrong about this, and a simple simulation refutes him. Here’s one: https://pastebin.com/VBnRvg5C.
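
For the impatient, here’s a minimal sketch of the kind of simulation I mean (a toy version; the numbers and the uniform prior are arbitrary choices): give each world a per-century catastrophe probability \(q\) drawn from a prior, run history forward, and compare \(q\) across all worlds versus the worlds that survived.

```python
import random

random.seed(0)

# Each world draws a per-century catastrophe probability q from a prior,
# then runs for 1,000 centuries; it survives all of them with probability
# (1 - q) ** 1000. Compare q across all worlds vs. the surviving worlds.
N_WORLDS = 100_000
N_CENTURIES = 1_000

all_q, surviving_q = [], []
for _ in range(N_WORLDS):
    q = random.uniform(0.0, 0.01)  # assumed prior: q ~ Uniform(0, 0.01)
    all_q.append(q)
    if random.random() < (1.0 - q) ** N_CENTURIES:  # no catastrophe, ever
        surviving_q.append(q)

print(f"mean q, all worlds: {sum(all_q) / len(all_q):.5f}")
print(f"mean q, survivors:  {sum(surviving_q) / len(surviving_q):.5f}")
```

The survivors’ catastrophe probabilities are systematically lower than the population’s. A long catastrophe-free record really is evidence of low risk, and an observer who updates on it is doing Bayes correctly, not suffering from a bias.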

Bostrom is wrong because he’s treating our past as if it’s been filtered—as if, when a catastrophe was going to occur, God intervened and prevented it. Bostrom would probably argue that our past has been “filtered” in a sense by the fact that we have to be alive to observe it. But we didn’t have to be alive in the first place, so that argument doesn’t work.

Go to chapter 4 of Bostrom’s book, Anthropic Bias. This is the chapter where Bostrom introduces his self-sampling assumption (SSA) and self-indication assumption (SIA). Work through his “incubator” example, but make the explicit assumption that you exist a priori; in other words, \(\Pr(\text{``I exist''}) = 1\). Make this assumption and you’ll get the same result he did with SSA. Now replace that assumption with the much more reasonable \(\Pr(\text{``I exist''}) = p\), where \(p \approx 0\), and you’ll get his SIA result. Remember that these are priors: obviously you exist a posteriori. The lesson of this exercise is that SSA is equivalent to the assumption that you were destined to exist, while SIA is just Bayes’ theorem. Bostrom rejects SIA in favor of SSA (or “SSSA,” or whatever). There’s a lot of room here to argue about how likely you were to be born and what “you” really means, but that’s irrelevant.
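
To make the exercise concrete: in the incubator, a fair coin is tossed; heads creates one observer, tails creates two, and all you learn is that you were created. Bayes’ theorem says

\[
\Pr(\text{heads} \mid \text{``I exist''}) = \frac{\Pr(\text{``I exist''} \mid \text{heads})\Pr(\text{heads})}{\Pr(\text{``I exist''} \mid \text{heads})\Pr(\text{heads}) + \Pr(\text{``I exist''} \mid \text{tails})\Pr(\text{tails})}.
\]

Set both likelihoods to 1 (you were destined to exist either way) and this collapses to \(\Pr(\text{heads}) = 1/2\), the SSA answer. Instead set \(\Pr(\text{``I exist''} \mid \text{heads}) = p\) and \(\Pr(\text{``I exist''} \mid \text{tails}) = 2p\) for some tiny \(p\) (doubling the likelihood when two observers are created is my assumption here, on the grounds that each created observer is one more chance for you in particular to be born) and it collapses to \(p/(3p) = 1/3\), the SIA answer.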

Now go to chapter 7. Scroll to the bottom and read his ironically named “presumptuous philosopher” thought experiment: two cosmological theories fit all the evidence equally well, one predicts a trillion times more observers than the other, and SIA tells the philosopher to bet on the bigger universe at trillion-to-one odds without running a single experiment. Realize that this contrived scenario could never arise in the real world, and that it is essentially Bostrom’s only argument for rejecting SIA. (The “heavenly-messenger” analogy further down the page is just a bunch of nonsense.)

And now—this is the most important part—read his anthropic shadow paper. This is where he argues that we should be much more worried about catastrophic risks, because of anthropic bias. Look at his “toy model,” equation 2, which makes two serious mistakes:

  1. It equivocates between past and future catastrophes.
  2. It treats the chance of catastrophe as a constant. Since it’s unknown to us, it should be a random variable. Making it a constant prevents us from learning from the past, even before so-called anthropic bias enters the picture! A sketch of the fix follows this list.
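
To see what fixing the second mistake buys you, here’s a sketch that puts a flat Beta(1, 1) prior on the per-century catastrophe probability \(q\) (the flat prior is an arbitrary choice). Each catastrophe-free century is a Bernoulli “failure,” so the posterior is again a Beta distribution and its mean has a closed form:

```python
# A flat Beta(1, 1) prior on the per-century catastrophe probability q.
# n catastrophe-free centuries are n Bernoulli "failures", so the
# posterior is Beta(1, 1 + n), whose mean is 1 / (n + 2).
def posterior_mean(n_safe: int, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of q after n_safe catastrophe-free centuries."""
    return alpha / (alpha + beta + n_safe)

for n in (0, 10, 100, 1000):
    print(f"{n:>4} catastrophe-free centuries -> E[q] = {posterior_mean(n):.5f}")
```

The estimate falls as the clean record grows. Declare \(q\) a known constant, as equation 2 does, and no record of any length can move it, selection effects or not.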

Then look at figure 8. “The absence of points in the upper right area of the diagram is visible,” he says, suggestively. But the upper right area is for events that are recent and rare. It’s not that surprising that we haven’t seen any recent-but-rare events. Again, a simple simulation is all it takes to see that.
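
Here’s one such sketch, with arbitrary numbers (a million-year history, three event rates): for each rate, it measures how often the most recent event falls inside the last thousand years. Nobody dies in this model, so there’s no selection effect to hide anything:

```python
import random

random.seed(0)

# A 1,000,000-year history. Each event type has a fixed yearly rate; we
# record the time since its most recent occurrence, if it occurred at all.
HISTORY = 1_000_000  # years

def years_since_last(rate_per_year):
    """Time since the most recent event, or None if none ever occurred."""
    t, last = 0.0, None
    while True:
        t += random.expovariate(rate_per_year)  # waiting time to next event
        if t > HISTORY:
            return None if last is None else HISTORY - last
        last = t

for rate in (1e-3, 1e-4, 1e-5):
    hits = sum(1 for _ in range(1000)
               if (y := years_since_last(rate)) is not None and y < 1_000)
    print(f"rate {rate:g}/yr: {hits}/1000 histories had an event in the last 1,000 years")
```

The rarer the event, the emptier the “recent” column, in a model with no observers to select. The blank corner of figure 8 is a property of rare events, not a shadow cast by our survival.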

A major theme of Bostrom’s work is the claim that you can’t learn anything from the absence of catastrophes in the recent past. This defies common sense and is easily refuted by simple experiments. It’s the kind of thing you’d expect to hear from a crank, not a respected public intellectual. But like many public intellectuals, Bostrom has a reality distortion field that makes it hard to convince people he’s wrong. His affiliation with the University of Oxford certainly doesn’t help.

I once presented the evidence in this article to a major Bostrom fan, and the response I got was simply “Bostrom didn’t say that.” So I just want to emphasize once and for all that yes, he really did say these things. Here’s a direct quote from the anthropic shadow paper:

Anthropic bias can be understood as a form of sampling bias, in which the sample of observed events is not representative of the universe of all events, but only representative of the set of events compatible with the existence of suitably positioned observers. We show that some [existential risk] probabilities derived from past records are unreliable due to the presence of observation selection effects. Anthropic bias, we maintain, can lead to underestimation of the probability of a range of catastrophic events.

[…]

Overconfidence becomes very large for very destructive events. As a consequence, we should have no confidence in historically based probability estimates for events that would certainly extinguish humanity.

Nick Bostrom has a lot of critics, but almost none of them have attacked his academic work. This is weird, because Bostrom’s work merely looks technical. He uses a lot of big words and long sentences because he’s a crappy writer, not because his ideas are profound. This is a major indictment of both Bostrom and his critics. Bostrom is treated as a serious intellectual because he’s an Oxford professor with academic publications to his name. And his critics, because of laziness, cowardice, or stupidity, have contented themselves with attacking either his popular work or his character.

In all likelihood, Bostrom’s most successful years are behind him, and he’ll gradually fade into obscurity over the next few decades. In case that doesn’t happen, I hope this article will be useful to someone who might otherwise have been taken in by him.