This paper sets out to resolve how agents ought to act in the Sleeping Beauty problem and various related anthropic (self-locating belief) problems, not through the calculation of anthropic probabilities, but through finding the correct decision to make. It creates an anthropic decision theory (ADT) that decides these problems from a small set of principles. In doing so, it demonstrates that the attitude of agents towards each other (selfish or altruistic) changes the decisions they reach, and that it is crucial to take this into account. To illustrate ADT, it is then applied to two major anthropic problems and paradoxes, the Presumptuous Philosopher and Doomsday problems, thus resolving some issues about the probability of human extinction.
We cannot have been born on a planet unable to support life. This self-evident fact is an example of anthropic, or self-locating, reasoning: we cannot be 'outside observers' when looking at facts that are connected with our own existence. By realising that we exist, we change our credence in certain things being true or false. But how, exactly? Anyone alive is certain to have had parents, but what is the probability that they have siblings?
Different approaches have been used to formalise the impact of anthropics on probability. The two most popular revolve around the ‘Self-Sampling Assumption’ (SSA) and the ‘Self-Indication Assumption’ (SIA) (Bostrom, 2002). Both of these give a way of computing anthropic probabilities, answering questions such as ‘Given that I exist and am human, what probability should I assign to there being billions (or trillions) of other humans?’
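As a simple illustration of how the two assumptions diverge, consider a toy setup: a fair coin determines whether a world contains one observer (heads) or two observers (tails), and an observer inside the experiment asks for the probability of tails. SIA weights each possible world by the number of observers it contains, giving $P(\text{tails} \mid \text{I exist}) = \frac{\frac{1}{2}\cdot 2}{\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 2} = \frac{2}{3}$, while SSA treats the observer as sampled from within whichever world is actual, so their existence carries no information and the credence stays at the prior of $\frac{1}{2}$.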
The two assumptions are incompatible with each other, and give different answers to standard anthropic problems. Nor are they enough to translate probabilities into decisions. Many anthropic problems revolve around identical copies, deciding in the same way as each other, but causal (Lewis, 1981) and evidential (Gibbard and Harper, 1981) decision theories differ on whether agents can make use of this fact. And agents using SIA and SSA can end up always making the same decision, while still calculating different probabilities (Armstrong, 2012). We are at risk of getting stuck in an intense debate whose solution is still not enough to tell us how to decide.
Hence this paper will sidestep the issue, and will not advocate for either of the anthropic probability assignments, indeed arguing that anthropic situations are a distinct setup, where many of the arguments in favour of probability assignment (such as long-run frequencies) must fail.
Instead, it seeks to directly find the correct decision in anthropic problems. The approach has some precedent: Briggs (Briggs, 2010) argues that SIA-type decision making is correct. On the face of it, this still seems an extraordinary ambition: how can the right decision be made, if the probabilities aren’t fully known? It turns out that, with a few broad principles, it is possible to decide these problems without using any contentious assumptions about what the agent’s credences should be. The aim of this approach is to extend classical decision making (expected utility maximisation) to anthropic situations, without needing to use anthropic probabilities.
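For orientation, classical expected utility maximisation (in standard notation) recommends the action $a^* = \arg\max_a \sum_w P(w)\,U(a, w)$, where the sum ranges over possible worlds $w$. The difficulty in anthropic problems is that the observer’s probabilities $P(w)$ are precisely what is in dispute, so the extension must recover sensible decisions without first settling them.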
This will be illustrated by careful analysis of one of the most famous anthropic problems, the Sleeping Beauty problem (Elga, 2000). One of the difficulties with the problem is that it is often underspecified from the decision perspective. The correct decision differs depending on how much Sleeping Beauty considers the welfare of her other copies, and whether they share common goals. Once the decision problem is fully specified, a few principles are enough to decide the different variants of the Sleeping Beauty problem and many other anthropic problems.
That principled approach fails for problems with non-identical agents, so this paper then presents an Anthropic Decision Theory (ADT), which generalises naturally and minimally from the identical-agent case. It is not a normative claim that ADT is the ideal decision theory, but a practical claim that following ADT allows certain gains (and extends the solutions to the Sleeping Beauty problem). ADT is based on ‘self-confirming linkings’: beliefs that agents may have about the relationship between their decision and that of the other agents. These linkings are self-confirming because, if believed, they are true. This allows the construction of ADT based on open promises to implement self-confirming linkings in certain situations. Note that ADT is nothing but the anthropic version of the far more general Updateless Decision Theory and Functional Decision Theory (Soares and Fallenstein, 2017).
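To take a concrete illustration: if two identical copies of an agent each believe ‘whatever I decide, my copy will decide the same’, then, since both copies run through exactly the same reasoning from the same inputs, acting on that belief guarantees that it holds; the linking is made true by being believed.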
The last part of the paper applies ADT to two famous anthropic problems, the Presumptuous Philosopher and Doomsday problems, and removes their counter-intuitive components. This is especially relevant, as it removes the extra risk inherent in the Doomsday argument, showing that there is no reason to fear extinction more than the objective probabilities alone would imply (Bostrom, 2013).
The Sleeping Beauty problem (Elga, 2000) is one of the most fundamental in anthropic probability. Many other problems are related to it, such as the absent-minded driver (Aumann et al., 1996), the Sailor’s Child problem (Neal, 2006), the incubator and the presumptuous philosopher (Bostrom, 2002). In this paper’s perspective, all these problems are variants of each other, the difference being the agents’ level of mutual altruism.
In the standard setup, Sleeping Beauty is put to sleep on Sunday, and awoken again Monday morning, without being told what day it is. She is put to sleep again at the end of the day. A fair coin was tossed before the experiment began. If that coin showed heads, she is not woken again before the experiment ends; if it showed tails, her memory of the Monday awakening is erased and she is woken again on Tuesday, once more without being told what day it is.