I've been studying Newcomb's Problem for a while now, and I thought it might be fun to discuss it here. I will introduce the problem and some of the most important ideas from the literature. I support the position below called two-boxing, but I can't be bothered with defending it to the death, so I've placed this in Formal Discussions instead.

Introduction

Newcomb's Problem is a thought experiment devised by the physicist William Newcomb sometime around 1963, and first described in writing in Robert Nozick's 1969 paper 'Newcomb's Problem and Two Principles of Choice' (Nozick 1969). The literature also refers to the thought experiment as Newcomb's Paradox (for example, see Wolpert and Benford 2013), but I have chosen to use the term 'problem' as it is the original term, and is less presumptuous. Nozick's presentation of Newcomb's Problem (found in Nozick 1969 pp. 114-115) can be summarised as follows:

There are two boxes, B1 and B2. B1 contains £1,000. B2 either contains £0 or £1,000,000, but you do not know which. You have a choice between two options: taking (the contents of) both B1 and B2 (two-boxing), or taking just (the contents of) B2 (one-boxing). These are your only two options. Additionally, you know there is a being called the Predictor which is very good at predicting your actions. The Predictor has often correctly predicted your choices in the past, never making an incorrect prediction, and has correctly predicted the choices of many other people similar to you in the situation you are now facing. You know the Predictor has made a prediction as to what choice you will take in this situation. Importantly, the Predictor is in charge of the contents of B2: if the Predictor predicted you would one-box, it placed £1,000,000 in B2, whereas if it predicted you would two-box, it placed nothing in B2. The Predictor has already made its prediction, and thus it has already set up B2. What do you do?

Common Solution Attempts

Newcomb's Problem is of interest because there seem to be strong arguments to the effect that one-boxing is correct, and strong arguments to the effect that two-boxing is correct. The conjunction of these strong arguments is why Newcomb's Problem is sometimes referred to as a paradox. These arguments can be broken down into two types: intuitive arguments and decision-theoretic arguments. Intuitive arguments are simply arguments based on intuitive judgements. Decision-theoretic arguments assume a particular decision theory and show that, according to that decision theory, a certain decision is best. Note that by a decision theory, we mean a theory which attempts to tell us what we ought to do in a particular situation, perhaps in order to achieve some specified goal. The term decision theory can also be used in a descriptive sense, but we are only interested in the normative usage. In this post, I will fully describe the intuitive arguments, then briefly describe the decision-theoretic arguments at the end.

Intuitive Arguments: Why You Should Two-Box

We know that the Predictor has already made its prediction, and has already set up the boxes. So either there is £1,000,000 in B2 or nothing. In either case, we're better off taking B1 and B2 over just B2; precisely £1,000 better off! Therefore we should two-box.

Sometimes this intuitive argument is treated more formally: the possible states of the world relevant to your decision at your time of decision making - in this case, the possible contents of B2 - are called the states of nature (Wedgwood 2013 p. 2644), and because two-boxing has a higher payoff than one-boxing in every state of nature, two-boxing is said to dominate one-boxing (Wedgwood 2013 p. 2661). The table below illustrates the payoff structure and clearly demonstrates that two-boxing dominates one-boxing.

Payoff              One-Box       Two-Box
£1,000,000 in B2    £1,000,000    £1,001,000
£0 in B2            £0            £1,000
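The dominance claim can be checked mechanically from the table above. The following is a minimal sketch (the variable names are my own, and payoffs are in pounds):

```python
# Payoff matrix from the table above, in pounds.
# Keys: state of nature (contents of B2); values: payoff per decision.
payoffs = {
    "million_in_B2": {"one_box": 1_000_000, "two_box": 1_001_000},
    "nothing_in_B2": {"one_box": 0,         "two_box": 1_000},
}

# Two-boxing dominates one-boxing iff it pays at least as much in
# every state of nature and strictly more in at least one state.
at_least_as_good = all(p["two_box"] >= p["one_box"] for p in payoffs.values())
strictly_better  = any(p["two_box"] >  p["one_box"] for p in payoffs.values())
dominates = at_least_as_good and strictly_better

print("two-boxing dominates one-boxing:", dominates)
```

In fact the dominance here is strict in every state: two-boxing is exactly £1,000 better regardless of what the Predictor did.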

Intuitive Arguments: Why You Should One-Box

The Predictor is very good at predicting what we will do. Whatever action we take, we can safely assume the Predictor will have predicted our action correctly. So if we two-box, we can safely assume the Predictor predicted we would two-box, and hence put nothing in B2. If we one-box, we can safely assume the Predictor predicted that, and hence put £1,000,000 in B2. Thus, we can either two-box and receive £1,000, or one-box and receive £1,000,000. Therefore we should one-box.

Decision-Theoretic Arguments

My description here will be brief and assumes some knowledge of mathematics. If anyone wants a longer exposition of these arguments, tell me and I'll provide one in the future.

A variety of decision theories have been applied to Newcomb's Problem, including some rather exotic theories. For instance, see Ralph Wedgwood's Benchmark Theory, which was created partly in response to Newcomb's Problem (Wedgwood 2013); Robert Bassett critiques Benchmark Theory in (Bassett 2015). However, the literature has focused on two types of decision theory: Causal Decision Theory (CDT) and Evidential Decision Theory (EDT) (Wedgwood 2013 pp. 2643-2644). Both of these types of decision theory are versions of Expected Utility Theory (EUT) (Wedgwood 2013 p. 2644). Whilst EUT has a complex axiomatic foundation, its core principle is simply that one should act in order to maximise one's expected utility, where expectation is meant in the technical mathematical sense (Mongin 1997 p. 342).

However, for Newcomb's Problem, we do not need to worry ourselves with what utility means. Treat the question in Newcomb's Problem as 'What should you do in order to maximise your monetary payoff?'. EDT states that in order to maximise your monetary payoff M, you need to maximise the expectation E[M | d], where d is the decision you take. CDT states that in order to maximise your monetary payoff M, you need to maximise the expectation E[M | do(d)], where the operator do() is defined by Pearl in (Pearl 2000). EDT implies you should one-box, whilst CDT implies you should two-box.
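The divergence between the two theories can be made concrete with a little arithmetic. Below is an illustrative sketch; the predictor accuracy p = 0.99 and the prior q are my own choices for illustration, not part of Nozick's presentation:

```python
# Illustrative comparison of EDT and CDT expected payoffs,
# assuming the Predictor predicts correctly with probability p.
p = 0.99  # assumed predictor accuracy (my choice, for illustration)

payoff = {  # (decision, contents of B2 in pounds) -> payoff in pounds
    ("one_box", 1_000_000): 1_000_000,
    ("one_box", 0):         0,
    ("two_box", 1_000_000): 1_001_000,
    ("two_box", 0):         1_000,
}

# EDT conditions on the decision as evidence: if you one-box, the
# Predictor predicted one-boxing with probability p, so B2 holds
# £1,000,000 with probability p (and symmetrically for two-boxing).
edt_one = p * payoff[("one_box", 1_000_000)] + (1 - p) * payoff[("one_box", 0)]
edt_two = (1 - p) * payoff[("two_box", 1_000_000)] + p * payoff[("two_box", 0)]

# CDT evaluates E[M | do(d)]: intervening on d does not change the
# (already fixed) contents of B2, so the same prior q applies to
# both decisions. Any q gives the same ranking, by dominance.
q = 0.5  # assumed prior probability that B2 holds £1,000,000
cdt_one = q * payoff[("one_box", 1_000_000)] + (1 - q) * payoff[("one_box", 0)]
cdt_two = q * payoff[("two_box", 1_000_000)] + (1 - q) * payoff[("two_box", 0)]

print("EDT prefers:", "one-boxing" if edt_one > edt_two else "two-boxing")
print("CDT prefers:", "one-boxing" if cdt_one > cdt_two else "two-boxing")
```

With p = 0.99, EDT values one-boxing at roughly £990,000 against roughly £11,000 for two-boxing, whereas under CDT two-boxing beats one-boxing by exactly £1,000 whatever q is; this is just the dominance argument restated in expected-value terms.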

Note

Nozick's presentation of Newcomb's Problem is problematic in a number of ways; if anyone wants me to address the issues with Nozick's presentation (for instance if one is suspicious of whether the thought experiment is even possible), I can provide a different presentation of Newcomb's Problem which avoids Nozick's problems in a future post.

References

Arif Ahmed. Infallibility in the Newcomb problem. Erkenntnis, 80(2):261–273, 2015.

Robert Bassett. A critique of benchmark theory. Synthese, 192(1):241–267, 2015.

Philippe Mongin. Expected utility theory. Handbook of economic methodology, pages 342–350, 1997.

Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.

Robert Nozick. Newcomb’s problem and two principles of choice. In Nicholas Rescher, editor, Essays in Honor of Carl G. Hempel, pages 114–146. Reidel, 1969.

Ralph Wedgwood. Gandalf's solution to the Newcomb problem. Synthese, 190(14):2643–2675, 2013.

David H. Wolpert and Gregory Benford. The lesson of Newcomb's paradox. Synthese, 190(9):1637–1646, 2013.
