
Welcome to the Online Debate Network.

  1. #1
    Registered User | Join Date: May 2013 | Posts: 90

    Newcomb's Problem

    I've been studying Newcomb's Problem for a while now, and I thought it might be fun to discuss it here. I will introduce the problem and some of the most important ideas from the literature. I support the position below called two-boxing, but I can't be bothered with defending it to death, so I've placed this in Formal Discussions instead.

    Introduction

    Newcomb's Problem is a thought experiment devised by the physicist William Newcomb sometime around 1963, and first described in writing in Robert Nozick's 1969 paper 'Newcomb's Problem and Two Principles of Choice' (Nozick 1969). The literature also refers to the thought experiment as Newcomb's Paradox (for example, see Wolpert 2013), but I have chosen to use the term 'problem' as it is the original term, and is less presumptuous. Nozick's presentation of Newcomb's Problem (found in Nozick 1969 pp. 114-115) can be summarised thusly:

    There are two boxes, B1 and B2. B1 contains $1,000. B2 contains either $0 or $1,000,000, but you do not know which. You have a choice between two options: taking (the contents of) both B1 and B2 (two-boxing), or taking just (the contents of) B2 (one-boxing). These are your only two options. Additionally, you know there is a being called the Predictor which is very good at predicting your actions. The Predictor has often correctly predicted your choices in the past, never making an incorrect prediction, and has correctly predicted the choices of many other people similar to you in the situation you are now facing. You know the Predictor has made a prediction as to what choice you will take in this situation. Importantly, the Predictor is in charge of the contents of B2: if the Predictor predicted you would one-box, it placed $1,000,000 in B2, whereas if it predicted you would two-box, it placed nothing in B2. The Predictor has already made its prediction, and thus it has already set up B2. What do you do?
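    The setup can be sketched as a small simulation (a rough illustration only; the 99% accuracy figure is my assumption, and modelling accuracy as a correlation with the actual choice is precisely the point two-boxers dispute):

    ```python
    import random

    def play(choice, accuracy, rng):
        """One round of Nozick's game. The Predictor guesses the player's
        choice with probability `accuracy`; B2 is filled with $1,000,000
        iff the Predictor predicted one-boxing."""
        other = "two" if choice == "one" else "one"
        predicted = choice if rng.random() < accuracy else other
        b2 = 1_000_000 if predicted == "one" else 0
        return b2 if choice == "one" else 1_000 + b2

    rng = random.Random(0)
    for choice in ("one", "two"):
        avg = sum(play(choice, 0.99, rng) for _ in range(100_000)) / 100_000
        print(choice, round(avg))
    ```

    With a highly accurate Predictor, the simulated average payoff for one-boxing far exceeds that for two-boxing.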

    Common Solution Attempts
    Newcomb's Problem is of interest because there seem to be strong arguments to the effect that one-boxing is correct, and strong arguments to the effect that two-boxing is correct. The conjunction of these strong arguments is why Newcomb's Problem is sometimes referred to as a paradox. These arguments can be broken down into two types: intuitive arguments, and decision-theoretic arguments. Intuitive arguments are simply arguments based on intuitive judgements. Decision-theoretic arguments assume a particular decision theory and show that according to that decision theory, a certain decision is best. Note that by a decision theory, we mean a theory which attempts to tell us what we ought to do in a particular situation, perhaps in order to achieve some specified goal. The term decision theory can also be used in a descriptive sense, but we are only interested in the normative usage. In this post, I will fully describe the intuitive arguments, then briefly describe the decision-theoretic arguments at the end.

    Intuitive Arguments: Why You Should Two-Box
    We know that the Predictor has already made its prediction, and has already set up the boxes. So there is either $1,000,000 in B2 or nothing. In either case, we're better off taking B1 and B2 over just B2; precisely $1,000 better off! Therefore we should two-box.

    Sometimes this intuitive argument is treated more formally: the possible states of the world relevant to your decision at your time of decision making - in this case, the possible contents of B2 - are called the states of nature (Wedgwood 2013 p. 2644), and because two-boxing has a higher payoff than one-boxing in every state of nature, two-boxing is said to dominate one-boxing (Wedgwood 2013 p. 2661). The table below illustrates the payoff structure and clearly demonstrates that two-boxing dominates one-boxing.

    Payoff              One-Box      Two-Box
    $1,000,000 in B2    $1,000,000   $1,001,000
    $0 in B2            $0           $1,000
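    The dominance check can be spelled out mechanically (payoff amounts taken from the table above):

    ```python
    # Payoffs keyed by (contents of B2, action).
    payoff = {
        ("full", "one"): 1_000_000, ("full", "two"): 1_001_000,
        ("empty", "one"): 0,        ("empty", "two"): 1_000,
    }

    states = ("full", "empty")
    # Two-boxing (strictly) dominates one-boxing: it pays strictly more
    # in every state of nature.
    dominates = all(payoff[(s, "two")] > payoff[(s, "one")] for s in states)
    print(dominates)  # → True
    ```

    In every state, the gap is exactly the $1,000 sitting in B1.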

    Intuitive Arguments: Why You Should One-Box
    The Predictor is very good at predicting what we will do. Whatever action we take, we can safely assume the Predictor will have predicted our action correctly. So if we two-box, we can safely assume the Predictor predicted we would two-box, and hence put nothing in B2. If we one-box, we can safely assume the Predictor predicted that, and hence put $1,000,000 in B2. Thus, we can either two-box and receive $1,000, or one-box and receive $1,000,000. Therefore we should one-box.

    Decision-Theoretic Arguments
    My description here will be brief and assumes some knowledge of mathematics. If anyone wants a longer exposition of these arguments, tell me and I'll provide one in the future.

    A variety of decision theories have been applied to Newcomb's Problem, including some rather exotic theories. For instance, see Ralph Wedgwood's Benchmark Theory, which was created partly in response to Newcomb's Problem (Wedgwood 2013); Robert Bassett critiques Benchmark Theory in (Bassett 2015). However, the literature has focused on two types of decision theory: Causal Decision Theory (CDT) and Evidential Decision Theory (EDT) (Wedgwood 2013 pp. 2643-2644). Both of these types of decision theory are versions of Expected Utility Theory (EUT) (Wedgwood 2013 p. 2644). Whilst EUT has a complex axiomatic foundation, its core principle is simply that one should act in order to maximise one's expected utility, where expectation is meant in the technical mathematical sense (Mongin 1997 p. 342).

    However, for Newcomb's Problem, we do not need to worry ourselves with what utility means. Treat the question in Newcomb's Problem as 'What should you do in order to maximise your monetary payoff?'. EDT states that in order to maximise your monetary payoff M, you need to maximise the expectation E[M | d], where d is the decision you take. CDT states that in order to maximise your monetary payoff M, you need to maximise the expectation E[M | do(d)], where the operator do() is defined by Pearl in (Pearl 2000). EDT implies you should one-box, whilst CDT implies you should two-box.
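    Here is a minimal numerical sketch of the two computations. The 0.9 accuracy and 0.5 prior are my illustrative assumptions; the essential difference is that EDT lets the probability of a full B2 depend on the decision, while CDT (via do()) treats the contents of B2 as fixed regardless of the decision:

    ```python
    def edt_value(d, accuracy):
        """E[M | d]: under EDT, the probability that B2 is full is
        conditioned on the decision, via the Predictor's accuracy."""
        p_full = accuracy if d == "one" else 1 - accuracy
        return 1_000_000 * p_full + (1_000 if d == "two" else 0)

    def cdt_value(d, p_full):
        """E[M | do(d)]: under CDT, intervening on d cannot change the
        already-fixed contents of B2, so p_full is an unconditional prior."""
        return 1_000_000 * p_full + (1_000 if d == "two" else 0)

    print(edt_value("one", 0.9), edt_value("two", 0.9))  # EDT: one-boxing wins
    print(cdt_value("one", 0.5), cdt_value("two", 0.5))  # CDT: two-boxing wins
    ```

    Under CDT, two-boxing beats one-boxing by exactly $1,000 whatever the prior, which is just the dominance argument in expected-value form.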

    Note
    Nozick's presentation of Newcomb's Problem is problematic in a number of ways; if anyone wants me to address the issues with Nozick's presentation (for instance if one is suspicious of whether the thought experiment is even possible), I can provide a different presentation of Newcomb's Problem which avoids Nozick's problems in a future post.

    References

    Arif Ahmed. Infallibility in the Newcomb problem. Erkenntnis, 80(2):261-273, 2015.
    Robert Bassett. A critique of benchmark theory. Synthese, 192(1):241-267, 2015.
    Philippe Mongin. Expected utility theory. In Handbook of Economic Methodology, pages 342-350, 1997.
    Robert Nozick. Newcomb's problem and two principles of choice. In Nicholas Rescher, editor, Essays in Honor of Carl G. Hempel, pages 114-146. Reidel, 1969.
    Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
    Ralph Wedgwood. Gandalf's solution to the Newcomb problem. Synthese, 190(14):2643-2675, 2013.
    David H. Wolpert and Gregory Benford. The lesson of Newcomb's paradox. Synthese, 190(9):1637-1646, 2013.

  3. #2
    ODN Community Regular | Join Date: Mar 2008 | Location: Seattle, Washington USA | Posts: 7,068

    Re: Newcomb's Problem

    Neat.

    I chose to One box on first reading the problem.

    I suppose I am weighing the certainty of 1K vs the risk for 1M and deciding the 50% risk is better.

    I can't think of a good reason to suspect the Predictor would get it wrong, so I'm presuming he is at worst 50% at guessing, and at those odds the 1M plus risk looks much more appealing than the 1K, which has value but is not nearly as significant.

  5. #3
    Registered User | Join Date: May 2013 | Posts: 90

    Re: Newcomb's Problem

    I'm glad you like it!

    Quote Originally Posted by Sigfried View Post
    I suppose I am weighing the certainty of 1K vs the risk for 1M and deciding the 50% risk is better.

    I can't think of a good reason to suspect the Predictor would get it wrong, so I'm presuming he is at worst 50% at guessing, and at those odds the 1M plus risk looks much more appealing than the 1K, which has value but is not nearly as significant.
    Let me try to formalise your thought within a certain version of EDT and see if you like the formalisation. Let K be what we know whilst we're making our decision; d our hypothetical action, where d=1 means we one-box and d=2 means we two-box; P the prediction of the Predictor, where P=1 means it predicted we would one-box and P=2 means it predicted we would two-box; and M our monetary payoff. (I'll also write P(·) for probability; context distinguishes it from the prediction variable P.) EDT (or at least this version of it) says we should choose the d which maximises E[M | d, K]. Note that I am assuming the question is 'What should you do in order to maximise your monetary payoff?', rather than simply 'What should you do?' or 'What do you do?', as I stated very briefly in the OP. Let's compute the expectations according to (this version of) EDT!

    E[M | d=1, K] = 1,000,000*P(P=1 | d=1, K)
    E[M | d=2, K] = 1,000 + 1,000,000*P(P=1 | d=2, K)

    Okay, but when is one-boxing preferable over two-boxing? By computation,
    E[M | d=1, K] > E[M | d=2, K]
    iff
    P(P=1 | d=1, K) + P(P=2 | d=2, K) > 1 + (1/1,000)

    Great! Now, we can argue that P(P=1 | d=1, K) is the probability of correct prediction in one-box cases, and P(P=2 | d=2, K) is the probability of correct prediction in two-box cases. In Nozick's thought experiment, as the Predictor is very good at predicting, these must both be very high and so according to EDT, one-boxing is better than two-boxing. But Sigfried considered the case when the predictor guessed with only 50% accuracy; if we interpret this to mean P(P=1 | d=1, K) = 0.5 = P(P=2 | d=2, K), then unfortunately the inequality isn't quite satisfied, so two-boxing is actually preferred to one-boxing. However, if each probability of prediction is at least slightly greater than 50% - e.g. 51% each - then the inequality is satisfied. So as long as the Predictor is at least slightly better at predicting than guessing randomly, we should one-box.
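    The threshold derived above is easy to check numerically (the probability values are just the illustrative cases discussed):

    ```python
    def one_box_better(p1, q2):
        """p1 = P(P=1 | d=1, K), q2 = P(P=2 | d=2, K).
        From the derivation above, one-boxing has the higher EDT value
        iff p1 + q2 > 1 + 1/1,000."""
        return p1 + q2 > 1 + 1 / 1_000

    print(one_box_better(0.5, 0.5))    # → False: at exactly 50%, two-box
    print(one_box_better(0.51, 0.51))  # → True: slightly better than chance suffices
    ```

    So the crossover sits just above random guessing, not at some demanding level of accuracy.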

    To be clear about exactly what mistake Sigfried made: he weighed 1M-with-risk against a certain 1K, but in fact the options are 1M*risk1 vs. 1K + 1M*risk2, where risk1 = P(P=1 | d=1, K) and risk2 = P(P=1 | d=2, K).

 

 
