Comment author: JamesDrain 05 November 2017 04:10:30AM *  1 point

I’m worried that people’s altruistic sentiments are ruining their intuition about the prisoner’s dilemma. If Bob were an altruist, then there would be no dilemma. He would just cooperate. But within the framework of the one-shot prisoner’s dilemma, defecting is a dominant strategy – no matter what Alice does, Bob is better off defecting.
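To make the dominance argument concrete, here is a toy check in Python. The payoff numbers are the standard textbook ones (T=5, R=3, P=1, S=0), assumed for illustration rather than taken from anything above:

```python
# payoffs[(bob, alice)] = Bob's payoff (higher is better).
# Standard textbook PD values, assumed for illustration.
payoffs = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # Bob cooperates, Alice defects (S)
    ("D", "C"): 5,  # Bob defects, Alice cooperates (T)
    ("D", "D"): 1,  # mutual defection (P)
}

# Whatever Alice plays, Bob's payoff is strictly higher if he defects.
for alice in ("C", "D"):
    assert payoffs[("D", alice)] > payoffs[("C", alice)]
print("Defection strictly dominates for Bob.")
```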

I’m all for caring about other value systems, but if there’s no causal connection between our actions and the aliens’, then it’s impossible to trade with them. I can pump someone’s intuition by saying, “Imagine a wizard produced a copy of yourself and had the two of you play the prisoner’s dilemma. Surely you would cooperate?” But that thought experiment is messed up, because I care about copies of myself in a way that defies the setup of the prisoner’s dilemma.

One way to get cooperation in the one-shot prisoner’s dilemma is if Bob and Alice can inspect each other’s source code and prove that the other player will cooperate if and only if they do. But then Alice and Bob can communicate with each other! By provably committing to this strategy, Alice and Bob can cause other players with the same strategy to cooperate.
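As a rough illustration of that idea, here is a minimal Python sketch of the crudest version of such a strategy (sometimes called a “CliqueBot”): cooperate exactly when the opponent’s source code is textually identical to your own. Real program-equilibrium results use proof search rather than string comparison; everything below is illustrative:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is identical to mine.

    This is the crude, syntactic version of "prove the other player
    cooperates iff I do": exact textual self-matching. Program
    equilibrium papers replace string equality with proof search.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

# Two instances of the same program inspect each other and cooperate.
source = inspect.getsource(clique_bot)
print(clique_bot(source), clique_bot(source))            # -> C C
print(clique_bot("def defect_bot(_): return 'D'"))       # -> D
```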

Evidential decision theory also preys on our sentiments. I’d like to live in a cool multiverse where there are aliens outside my light cone who do what I want them to, but it’s not like my actions can cause that world to be the one I was born into.

I’m all for chasing after infinities and being nice to aliens, but acausal trade makes no sense. I’m willing to take many other infinite gambles, like theism or simulationism, before I’m willing to throw out causality.

Comment author: caspar42 07 November 2017 09:06:24PM *  4 points

I agree that altruistic sentiments are a confounder in the prisoner's dilemma. Yudkowsky (who would cooperate against a copy) makes a similar point in The True Prisoner's Dilemma, and there are lots of psychology studies showing that humans cooperate with each other in the PD in cases where I think they (that is, each individually) shouldn't. (Cf. section 6.4 of the MSR paper.)

But I don't think that altruistic sentiments are the primary reason why some philosophers and other sophisticated people tend to favor cooperation in the prisoner's dilemma against a copy. As you may know, Newcomb's problem is decision-theoretically similar to the PD against a copy. In contrast to the PD, however, it doesn't seem to evoke any altruistic sentiments. And yet, many people prefer EDT's recommendations in Newcomb's problem. Thus, the "altruism error theory" of cooperation in the PD is not particularly convincing.
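To see the structural similarity: in both problems a "mirror" (a perfect predictor in Newcomb's problem, an exact copy in the PD) ends up matching whatever policy you choose, so only the diagonal outcomes are reachable. A toy Python sketch, with assumed standard payoffs ($1M/$1k for Newcomb, 3/1 for the mirrored PD outcomes):

```python
# Assumed payoffs; in both games the "mirror" matches your policy,
# so only the diagonal outcomes are reachable.
newcomb = {"one-box": 1_000_000, "two-box": 1_000}   # predictor mirrors you
twin_pd = {"cooperate": 3, "defect": 1}              # copy mirrors you

for name, game in (("Newcomb", newcomb), ("Twin PD", twin_pd)):
    best = max(game, key=game.get)
    print(f"{name}: best reachable policy is {best!r} (payoff {game[best]})")
```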

I don't see much evidence in favor of the "wishful thinking" hypothesis. It, too, seems to fail in non-multiverse problems like Newcomb's paradox. Also, it's easy to come up with lots of incorrect theories about how any particular view results from biased epistemics, so I have quite low credence in any such hypothesis that isn't backed up by evidence.

> before I’m willing to throw out causality

Of course, causal eliminativism (or skepticism) is one motivation to one-box in Newcomb's problem, but subscribing to eliminativism is not necessary to do so.

For example, in Evidence, Decision and Causality, Arif Ahmed argues that causality is irrelevant for decision-making. (The book starts with: "Causality is a pointless superstition. These days it would take more than one book to persuade anyone of that. This book focuses on the ‘pointless’ bit, not the ‘superstition’ bit. I take for granted that there are causal relations and ask what doing so is good for. More narrowly still, I ask whether causal belief plays a special role in decision.") Alternatively, one could even grant that causal relationships should inform one's decisions and still endorse one-boxing. See, e.g., Yudkowsky, 2010; Fisher, n.d.; Spohn, 2012; or this talk by Ilya Shpitser.
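For readers who want the two recommendations side by side, here is a numerical sketch of how EDT and CDT evaluate Newcomb's problem. All the numbers (a $1,000,000 opaque box, a $1,000 transparent box, 99% predictor accuracy, a 0.5 prior) are illustrative assumptions, not anything from the discussion above:

```python
# EDT vs. CDT expected values in Newcomb's problem (toy numbers).
p = 0.99  # P(predictor is right | your choice) -- assumed

# EDT conditions on the action: choosing one-box is evidence the
# opaque box is full; choosing two-box is evidence it is empty.
edt_one_box = p * 1_000_000
edt_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# CDT holds the box contents fixed: with prior probability q that the
# box is already full, two-boxing gains $1,000 regardless of q.
q = 0.5  # illustrative prior
cdt_one_box = q * 1_000_000
cdt_two_box = q * 1_000_000 + 1_000

print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")
print(f"CDT: one-box {cdt_one_box:,.0f} vs two-box {cdt_two_box:,.0f}")
```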

Comment author: caspar42 02 November 2017 09:48:43PM 4 points

A few of the points made in this piece are similar to the points I make here: https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/

For example, the linked piece also argues that returns may diminish in a variety of ways. In particular, it argues that returns diminish more slowly if the problem is big, and that clustered-value problems only produce benefits once the whole problem is solved.
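As a toy illustration of the contrast (the functional forms here are assumed for concreteness and are not from the linked post): a smoothly diminishing-returns problem pays off a little for every unit of resources, while a clustered-value problem pays off nothing until it is fully solved.

```python
import math

def diminishing_returns(resources: float) -> float:
    """Every marginal unit helps, but less and less (logarithmic form assumed)."""
    return math.log1p(resources)

def clustered_value(resources: float, cost_to_solve: float = 100.0) -> float:
    """All-or-nothing: no benefit until the whole problem is solved."""
    return 1.0 if resources >= cost_to_solve else 0.0

for r in (10, 50, 100):
    print(r, round(diminishing_returns(r), 2), clustered_value(r))
```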