
If we try to maximize expected utility with an unbounded utility function, we will sometimes be reckless: we will accept gambles with an arbitrarily small chance of success if the payout is large enough. And it is not just expected utility maximizers who encounter this problem. Beckstead and Thomas have shown that any decision framework will be either reckless, timid (unwilling to take obviously good gambles), or non-transitive. This issue becomes important when considering the case for strong longtermism, which says that protecting the far future is overwhelmingly important because of the small probability that it contains an astronomically huge amount of value. It is also at the heart of the Pascal mugger problem, where someone threatens to use alleged supernatural powers to do some astronomically huge amount of harm unless we hand over our wallet (or promises a correspondingly huge amount of good if we do).

But there is a loophole in these arguments, one which lets us avoid the most problematic implications of recklessness in practice: we can choose to adopt prior probabilities which make it unlikely that our actions would have such large effects. The larger the potential effect of our action in some scenario, the less prior probability we assign to that scenario, in proportion. We can give zero probability to scenarios that would allow us to influence infinite utility, removing all of the issues that infinite value introduces. I usually see this referred to as the "dogmatic" response to recklessness. We simply adopt extreme confidence that the Pascal mugger is lying to us, or mistaken. If they claim to be able to do infinite harm, then we say that they are lying, or mistaken, with certainty.
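
To make this concrete, here is a minimal Python sketch of the idea, with an invented decay exponent and invented payoff and cost numbers (none of this is from the post's sources):

    # Compare a "credulous" prior, which gives the mugger's claim a fixed small
    # credence however large the claim, with a "dogmatic" prior whose credence
    # shrinks faster than the claimed payoff grows (here, an assumed 1/payoff^2).

    def expected_gain(payoff, credence, wallet_cost=100):
        """Expected utility of handing over the wallet, for a claimed payoff."""
        return credence * payoff - wallet_cost

    for payoff in [1e6, 1e12, 1e18]:
        credulous = 1e-9                      # fixed credence, however big the claim
        dogmatic = min(1e-9, 1 / payoff**2)   # credence falls faster than the claim grows
        print(f"payoff {payoff:.0e}: "
              f"credulous EV {expected_gain(payoff, credulous):+.3g}, "
              f"dogmatic EV {expected_gain(payoff, dogmatic):+.3g}")

    # With the credulous prior, a big enough claim always makes handing over the
    # wallet look good in expectation; with the dogmatic prior, the expected gain
    # stays pinned near minus the cost of the wallet no matter how big the claim.

The exact exponent is not important; what does the work is that the prior credence falls at least as fast as the claimed payoff grows.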

It seems to me that given results like Beckstead and Thomas, dogmatism is the most promising way to justify the obvious fact that it is not irrational to refuse to hand over your wallet to a Pascal mugger. (If anyone disagrees that this is an obvious fact, please get in touch, and be prepared to hand over lots of cash).

It is true that penalising large utilities in our priors will not eliminate recklessness entirely. For example, a credence that decays too slowly relative to the size of the claimed payoff could still lead us to take some long-shot gambles. Also, although our prior might severely penalise the possibility of our actions having large effects, we may encounter evidence which causes us to update away from our initial scepticism. But I think a dogmatic prior would eliminate the most problematic forms of reckless behaviour. To convince a dogmatic decision maker to be reckless, you would now need to make an argument of the form:

Here is a good reason to believe that X has this particular small but non-tiny chance of leading to Y, and Y would be extremely good/bad, so you should do this reckless thing.

But a dogmatic decision maker would not be susceptible to arguments of the form:

We can't rule out that X would lead to Y, so you surely can't assign it that small a probability, and Y would be extremely good/bad, so you should do this reckless thing.

In other words: we avoid the Pascal mugger problem, as well as, I believe, the strongest form of longtermism. But we continue to allow other reckless conclusions that actually seem perfectly fine, like the expected utility based justification for why you should vote in an election (there is a ~1/N chance of your vote changing the result in a close election, and ~N impact on utility if it does, where N is the number of voters).
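
The voting arithmetic, spelled out with made-up numbers for N and the per-person benefit, looks something like this:

    # Expected utility of voting in a close election, with illustrative numbers.
    N = 10_000_000            # number of voters (assumed)
    p_decisive = 1 / N        # ~1/N chance that your vote changes the result
    benefit_per_person = 1    # assumed utility gain per person if the better side wins

    expected_value_of_voting = p_decisive * (N * benefit_per_person)
    print(expected_value_of_voting)  # ~1: the 1/N chance and the ~N-sized impact cancel,
                                     # leaving roughly the benefit to one person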

Given how neat this solution to the problem seems to be, I am confused about why I don't see it defended more often. I believe it is defended by Holden Karnofsky here. But apart from that, I typically see this solution raised, labelled as 'dogmatic' (with obvious negative connotations), and then the discussion moves on. I'm not a philosopher though. I would be interested to read anything anyone can point me to that discusses this in more depth.

In the next section, I'll try to explain what I think is supposed to be wrong with the dogmatic approach. Then, in the following section, I'll explain why I don't actually find this problem to be that bad. I suggest that there could be a close analogy between dogmatism and Occam's razor: both are assumptions that maybe can't be justified in a purely epistemological way, but which nevertheless should be adopted by practical decision makers.

What I think is supposed to be wrong with dogmatism

The obvious objection to dogmatism is that it seems to be a form of motivated reasoning. It tells us to adopt extreme confidence that certain claims are false, for apparently no good reason except that we dislike their consequences. Isn't this the wrong way for a truth-seeker to behave?

Here is another way of looking at it: the dogmatic approach looks like it has things backwards. Epistemology should come first, and decision theory second. That is, first, we should look at the world and form beliefs. Then, given those beliefs, we should try to make good decisions. Proposing dogmatic priors as a solution to recklessness seems strange, because it is an argument which goes in the other direction. It starts by considering problems in decision theory, and then lifts those into conclusions about epistemology, which feels wrong.

Holden Karnofsky's blogpost side-steps this criticism by attempting to provide a first principles defence of dogmatic priors. In his account, the prior is not chosen because it resolves the Pascal mugger problem. That is merely a nice consequence. Instead he argues that a dogmatic prior should emerge naturally from our 'life experience': 

Say that you’ve come to believe – based on life experience – in a “prior distribution” for the value of your actions, with a mean of zero and a standard deviation of 1.

(and he advocates taking a normal, or log-normal, distribution with these parameters). 

He then argues that you can refuse the Pascal mugger on this basis.

But I can see how this argument might not be convincing. Something seems wrong with using your experience of ordinary everyday past decisions to make such a confident judgement on the possibility of this mugger being a sorcerer from another dimension. If you go around drawing these kinds of conclusions from your past life experiences, that seems like it would lead you into an unjustified level of scepticism about 'black swan' events. It is also not clear to me how Karnofsky's argument would handle the possibility of infinite value.

Personally, I think I'd like to make the dogmatic assumption a part of my true prior. That is, the probability distribution I adopt before I have looked at any evidence at all. It is one of my starting assumptions, and not derived from my life experience. But if I do that, then it does look like I might be open to the 'motivated reasoning' criticism described above.

Why I think dogmatism isn't so bad: the analogy with Occam's razor

In the last section I described how dogmatism feels wrong because its logic seems to go in the wrong direction: from decisions to epistemology, instead of from epistemology to decisions. But in this section I explain why I'm actually not so concerned about this, because I think all of our knowledge about the world ultimately depends on a similarly suspicious argument.

I'll start by explicitly making the argument for dogmatic priors as best I can, and then I'll discuss the problem of induction and Occam's razor, and why the best defence of Occam's razor probably takes a similar form.

An (outline of a possible) argument for dogmatism:

  • We should make decisions to maximize expected utility (see e.g. the von Neumann–Morgenstern theorem, or Savage's axioms)
  • There should be no bound on our utility function, at least when we're doing ethics (if an action affecting N individuals has utility X, the same action affecting 2N individuals should have utility 2X).
  • If there are actions available to us with infinite expected utility, then decision making under an expected utility framework becomes practically impossible (I'm sure you could write many papers on whether this claim is true or not, but here I'm just going to take it for granted).
  • We should therefore adopt dogmatic priors which penalise large utilities. This is the only way to rule out the possibility of infinite expected utility actions. (Note: the penalising of large finite utilities comes for free, even though we only required that infinite expected value be removed from the theory, because we need to rule out St Petersburg paradox type scenarios; a short sketch follows this list.)
  • If dogmatic priors are a good model for the world, we make good decisions, and if they're not, we were always doomed anyway.
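
Here is that St Petersburg point as a small numerical sketch; the extra down-weighting factor of 2^-k is one assumed choice of dogmatic penalty, not the only option:

    # St Petersburg-style gamble: with probability 2^-k you win 2^k utility, k = 1, 2, ...
    # At face value each term contributes 2^-k * 2^k = 1, so the expected utility
    # diverges. A dogmatic prior that further down-weights the chance of the larger
    # payoffs (here by an assumed extra factor of 2^-k) makes the sum converge.
    terms = 60

    naive_ev = sum((0.5 ** k) * (2 ** k) for k in range(1, terms + 1))
    dogmatic_ev = sum((0.5 ** k) * (0.5 ** k) * (2 ** k) for k in range(1, terms + 1))

    print(naive_ev)     # grows linearly with the number of terms: 60 here, unbounded in the limit
    print(dogmatic_ev)  # approaches 1, however many terms you include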

This argument might seem unsatisfying, but I think there could be a close analogy between this argument and the basis for Occam's razor, on which all our knowledge about the world ultimately rests.

The problem of induction concerns the apparent impossibility of learning anything at all about the world from experience. In Machine Learning, it manifests itself as the No Free Lunch theorem. Suppose we see a coin tossed 99 times and it lands heads every time. What can we say about the probability that the next toss will be heads instead of tails? If we adopt the maximum entropy prior over the 2^100 possible sequences of 100 coin tosses, where each of the 2^100 sequences is equally likely, and then apply Bayes' theorem to our observations, we can say nothing. The probability of heads or tails on the 100th toss is still 50/50.

In order to make inferences about the unknown from the known, we need to start by assuming that not all of the 2^100 possible results are a priori equally likely. We start off believing, based on no evidence, that some possibilities are more likely than others. For example, we might typically assume that the coin has some constant unknown bias, p. But this might be too strong an assumption. What if this coin obeyed a rule where each toss was 99.9% likely to be the same as the immediately preceding toss? That seems a priori possible, and consistent with the observations, but would be ruled out by the constant unknown bias model. In general, the best way of describing the approach we actually take in these situations is that we pick a prior over the 100 coin tosses which is consistent with Occam's razor (and you might try to formalize this using Solomonoff induction). We assume that the world follows rules, and that simpler rules are a priori more likely to be true than more complex rules. Under this assumption, we can be justified in stating that the 100th coin toss is very likely to be heads.
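
To make the contrast concrete, here is a small sketch of the two models' predictions after 99 heads; the uniform Beta(1,1) prior on the bias is my choice of example for the 'constant unknown bias' model:

    # Probability of heads on the 100th toss, after observing 99 heads in a row.

    # 1) Maximum-entropy prior over all 2^100 sequences: every sequence is equally
    #    likely, so the first 99 tosses tell us nothing about the 100th.
    p_heads_max_entropy = 0.5

    # 2) Constant-unknown-bias model with a uniform Beta(1,1) prior on the bias:
    #    the posterior after h heads and t tails is Beta(1 + h, 1 + t), whose mean
    #    gives the predictive probability of heads (Laplace's rule of succession).
    h, t = 99, 0
    p_heads_constant_bias = (1 + h) / (2 + h + t)   # = 100/101, about 0.99

    print(p_heads_max_entropy, p_heads_constant_bias)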

Occam's razor seems to be necessary in order to learn anything at all about the world from experience, but it remains an assumption. It is something we must take for granted. It is extremely tempting to try to justify Occam's razor based on our past experience of the world, but that would be circular. We would be using induction to justify induction.

It is troubling to discover that all of our knowledge potentially rests on an unjustified assumption. It would certainly be convenient for us if the Occam's razor principle were valid, but is there any other reason for believing in it? Or are we engaging in motivated reasoning here? How could we go about trying to defend ourselves against this charge? I think the best defence would actually look very similar to the defence of dogmatic priors given above:

  • We want to have external reasons for our decisions, so we need learning to be possible.
  • Learning is only possible if the Occam's razor principle is true (I'm sure you could write many papers on whether this claim is true or not, or perhaps how to re-phrase it slightly so that it becomes true, but I'm just going to take it for granted).
  • We should therefore adopt priors consistent with Occam's razor.
  • If Occam's-razor-consistent priors are a good model for the world, we make good decisions, and if they're not, we were always doomed anyway.

Hopefully the analogy with dogmatism is clear.

Conclusion

I'd be very interested to read a more in-depth discussion of whether the so-called dogmatic approach is a reasonable response to the problem of recklessness or not. To me, it seems like the best candidate for resolving some of the thorny issues associated with Pascal mugger type problems.

On the face of it, dogmatism looks like it might involve irrationally extreme confidence that the world happens to be arranged in a certain way. But I think there could actually be a close analogy between adopting dogmatic priors, and adopting priors consistent with Occam's razor. Everyone happily does that already without fretting too much.

Comments

One problem with the dogmatic solution against infinities and (sub)distributions with infinite EV is that it means no finite amount of information could allow you to assign a nonzero probability to the possibilities.

Suppose you wake up in what appears to be Biblical Heaven (or Hell), with many of the details basically right. Years pass, and then decades, centuries and millennia. You'd still have to assign 0 probability to it lasting forever. Nothing could convince you to do otherwise.

Or, you watch us colonize space. All the evidence against a neverending (in time or space) universe was badly mistaken (e.g. the universe was never actually expanding, so heat death seems unlikely), and we make it past the edge of the currently observable universe (relative to here), going further than any specific finite model scientists propose, and have any specific reason to believe in, should allow. In that kind of situation, you'd still have to be absolutely certain it (the generation of value, or the counterfactual gain in value) will all end eventually, and assign 0 probability to heavy tails.

FWIW, unless the mugger is offering you infinite EV or gives you very persuasive evidence of their powers, there might be higher EV things you could pursue.

Personally if I had these experiences I would think it's much more likely that I have gone insane and that these experiences are hallucinations.

I would give substantial weight to that possibility in the case of Heaven, but I wouldn't absolutely rule out infinities 100%. I don't think the space colonization scenario should really make you doubt your sanity much, though, especially if the evidence we had so far never supported an end to the universe (rather than just undermining the evidence later). We can also posit a future human or agent instead and consider their POV. The most unusual experience in the space colonization scenario is the space colonization itself, not the revisions to our understanding of physics, which wouldn't really affect everyday life anyway.

The space colonization scenario doesn't offer the possibility of infinite utility though? Just very large but finite utility?

Regardless of whether or not you think the universe is infinite, my understanding is that the part of the universe that is reachable without faster-than-light travel is still finite.

I tried to set up the space colonization scenario so that the reachable universe (from now) grows without bound over time, but could have been clearer.

Even if it is bounded, if our descendants go on forever (or have an end date that's finite but has infinite expected value), that would still generate infinite expected utility.
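
(A toy illustration of that last point, with a distribution invented for the example: an end date can be finite with probability 1 and still have infinite expected value.)

    # Let the end date T satisfy P(T = t) proportional to 1/t^2, t = 1, 2, 3, ...
    # Every outcome is finite, but E[T] = sum over t of t * c/t^2 = c * sum of 1/t,
    # a harmonic series, so the partial expectations grow without bound.
    import math

    c = 6 / math.pi ** 2   # normalising constant for sum of 1/t^2
    for cutoff in [10**2, 10**4, 10**6]:
        partial_expectation = c * sum(1 / t for t in range(1, cutoff + 1))
        print(cutoff, round(partial_expectation, 2))
    # The printed values keep climbing (roughly like log(cutoff)); in the limit E[T]
    # is infinite, even though T itself is guaranteed to be finite.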

I think that's a very persuasive way to make the case against assigning 0 probability to infinities. I think I've got maybe three things I'd say in response which could address the problem you've raised:

  • I don't think we necessarily have to assign 0 probability to the universe being infinite (or even to an infinite afterlife), but only to our capacity to influence an infinite amount of utility with any given decision we're faced with, which is different in significant ways, and more acceptable sounding (to me).
  • Infinity is a tricky concept to grapple with. Even if I woke up in something which appeared to be Biblical Heaven or Hell, is that really convincing evidence that it is going to last literally forever? I'm not sure. Maybe. I'd certainly update to believe that there are very powerful forces at work that I don't understand, and that I'm going to be there a long time (if I've been there millennia already), but maybe it's not so irrational to continue to avoid facing up to infinite value.
  • The second point probably sounded like a stretch, but I think a fundamental part of my take on this is a particular view on what subjective probabilities mean: they fundamentally describe how I'm going to make decisions. They obey the axioms of probability theory, because certain axioms of rationality say they should, but their fundamental meaning is that they describe my decision-making behaviour. The interpretation in terms of the more abstract notion of a 'credence' is an optional add-on. Assigning 0 probability to the infinite is then simply me describing how I'm going to treat it for the purposes of decisions/gambles, rather than me saying that I am certain it can't be true.

I'd be interested in hearing you elaborate more on your final sentence (or if you've got any links you could point me to which elaborate on it).

(Also, I don't think heat death depends on an expanding universe. It's an endgame for existence that would require a change to the 2nd law of thermodynamics in order to escape, in my understanding.)

I think the original Pascal's wager addresses your first point (assuming you think a higher probability of infinity is infinitely better than a lower but still nonzero probability of infinity). Also, if our descendants will create infinite (or undefined) value in expectation, then extinction risk reduction plausibly affects the probability here. There's also acausal influence over infinitely many agents in an infinite universe, but that might be more tractable to treat as finite EVs, after some kind of averaging.

On your 2nd and 3rd points, I think we should aim for justified beliefs, and assigning 0 credence to infinities and to infinite-EV subdistributions doesn't seem justifiable to me. Is it reasonable for someone to have an arbitrarily strong prior that the average number of heads per coin flip (for a specific coin, flipped a million times) is 0.9, with no particular reason favouring heads over tails (or only motivated reasons), and to only barely adjust their beliefs as the average comes out to about 0.5? I guess, from the outside, with "more reasonable" beliefs, this person is predictably losing by their own lights. The same seems true of those assigning 0 credence to infinities, although it's much harder (maybe impossible?) to get good enough feedback to show they're losing, and one-shots seem pretty different from events that are basically repeated many times.

On my last sentence from my first comment, see my point 5 here:

https://forum.effectivealtruism.org/posts/qcqTJEfhsCDAxXzNf/what-reason-is-there-not-to-accept-pascal-s-wager?commentId=Ydbz56hhEwxg9aPh8

Also, see this post and discussion:

https://forum.effectivealtruism.org/posts/sEnkD8sHP6pZztFc2/fanatical-eas-should-support-very-weird-projects

And https://reducing-suffering.org/lab-universes-creating-infinite-suffering/

I guess I am calling into question the use of subjective probabilities to quantify beliefs.

I think subjective probabilities make sense in the context of decisions, to describe your decision-making behaviour (see e.g. Savage's derivation of probabilities from certain properties of decision-making he thinks we should abide by). But if you take the decisions out of the picture, and try to talk about 'beliefs' in the abstract, and try to get me to assign a real number between 0 and 1 to them, I think I am entitled to ask "why would I want to do something like that?" Especially if it's going to lead me into strange conclusions like "you should give your wallet to a Pascal mugger".

I think rejecting the whole business of assigning subjective probabilities to things is a very good way to reply to the Pascal's mugger problem in general. Its big weakness is that there are several strong arguments telling you that you should quantify subjective uncertainty with mathematical probabilities, of which I think Savage's is the strongest. But the fundamental interpretation of subjective probabilities in Savage's argument is that they describe how you will act in the context of decisions, not that they quantify some more abstract notion like a degree of belief (he deliberately avoids talking in those terms). For example, P(A)>P(B) fundamentally means that you will choose to receive a prize in the event that A occurs, rather than in the event that B occurs, if forced to choose between the two.

If that's what subjective probabilities fundamentally mean, then it doesn't seem necessarily absurd to assign zero probability to something that is conceivable. It at least doesn't violate any of Savage's axioms. It seems to violate our intuition of how the quantification of credences should behave, but I think I can reply to that by resorting to the "why would I want to do something like that?" argument. Quantifying credences is not actually what I'm doing. I'm trying to make decisions.

Thanks for the links! I'm still not sure I quite understand point 5. Is the idea that instead of giving my wallet to the mugger, I should donate it to somewhere else that I think has a more plausible sounding way of achieving infinite utility? I suppose that might be true, but doesn't really seem to solve the Pascal mugger problem to me, just reframe it a bit.

If you aren't (ever) using subjective probabilities to guide decisions, then what would you use instead and why? If you're sometimes using subjective probabilities, how do you decide when to and when not to, and why?

If that's what subjective probabilities fundamentally mean, then it doesn't seem necessarily absurd to assign zero probability to something that is conceivable. It at least doesn't violate any of Savage's axioms.

Unbounded utilities do violate Savage's axioms, though, I think because of St. Petersburg-like lotteries. Savage's axioms, because of completeness (you have to consider all functions from states to outcomes, so all lotteries), force your utility function and probability function to act in certain ways even over lotteries you would assign 0 probability to ever encountering. But you can drop the completeness axiom and assume away St. Petersburg-like lotteries, too. See also Toulet's An Axiomatic Model of Unbounded Utility Functions (which I only just found and haven't read).

I am comfortable using subjective probabilities to guide decisions, in the sense that I am happy with trying to assign to every possible event a real number between 0 and 1, which will describe how I will act when faced with gambles (I will maximize expected utility, if those numbers are interpreted as probabilities).

But the meaning of these numbers is that they describe my decision-making behaviour, not that they quantify a degree of belief. I am rejecting the use of subjective probabilities in that context, if it is removed from the context of decisions. I am rejecting the whole concept of a 'degree of belief', or of event A being 'more likely' than event B. Or at least, I am saying there is no meaning in those statements that goes beyond the meaning: 'I will choose to receive a prize if event A happens, rather than if event B happens, if forced to choose'.

And if that's all that probabilities mean, then it doesn't seem necessarily wrong to assign probability zero to something that is conceivable. I am simply describing how I will make decisions. In the Pascal mugger context: I would choose the chance of a prize in the event that I flip 1000 heads in a row on a fair coin, over the chance of a prize in the event that the mugger is correct.

That's still a potentially counter-intuitive conclusion to end up at, but it's a bullet I'm comfortable biting.  And I feel much happier doing this than I do if you define subjective probabilities in terms of degrees of belief. I believe this language obscures what subjective probabilities fundamentally are, and this, previously, made me needlessly worried that by assigning extreme probabilities, I was making some kind of grave epistemological error. In fact I'm just describing my decisions.

Savage's axioms don't rule out unbounded expected utility, I don't think. This is from Savage's book, 'Foundations of Statistics', Chapter 5, 'Utility', 'The extension of utility to more general acts':

"If the utility of consequences is unbounded, say from above, then, even in the presence of P1-7, acts (though not gambles) of infinite utility can easily be constructed. My personal feeling is that, theological questions aside, there are no acts of infinite or minus infinite utility, and that one might reasonable so postulate, which would amount to assuming utility to be bounded."

The distinction between 'acts' and 'gambles' is I think just that gambles are acts with a finite number of possible consequences (which obviously stops you constructing infinite expected value), but the postulates  themselves don't rule out infinite utility acts.

I'm obviously disagreeing with Savage's final remark in this post. I'm saying that you could also shift the 'no acts of infinite or minus infinite utility' constraint away from the utility function, and onto the probabilities themselves.

I think this doesn't really answer my question or is circular. I don't think that you decide how to act based on probabilities that come from how you decide to act, but that seems to be what you're saying if I interpret your response as an answer to my question. It might also justify any course of action, possibly even if you fix the utility function (I think the subjective probabilities would need to depend on things in very weird ways, though). I think you still want to be able to justify specific acts, and I want to know how you'll do this.

Maybe we can make this more explicit with an example. How do you decide which causes to prioritize? Or, pick an intervention, and how would you decide whether it is net positive or net negative? And do so without assigning probabilities as degrees of belief. How else are you going to come up with those probabilities? Or are you giving up probabilities as part of your procedure?

On Savage’s axioms, if your state space is infinite and your utility function is unbounded, then completeness requires the axioms to hold over acts that would have infinite expected utility, even if none is ever accessible to you in practice, and I think that would violate other axioms (the sure thing principle; if not Savage’s version, one that would be similarly irrational to violate; see https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12704 ). If your state space is finite and no outcome has infinite actual utility, then that seems to work, but I'm not sure you'd want to commit to a finite state space.

I still don't think the position I'm trying to defend is circular. I'll have a go at explaining why.

I'll start with answering your question: in practice, the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I'd look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes' theorem. Savage's axioms imply that my decision-describing numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Bayes' theorem. If there is any difference between our positions, it will only be in how we should pick our priors. You pick those before you look at any evidence at all. How should you do that?

Savage's axioms don't tell you how to pick your priors. But actually I don't know of any other principle that does either. If you're trying to quantify 'degrees of belief' in an abstract sense, I think you're sort of doomed (this is the problem of induction). My question for you is, how do you do that?

But we do have to make decisions. I want my decisions to be constrained by certain rational sounding axioms (like the sure thing principle), but I don't think I want to place many more constraints on myself than that. Even those fairly weak constraints turn out to imply that there are some numbers, which you can call subjective probabilities, that I need to start out with as priors over states of the world, and which I will then update in the usual Bayesian way. But there is very little constraint on how I pick those numbers. They have to obey the laws of probability theory, but that's quite a weak constraint. It doesn't by itself imply that I have to assign non-zero probability to things which are conceivable (e.g. if you pick a real number at random from the uniform distribution between 0 and 1, every possible outcome has probability 0).

So this is the way I'm thinking about the whole problem of forming beliefs and making decisions. I'm asking the question:

" I want to make decisions in a way that is consistent with certain rational seeming properties, what does that mean I must do, and what, if anything, is left unconstrained?"

I think I must make decisions in a Bayesian-expected-utility-maximising sort of way, but I don't think that I have to assign a non-zero probability to every conceivable event. In fact, if I make one of my desired properties be that I'm not susceptible to infinity-threatening Pascal muggers, then I shouldn't assign non-zero probability to situations that would allow me to influence infinite utility.

I don't think there is anything circular here.

Ok, this makes more sense to me.

FWIW, I think most of us go with our guts to assign probabilities most of the time, rather than formally picking priors, likelihoods and updating based on evidence. I tend to use ranges of probabilities and do sensitivity analysis instead of committing to precise probabilities, because precise probabilities also seem epistemically unjustified to me. I use reference classes sometimes.

Thanks for your post. This is exactly the resolution I've been advocating for years. Can you provide links to some of those papers that consider this position and reject it? I've never seen a convincing argument against this.

The two specific examples that come to mind where I've seen dogmatism discussed and rejected (or at least not enthusiastically endorsed) are these:

The first is not actually a paper, and to be fair I think Hajek ends up being pretty sympathetic to the view that in practice, maybe we do just have to be dogmatic. But my impression was it was a sort of reluctant thing, and I came away with the impression that dogmatism is supposed to have major problems.

In the second, I believe dogmatism is treated quite dismissively, although it's been a while since I've read it and I may have misunderstood it even then!

So I may be summarising these resources incorrectly, but iirc they are both really good, and would recommend checking them out if you haven't already!

dogmatism is the most promising way to justify the obvious fact that it is not irrational to refuse to hand over your wallet to a Pascal mugger. (If anyone disagrees that this is an obvious fact, please get in touch, and be prepared to hand over lots of cash).

There is another way out. We can agree that it is rational to hand over the wallet and thank heavens that we’re lucky not to be rational. I’m convinced by things like Kavka’s poison paradox and Newcomb’s paradox that sometimes it sucks to be rational. Maybe Pascal’s mugger is one of those cases.

Occam's razor seems to be necessary in order to learn anything at all about the world from experience, but it remains an assumption.

There are plenty of other assumptions that would allow learning. For any specific complex way the world might be, x, we are able to learn given an assumption of bias toward simplicity for every hypothesis except for x and a bias for x. If all you have to justify Occam’s razor is overall usability, you’ve got very little reason to prefer it to nearby aberrations.

Thanks for your comment, these are good points!

First, I think there is an important difference between Pascal's mugger, and Kavka's poison/Newcomb's paradox. The latter two are examples of ways in which a theory of rationality might be indirectly self-defeating. That means: if we try to achieve the aims given to us by the theory, they can sometimes be worse achieved than if we had followed a different theory instead. This means there is a sense in which the theory is failing on its own terms. It's troubling when theories of rationality or ethics have this property, but actually any theory will have this property in some conceivable circumstances, because of Parfit's Satan thought experiment (if you're not familiar, do a ctrl+F for Satan here: https://www.stafforini.com/docs/Parfit%20-%20Reasons%20and%20persons.pdf ; it doesn't seem to have a specific Wikipedia article that I can find).

Pascal's mugger seems like a different category of problem. The naive expected utility maximizing course of action (without dogmatism) seems absurd, but not because it is self-defeating. The theory is actually doing well on its own terms. It is just that those terms seem absurd. I think the Pascal mugger scenario should therefore present more of a problem for the expected utility theory, than the Kavka's poison/Newcomb's paradox thought experiments do.

On your second point, I don't have a good reply. I know there's probably gaping holes in the defence of Occam's razor I gave in the post, and that's a good example of why.

I'm very interested though, do you know a better justification for Occam's razor than usability?

The theory is actually doing well on its own terms.

Can you expand on what you mean by this? I would think that expected utility maximization is doing well insofar as your utility is high. If you take a lot of risky bets, you're doing well if a few pay off. If you always pay the mugger, you probably think your decision theory is screwing you, unless you find yourself in one of those rare situations where the mugger's promises are real.

I'm very interested though, do you know a better justification for Occam's razor than usability?

I don't. I'm more or less in the same boat: I wish there was a better justification, and I'm inclined to continue using it because I have to (because there is no clear alternative, because it is human nature, etc.).

Let's assume for the moment that the probabilities involved are known with certainty. If I understand your original 'way out' correctly, then it would apply just as well in this case. You would embrace being irrational and still refuse to give the mugger your wallet. But I think here, the recommendations of expected utility theory in a Pascal's mugger situation are doing well 'on their own terms'. This is because expected utility theory doesn't tell you to maximize the probability of increasing your utility, it tells you to maximize your utility in expectation, and that's exactly what handing over your wallet to the mugger does. And if enough people repeated it enough times, some of them would eventually find themselves in a rare situation where the mugger's promises were real.
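
For concreteness, with numbers invented for the example and the tiny probability stipulated to be known exactly:

    # If the tiny probability really is known, expected utility maximisation says to pay.
    p_mugger_honest = 1e-12    # stipulated, known probability that the mugger delivers
    payoff_if_honest = 1e15    # utility delivered if they do
    cost_of_wallet = 10        # utility lost by handing over the wallet

    ev_pay = p_mugger_honest * payoff_if_honest - cost_of_wallet   # 1000 - 10 = 990
    ev_refuse = 0.0
    print(ev_pay, ev_refuse)   # paying wins in expectation, even though you almost
                               # certainly just lose the wallet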

In reality, the probabilities involved are not known. That's an added complication which gives you a different way out of having to hand over your wallet, and that's the way out I'm advocating we take in this post.

Rather than assigning 0 probability to infinities or infinite impacts, if you're set on sticking to unbounded utility, I'd endorse just allowing (not necessarily requiring) yourself to ignore probabilities up to  some small threshold for any option, including tiny probability differences for outcomes that aren't very unlikely.

I wouldn't want to keep ignoring the possibility of infinities if I believed I could make an actual infinite difference with a large enough probability. Giving this up seems worse to me than deviating slightly (more) from the ideal of vNM or Savage rationality. That ideal was also never realistic to meet, anyway.

This solution also works for St. Petersburg lotteries and Pascal's muggings, although you might also have independent reasons to pick a very skeptical prior.

Related: Monton, How to Avoid Maximizing Expected Utility.

May be of interest, from Paul Christiano:

I think it's also worth observing that although St Petersburg cases are famously paradox-riddled, these cases seem overwhelmingly important on a conventional utilitarian view even before we consider any exotic hypotheses. Indeed, I personally became unhappy with unbounded utilities not because of impossibility results but because I tried to answer questions like "How valuable is it to accelerate technological progress?" or "How bad is it if unaligned AI takes over the world?" and immediately found that EU maximization with anything like "utility linear in population size" seemed to be unworkable in practice. I could find no sort of common-sensical regularization that let me get coherent answers out of these theories, and I'm not sure what it would look like in practice to try to use them to guide our actions.

The dogmatic solution has its charm, but it's incompatible with a minimal scepticism.

The idea is that there's always a possibility everything you see is unreal because some powerful demons or deities will it so. Therefore, we have a reason not to give any statement 100% (or 0) credence just in case.

I think this comes down to the question of what subjective probabilities actually are. If something is conceivable, do we have to give it a probability greater than 0? This post is basically asking, why should we?

The main reason I'm comfortable adapting my priors to be dogmatic is that I think there is probably not a purely epistemological 'correct' prior anyway (essentially because of the problem of induction), and the best we can do is pick priors that might help us to make practical decisions.

I'm not sure subjective probabilities can necessarily be given much meaning outside of the context of decision theory anyway. The best defence I know for the use of subjective probabilities to quantify uncertainty is due to Savage, and in that defence decisions are central. Subjective probabilities fundamentally describe decision making behaviour (P(A) > P(B) means someone will choose to receive a prize if A occurs, rather than if B occurs, if forced to choose between the two).

And when I say that some infinite utility scenario has probability 0, I am not saying it is inconceivable, but merely describing the approach I am going to take to making decisions about it: I'm not going to be manipulated by a Pascal's wager type argument.

Makes sense!

I'm not sure I'm understanding. It looks like at some K, you arbitrarily decide that the probability is zero, sooner than the table in the paper suggests. So, in the thought experiment, God decides what the probability is, but you decide that at some K, the probability is zero, even though the table lists the N at which the probability is zero, where N > K. Is that correct?

Another way to look at this problem is with respect to whether what is gained through accepting a wager for a specific value is of value to you. The thought experiment assumes that you can gain very large amounts, and that no matter how high the accumulated value at N, the end of the game, you still have a use for the amount that you could, in principle, gain.

However, for any valuable thing I can think of (years of life, money, puppies, cars), there's some sweet spot, with respect to me in particular. I could desire 100 years of life but not 1000, or 10 cars but not 100, or fifty million dollars but not five hundred million dollars, or one puppy but not ten. Accordingly, then, I know how much value to try to gain.

Assuming some pre-existing need, want, or "sweet spot", then, I can look at the value at i, where at i the value meets my need. If N < i, the question becomes whether I still gain if I get less value than I want. If N > i, then I know to take a risk up to K, where K = i and K < N. If N = i, then I know to play the game (God's game) to the end.

In real life, people don't benefit past some accumulation of valuable something, and what matters is deciding what level past which an accumulation is wasteful or counterproductive. One hundred cars would be too much trouble, even one puppy is a lot of puppies when you have to clean up puppy poop, and why not $500,000,000? Well, that's just more than I need, and would be more of a burden than a help. Put differently, if I really needed big sums, I'd take a risks for up to that amount, but no higher. When would I need such big sums and take the accompanying big risks? Maybe if I owed a bookie $50,000,000 and the bookie had very unpleasant collectors?

If we know the probabilities with certainty somehow (because God tells us, or whatever) then dogmatism doesn't help us avoid reckless conclusions. But it's an explanation for how we can avoid most reckless conclusions in practice (it's why I used the word 'loophole', rather than 'flaw'). So if someone comes up and utters the Pascal's mugger line to you on the street in the real world, or maybe if someone makes an argument for very strong longtermism, you could reject it on dogmatic grounds.

On your point about diminishing returns to utility preventing recklessness, I think that's a very good point if you're making decisions for yourself. But what about when you're doing ethics? So deciding which charities to give to, for example? If some action affecting N individuals has utility X, then some action affecting 2N individuals should have utility 2X. And if you accept that, then suddenly your utility function is unbounded, and you are now open to all these reckless and fanatical thought experiments.

You don't even need a particular view on population ethics for this. The Pascal mugger could tell you that the people they are threatening to torture/reward already exist in some alternate reality.

Hm, ok. Couldn't Pascal's mugger make a claim to actually being God (with some small probability or very weakly plausibly) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of claims from the person whose claims you reject. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and accurate probabilism are typical expressions of delusions of grandeur.

My pursuit of giving to charity is not unbounded, because I don't perceive an unbounded need. If the charity were meant to drive unbounded increase in the numbers of those receiving charity, that would be a special case, and not one that I would sign up for. But putting aside truly infinite growth of perceived need for the value returned by the wager, in all wagers of this sort that anyone could undertake, they establish a needed level of utility, and compare the risks to the stakeholders involved of taking the wager at that utility level against the risks of doing nothing, or of wagering for less than the required level.

In the case of ethics, you could add an additional bounds on personal risk that you would endure despite the full need of those who could receive your charity. In other words, there's only so much risk you would take on behalf of others. How you decide that should be up to you. You could want to help a certain number of people, or reach a specific milestone towards a larger goal, or meet a specific need for everyone, or spend a specific amount of money, or whathaveyou, and recognize that level of charity as worth the risks involved to you of acquiring the corresponding utility. You just have to figure it out beforehand.

If by living 100 years, I could accomplish something significant, but not everything, on behalf of others, that I wanted, but I would not personally enjoy that time, then that subjective decision makes living past 100 years unattractive, if I'm deciding solely based on my charitable intent. I would not, in fact, live an extra 100 years for such a purpose without meeting additional criteria, but for example's sake, I offered it.

I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger.

But you haven't really avoided the problem, just re-phrased it slightly. Whatever the amount of money you would be willing to risk for others, then on expected utility terms, it seems better to give it to the mugger, than to an excellent charity, such as the Against Malaria Foundation. In this framing of the problem, the mugger is now effectively robbing the AMF, rather than you, but the problem is still there.

In my understanding, Pascal's Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity, in order to accomplish something. Let's assume that I don't have the money sufficient for that donation, and have no other way to get that money. Ever. I don't care to spend the money I do have on anything else. Then, thinking altruistically, I'll keep negotiating with Pascal's Mugger until we agree on an amount that the mugger will return that, if I earn it, is sufficient to make that charitable donation. All I've done is establish what amount to get in return from the Mugger before I give the mugger my wallet cash. Whether the mugger is my only source of extra money, and whether there is any other risk in losing the money I do have, and whether I already have enough money to make some difference if I donate, is not in question. Notice that some people might object that my choice is irrational. However, the mugger is my only source of money, and I don't have enough money otherwise to do anything that I care about for others, and I'm not considering consequences to me of losing the money.

In Yudkowsky's formulation, the Mugger is threatening to harm a bunch of people, but with very low probability. Ok. I'm supposed to arrive at an amount that I would give to help those people threatened with that improbable risk, right? In the thought experiment, I am altruistic. I decide what the probability of the Mugger's threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?
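
The arithmetic behind that, with an assumed credence schedule (the 1/N^2 here is just one way to make p shrink faster than 1/N):

    # If your credence p(N) in the mugger's threat shrinks faster than 1/N, the expected
    # number of people harmed, p(N) * N, goes to zero as the claimed N grows.
    def credence(n_threatened):
        return 1.0 / (n_threatened ** 2)   # assumed dogmatic credence schedule

    for n in [10**3, 10**9, 10**18]:
        print(n, credence(n) * n)   # 0.001, 1e-09, 1e-18: a donation with ordinary,
                                    # well-evidenced impact beats this on expected value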

You wrote,

"I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger."

The threshold of risk you refer to there is the additional selfish one that I referred to in my last comment, where loss of the money in an altruistic effort deprives me of some personal need that the money could have served, an opportunity cost of wagering for more money with the mugger. That risk could be a high threshold of risk even if the monetary amount is low. Let's say I owe a bookie 5 dollars and if I don't repay they'll break my legs. Therefore, even though I could give the mugger 5 dollars and, in my estimation, save some lives, I won't. Because the 5 dollars is all I have and I need it to repay the bookie. That personal need to protect myself from the bookie defines that threshold of risk. Or more likely, it's my rent money, and without it, I'm turned out onto predatory streets. Or it's my food money for the week, or my retirement money, or something else that pays for something integral to my well-being. That's when that personal threshold is meaningful.

Many situations could come along offering astronomical altruistic returns, but if taking risks for those returns will incur high personal costs, then I'm not interested in those returns. This is why someone with a limited income or savings typically shouldn't make bets. It's also why Effective Altruism's betting focus makes no sense for bets with sizes that impact a person's well-being when the bets are lost. I think it's also why, in the end, EA's don't put their money where their mouths are.

EA's don't make large bets or they don't make bets that risk their well-being. Their "big risks" are not that big, to them. Or they truly have a betting problem, I suppose. It's just that EA's claim that betting money clarifies odds because EA's start worrying about opportunity costs, but does it? I think the amounts involved don't clarify anything, they're not important amounts to the people placing bets. What you end up with is a betting culture, where unimportant bets go on leading to limited impact on bayesian thinking, at best, to compulsive betting and major personal losses, at worst. By the way, Singer's utilitarian ideal was never to bankrupt people. Actually, it was to accomplish charity cost-effectively, implicitly including personal costs in that calculus (for example, by scaling % income that you give to help charitable causes according to your income size). Just an aside.

When you write:

"I decide what the probability of the Mugger's threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?"

This is exactly the 'dogmatic' response to the mugger that I am trying to defend in this post! We are in complete agreement, I believe!

For possible problems with this view, see other comments that have been left, especially by MichaelStJules.

Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:

  • probability that you assign to the Mugger's threat
  • probability that the Mugger or a third party assigns to the Mugger's threat

Although I'm not a fan of subjective probabilities, that could be because I don't make a lot of wagers.

There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situation has one outcome. The current context could allow multiple outcomes, each matching a different prototypical situation. How do I decide which situation is the "best" match?

  • a fuzzy matching: a percentage quantity showing degree of match between prototype and actual situation. This seems the least intuitive to me. The conflation of multiple types and strengths of evidence (of match) into a single numeric system (for example, that bit of evidence is worth 5%, that is worth 10%) is hard to justify.
  • a Hamming distance: each binary digit is a yes/no answer to a question. The questions could be partitioned, with the partitions ranked, and then Hamming distances calculated for each ranked partition, between the answers about the situation in question and the answers identifying a prototypical situation (a small sketch follows this list).
  • a decision tree: each situation could be checked for specific values of attributes of the actual context, yielding a final "matches prototypical situation X" or "doesn't match prototypical situation X" along different paths of the tree. The decision tree is most intuitive to me, and does not involve any sums.
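
A small sketch of the Hamming-distance idea, with questions and prototype answers invented purely for illustration:

    # Hamming distance between yes/no answers describing the current situation and the
    # answers characterising each prototypical situation; smaller distance = closer match.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Invented questions: stranger? asking for money? on the street?
    # claiming supernatural powers? offering verifiable evidence?
    prototype_payment_for_services = (0, 1, 0, 0, 1)
    prototype_street_mugging       = (1, 1, 1, 1, 0)
    current_situation              = (1, 1, 1, 1, 0)

    print(hamming(current_situation, prototype_payment_for_services))  # 4
    print(hamming(current_situation, prototype_street_mugging))        # 0: best match,
                                                                        # suggesting "don't give"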

In this case, the context is one where you decide whether to give any money to the mugger, and the prototypical context is a payment for services or a bribe. If it were me, the fact that the mugger is a mugger on the street yields the belief "don't give" because, even if I gave them the money, they'd not do whatever it is that they promise anyway. That information would appear in a decision tree, somewhere near the top, as "person asking for money is a criminal?(Y/N)"
