
In this post I argue that we should make all our decisions on the basis of how likely a given set of actions is to elicit a positive infinite utility. I then suggest some possible mechanisms and actions that we might want to consider given this. Lastly, I offer some responses to anticipated objections.

Note that in order to keep my discussion simple I assume that we are totally sure that a moral realist form of hedonistic utilitarianism is true. I don't think the method of analysis used would need to change if we relaxed this assumption to a different form of consequentialism.

Argument

Many here will be sympathetic to the following claim:

(1) An agent ought to take the set of actions which maximises global expected utility

However, ignoring the possibility of infinite negative utilities (see objection (e) below for more on this), all possible actions seem to have infinite positive utility in expectation. For all actions have a non-zero chance of resulting in infinite positive utility. For instance, it seems that for any action there's a very small chance that I might get an infinite bliss pill as a result.

As such, classical expected utility theory won't be action guiding unless we add an additional decision rule: that we ought to pick the action which is most likely to bring about the infinite utility. This addition seems intuitive to me: imagine two bets, one where there is a 0.99 chance of getting infinite utility and one where there is a 0.01 chance. It seems irrational not to take the 0.99 deal even though they have the same expected utility. Therefore it seems that we should really be sympathetic to:

(2) An agent ought to take the set of actions which makes it most likely that infinite utility results
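
To make the two-bets comparison concrete, here is a minimal worked example using the numbers above (with $u_1$ and $u_2$ as my own placeholders for whatever finite utility results if the infinite payoff doesn't come off). Any non-zero chance of an infinite payoff makes the expectation infinite, so expected utility alone can't separate the bets:

$$EU(\text{bet}_1) = 0.99 \cdot \infty + 0.01 \cdot u_1 = \infty, \qquad EU(\text{bet}_2) = 0.01 \cdot \infty + 0.99 \cdot u_2 = \infty.$$

The rule in (2) breaks the tie by comparing the probabilities of the infinite outcome directly, and so picks the first bet, since $0.99 > 0.01$.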

If you're sympathetic to (2), the question we now need to consider is what things are most likely to elicit the infinite utility. One imperfect but useful taxonomy is that there are two types of relevant options: direct and indirect options.

Direct options are the proximate causes which would elicit the infinite utility, e.g. the action of swallowing an infinite bliss pill. Indirect options are those that make it more likely we get to the stage of being able to take a direct option, e.g. avoiding existential risk so that humanity has longer to search for the infinite bliss pill.

It's quite possible that the indirect options dominate all the direct options we can currently think of. However, even if the indirect options are our better hope, there may also be some direct options that we could (nearly) costlessly add to our best indirect-option action set.

The rest of this post will focus on what the most plausible direct options for eliciting the infinite utility might be. For simplicity's sake I'll continue to assume that we are justified in having a credence of one in a moral realist form of hedonic utilitarianism being the correct moral theory.

Three potential mechanisms for bringing about infinite utility initially spring to mind that we should pay attention to: a vastly powerful superintelligence; the God of classical theism (or something similar); or some scientific quirk which ends up in infinite replications, e.g. perhaps creating a maximal multiverse. I don't pretend that there couldn't be other mechanisms which could create actual infinities, but these are the most plausible ones that spring to mind; feel free to suggest others. Note that currently unknown mechanisms are unlikely to be action guiding for us when it comes to direct options in our current position.

Given these potential mechanisms, the relevant options that might be worth exploring are:

(i) Working towards creating a superintelligence with the best chance of having the computational power necessary to create an infinite number of positive-valence experience moments. (Perhaps this is sufficiently far off that really it's an indirect option.)

Quick review: if we help engineer a superintelligence quickly and badly then it might reduce rather than increase our chances of eliciting an infinite utility in the long run.

(ii) Taking the set of options that the 'revealed' religions suggest might elicit the infinite number of positive-valence experience moments (typically described as heaven). The commonly suggested routes to achieve this are conversion and good behaviour. We'd presumably want to make sure that a good number of people converted to each of the theisms in order to have the best chance of at least one of them eliciting the infinite utility.

Quick review: this action might already be covered; there are a lot of religious believers, and if those religions are correct then the infinite utility will probably have already been secured. Still, it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances.

(iii) Presumably doing more foundational scientific research to understand how we might unlock an infinite replication dynamic? However, this would probably fall into the indirect-options category.

Quick review: this seems inoffensive and a good idea generally.

Responses to anticipated objections

Objection (a): There is good reason to believe that there is already infinite utility out there, so your argument would just mean that we don't need to do anything else.

Response (a): Unless you have a credence of one that this is the case, your best option is still to do all you can to try to push the probability of an infinite utility as close to one as possible, so the ideas in this blog are still going to be decision relevant.


Objection (b): This is just a Pascal's Mugging. I have reason to think that Pascal's Mugging style arguments are always unsuccessful.

Response (b): If you do have some knock-down argument against Pascal's Mugging then maybe you can dismiss this post (and please do tell me the argument!). However, if you're at all uncertain about your argument against Pascal's Mugging then you might want to consider the post above as insurance in case you are wrong.


Objection (c): I have a zero credence that morality objectively requires us to do anything as I'm a 100% convinced moral anti-realist. So your argument doesn't bite with me.

Response (c): You can run a version of the above argument not on the basis of an infinite utility being achieved by *someone* (the morality version) but instead on the basis of *you* achieving the infinite utility (the prudential version). If you think you have normative reason to maximise your expected utility then the above post will be relevant for you, though the suggestions will be different. Presumably direct options become much more attractive than indirect options, as you'll need to get yourself the infinite utility before you cease to exist.


Objection (d): Your post doesn't take into account different cardinalities of infinity. We shouldn't be aiming for just any possible positive infinity but only the highest cardinality of infinity.

Response (d): This seems plausible to me, though I don't understand the maths/philosophy of infinity well enough to have a strong view. If we should pursue higher-cardinality infinities over lower ones then I expect this means we should just focus on God mechanisms, as Gods, if they exist, are presumably more likely to have access to higher cardinalities of infinity than anything else.


Objection (e): You conveniently ignore possibilities of infinite negative utilities. These wreck your analysis.

Response (e): I imagine they might. My understanding from Alan Hájek on this is that any action which has a non-zero chance of bringing about both a positive and a negative infinite utility would have an undefined expected utility. My view is that this will probably be true of all actions, and so all actions would have an undefined expected utility. However, if that's the case then this fact won't be decision relevant. So perhaps it is rational to make decisions on the basis of bracketing the possibility of either positive or negative infinite utilities (but not both). I'd be very interested in people's views here.
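
To spell the Hájek point out slightly more formally (a sketch of the idea as I understand it, not his exact formulation): if an action has probability $p > 0$ of bringing about positive infinite utility and probability $q > 0$ of bringing about negative infinite utility, then its expectation contains the term

$$p \cdot (+\infty) + q \cdot (-\infty),$$

and since $\infty - \infty$ has no defined value, the whole expected utility is undefined, however small $p$ and $q$ are.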

Comments

> For all actions have a non-zero chance of resulting in infinite positive utility.

Human utility functions seem clearly inconsistent with infinite utility. See Alex Mennen's "Against the Linear Utility Hypothesis and the Leverage Penalty" for arguments.

I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate as infinite utility.

> Human utility functions seem clearly inconsistent with infinite utility.

If you're not 100% sure that they are inconsistent then presumably my argument is still going to go through, because you'll have a non-zero credence that actions can elicit infinite utilities and so are infinite in expectation?

> I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate as infinite utility.

So maybe from the self-interest perspective you discount future experiences. However, from a moral perspective that doesn't seem too relevant: these are experiences and they count the same, so if there are an infinite number of positive experiences then they would sum to an infinite utility. Also note that even if your argument applied in the moral realm too, unless you're 100% sure it does, my reply to your other point will work here too?

I think it's more appropriate to use Bostrom's Moral Parliament to deal with conflicting moral theories.

Your approach might be right if the theories you're comparing used the same concept of utility, and merely disagreed about what people would experience.

But I expect that the concept of utility which best matches human interests will say that "infinite utility" doesn't make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.


Similarly, I use a dealist approach to morality. If you show me an argument that there's an objective morality which requires me to increase the probability of infinite utility, I'll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom's parliament than like your approach.

> Still, it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances.

I'm very sympathetic to the idea that all we ought to be doing is to maximize the probability we achieve an infinite amount of value. And I'm also sympathetic to religion as a possible action plan there; the argument does not warrant the "incredulous stares" it typically gets in EA. But I don't think it's as simple as the above quote, for at least two reasons.

First, religious belief, broadly specified, could more often create infinite amounts of disvalue than infinite amounts of value, from a religious perspective. Consider for example the scenario in which non-believers get nothing, believers in the true god get plus infinity, and believers in false gods get minus infinity. Introducing negative infinities does wreck the analysis if we insist on maximizing expected utility, as Hájek points out, but not if we switch from EU to a decision theory based on stochastic dominance.
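
As a rough illustration of how stochastic dominance can still rank such gambles (my own made-up numbers): compare lottery $A$, which gives $+\infty$ with probability 0.5 and $-\infty$ with probability 0.5, with lottery $B$, which gives $+\infty$ with probability 0.6 and $-\infty$ with probability 0.4. For every outcome level, $B$ gives at least as high a probability of doing at least that well:

$$P(B \geq x) \;\geq\; P(A \geq x) \quad \text{for every outcome } x,$$

with strict inequality for some $x$, so $B$ first-order stochastically dominates $A$ even though both lotteries have undefined expected utility.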

Second, and I think more importantly, religiosity might lower the probability of achieving infinite amounts of value in other ways. Belief in an imminent Second Coming, for instance, might lower the probability that we manage to create a civilization that lasts forever (and manages to permanently abolish suffering after a finite period).

Thanks @trammell.

Will read up on stochastic dominance; it will presumably bring me back to my micro days thinking about lotteries...

Note that I think there may be a way of dealing with it whilst staying in the expected utility framework, where we ignore undefined expected utilities as they are not action guiding and instead focus on the part of our probability space where they don't emerge. In this case I suggest we only focus on worlds in which you can't have both negative and positive infinities. We'd assume in our analysis that only one of them exists (you'd choose the one which is more plausible to exist on its own). Interested to hear whether you think that's plausible.
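
One way to write that proposal down (just my own attempt at formalising it, so treat it as a sketch): let $B$ be the bracketing assumption that only positive infinities are possible (taking that to be the more plausible one to exist on its own); then rule (2) becomes: choose the set of actions $a$ that maximises

$$P(\text{infinite positive utility results} \mid a, B),$$

so that undefined mixed-infinity expectations never enter the comparison.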

On your second point I guess I doubt that sending a couple of thousand people into each religion would have big enough negative indirect effects to make it net negative. Obviously this would be hard to assess but I imagine we'd agree on the methodology?

I was just saying that, thankfully, I don’t think our decision problem is wrecked by the negative infinity cases, or the cases in which there are infinite amounts of positive and negative value. If it were, though, then okay—I’m not sure what the right response would be, but your approach of excluding everything from analysis but the “positive infinity only” cases (and not letting multiple infinities count for more) seems as reasonable as any, I suppose.

Within that framework, sure, having a few thousand believers in each religion would be better than having none. (It’s also better than having everyone believe in whichever religion seems most likely, of course.) I was just taking issue with “it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances”.

Presumably this system would suggest we should encourage people to believe in a wide variety of religions, if one believer is all we need for infinite utility. Rather than converting more people to Catholicism we'd spend our time inventing/discovering new religions and converting one person to each.

Yep that seems right, though you might want more than one believer in each in case one of the assigned people messes it up somehow.
