
Problems and Solutions in Infinite Ethics

Summary: The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective.

As with everything in life, the canonical reference in philosophy about this problem was written by Nick Bostrom. However, I found that an area of economics known as "sustainable development" has actually made much further progress on this subject than the philosophy world. In this post I go over some of what I consider to be the most interesting results.

NB: This assumes a lot of mathematical literacy and familiarity with the subject matter, and hence isn't targeted to a general audience. Most people will probably prefer to read my other posts.


1. Summary of the most interesting results

  1. There’s no ethical system which incorporates all the things we might want.
  2. Even if we have pretty minimal requirements, satisfactory ethical systems might exist, but we can’t prove their existence, much less actually construct them.
  3. Discounted utilitarianism, whereby we value people less just because they are further away in time, is actually a pretty reasonable thing despite philosophers considering it ridiculous.
    1. (I consider this to be the first reasonable argument for locavorism I've ever heard)

2. Definitions

In general, we consider a population to consist of an infinite utility vector (u_0, u_1, …) where u_i is the aggregate utility of the generation alive at time i. Utility is a bounded real number (the fact that economists assume utility to be bounded confused me for a long time!). Our goal is to find a preference ordering over the set of all utility vectors which is in some sense “reasonable”. While philosophers have understood for a long time that finding such an ordering is difficult, I will present several theorems which show that it is in fact impossible.

Due to a lack of latex support I’m going to give English-language definitions and results instead of math-ey ones; interested people should look at the papers themselves anyway.
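To fix ideas, here is a minimal Python sketch of the objects involved. It is purely illustrative: the theory deals with genuinely infinite streams, whereas code can only hold a finite prefix, and names like UtilityStream and total_utility_swf are just labels I've made up for this sketch.

```python
from typing import Callable, Sequence

# A population is an infinite stream (u_0, u_1, ...) with each u_i a bounded
# real number; in code we can only ever hold a finite prefix of such a stream.
UtilityStream = Sequence[float]

# A preference ordering compares two streams ("x is at least as good as y");
# a social welfare function assigns each stream a real number in an
# order-preserving way.
Preference = Callable[[UtilityStream, UtilityStream], bool]

def total_utility_swf(prefix: UtilityStream) -> float:
    """Total utility: a perfectly good social welfare function for finite
    prefixes. The impossibility results below are about whether anything
    like it survives the move to infinite streams."""
    return sum(prefix)
```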

3. Impossibility Results

3.0 Specific defs

  • Strong Pareto: if you can make a generation better off, and none worse off, you should.
  • Weak Pareto: if you can make every generation better off, you should.
  • Intergenerational equity: utility vectors are unchanged in value by any permutation of their components.
    • There is an important distinction here between allowing a finite number of elements to be permuted and an infinite number; I will refer to the former as “finite intergenerational equity” and the latter as just “intergenerational equity”
  • Ethical relation: one which obeys both weak Pareto and finite intergenerational equity
  • Social welfare function: an order-preserving function from the set of populations (utility vectors) to the real numbers
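As a concrete illustration of how the Pareto conditions and finite permutations cash out, here is a small sketch on finite prefixes of two streams; the example vectors are hypothetical and the functions are spot-checks of the definitions, not anything from the papers.

```python
# Illustrative only: we check the conditions on finite prefixes and assume
# the two streams agree beyond the prefix.

def strong_pareto_improvement(prefix_x, prefix_y):
    """x improves on y if every generation is at least as well off and at
    least one generation is strictly better off."""
    pairs = list(zip(prefix_x, prefix_y))
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

def weak_pareto_improvement(prefix_x, prefix_y):
    """x improves on y if every generation is strictly better off."""
    return all(x > y for x, y in zip(prefix_x, prefix_y))

def finite_permutation_equivalent(prefix_x, prefix_y):
    """Finite intergenerational equity: reordering finitely many generations
    should leave a population's value unchanged, so two prefixes that are
    rearrangements of each other should be ranked as equal."""
    return sorted(prefix_x) == sorted(prefix_y)

x, y = [0.6, 0.7, 0.8], [0.6, 0.6, 0.8]
print(strong_pareto_improvement(x, y))  # True: generation 1 is better off, none worse
print(weak_pareto_improvement(x, y))    # False: generations 0 and 2 are only tied
print(finite_permutation_equivalent([0.1, 0.9, 0.5], [0.9, 0.5, 0.1]))  # True
```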

3.1 Diamond-Basu-Mitra Impossibility Result (1)

  1. There is no social welfare function which obeys Strong Pareto and finite intergenerational equity. This means that any sort of utilitarianism won’t work, unless we look outside the real numbers.

3.2 Zame's Impossibility Result (2)

  1. If an ordering obeys finite intergenerational equity over [0,1]^N, then almost always we can’t tell which of two populations is better
    1. (i.e. the set of pairs of populations {X,Y: neither X<Y nor X>Y} has outer measure one)
  2. The existence of an ethical preference relation on [0,1]^N is independent of ZF plus the axiom of choice.

4. Possibility Results

We’ve just shown that it’s impossible to construct or even prove the existence of any useful ethical system. But not all hope is lost!

The important idea here is that of a “subrelation”: < is a subrelation of <’ if x<y implies x<’y.

Our arguments will work like this:

Suppose we could extend utilitarianism to the infinite case. (We don't, of course, know that we can; but suppose we could.) Then A, B and C must follow.

Technically: suppose utilitarianism is a subrelation of <. Then < must have properties A, B and C.
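As a toy illustration of what the subrelation claim amounts to (assuming each ordering is represented as a boolean comparison function), we can spot-check on a handful of hypothetical finite streams that every strict preference of the weaker relation is preserved by the stronger one. The real theorems, of course, quantify over all streams, not a sample.

```python
# Toy check of the subrelation idea: "weaker" is a subrelation of "stronger"
# if every strict preference of the former is also a preference of the latter.
# We can only spot-check this on a finite sample of hypothetical streams.

def is_subrelation_on_sample(weaker_prefers, stronger_prefers, sample_pairs):
    return all(stronger_prefers(x, y)
               for x, y in sample_pairs
               if weaker_prefers(x, y))

weak_pareto = lambda x, y: all(a > b for a, b in zip(x, y))
strong_pareto = lambda x, y: (all(a >= b for a, b in zip(x, y))
                              and any(a > b for a, b in zip(x, y)))
sample = [([1.0, 1.0], [0.5, 0.5]), ([0.2, 0.9], [0.3, 0.1]), ([0.4, 0.4], [0.4, 0.4])]
print(is_subrelation_on_sample(weak_pareto, strong_pareto, sample))  # True
```

(Every weak Pareto improvement is in particular a strong Pareto improvement, which is why the sample check passes.)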

Everything in this section comes from (3). This is a great review of the literature.

4.1 Definition

  • Utilitarianism: we extend the standard total utilitarianism ordering to infinite populations in the following way: suppose there is some time T after which each generation in X is at least as well off as the corresponding generation in Y, and that the total utility in X before T is at least as great as the total utility in Y before T. Then X is at least as good as Y. (A small illustrative sketch follows this list.)
    • Note that this is not a complete ordering! In fact, as per Zame’s result above, the set of pairs of populations it can meaningfully compare has measure zero.
  • Partial translation scale invariance: suppose after some time T, X and Y become the same. Then we can add any arbitrary utility vector A to both X and Y without changing the ordering, provided X+A and Y+A are still valid utility vectors. (I.e. X > Y iff X+A > Y+A)
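Here is a hedged sketch of the utilitarian criterion just defined. It assumes the caller already knows a time T after which X dominates Y generation-by-generation (that knowledge is an input, not something the code verifies), and it only ever sees the finite prefixes up to T.

```python
# Sketch of the (partial!) utilitarian ordering on infinite streams.
# x_prefix, y_prefix: utilities up to some hypothetical time T.
# tail_x_dominates: the caller's assumption that x_t >= y_t for every t > T.

def utilitarian_at_least_as_good(x_prefix, y_prefix, tail_x_dominates):
    if not tail_x_dominates:
        return None  # the criterion is simply silent about this pair
    return sum(x_prefix) >= sum(y_prefix)

# X = (0.9, 0.2, 0.5, 0.5, ...) vs Y = (0.1, 0.3, 0.5, 0.5, ...), identical after T = 2
print(utilitarian_at_least_as_good([0.9, 0.2], [0.1, 0.3], tail_x_dominates=True))  # True
```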

4.2 Theorem

  1. Utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity and partial translation scale invariance.
    1. This means that if we want to extend utilitarianism to the infinite case, we can’t use a social welfare function, as per the Diamond-Basu-Mitra result above

4.3 Definition

  • Overtaking utilitarianism: suppose there is some time T after which the total utility of the first N generations in X is always greater than the total utility of the first N generations in Y (for all N > T). Then X is better than Y.
    • Note that utilitarianism is a subrelation of overtaking utilitarianism
  • Weak limiting preference: suppose that for any time T, X truncated at time T is better than Y truncated at time T. Then X is better than Y.
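A sketch of the overtaking idea on a finite window (again illustrative: the real definition quantifies over all N > T, which no finite check can establish):

```python
from itertools import accumulate

def overtakes_on_window(x_prefix, y_prefix, T):
    """Check that for every N with T < N <= len(prefix), the total utility of
    the first N generations of X strictly exceeds that of Y. A True answer
    only covers the window we can actually see."""
    partial_x = list(accumulate(x_prefix))
    partial_y = list(accumulate(y_prefix))
    return all(px > py for px, py in zip(partial_x[T:], partial_y[T:]))

x = [0.0, 0.9, 0.9, 0.9]
y = [0.5, 0.5, 0.5, 0.5]
print(overtakes_on_window(x, y, T=2))  # True: from N = 3 on, X's running total stays ahead
```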

4.4 Theorem

  1. Overtaking utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity, partial translation scale invariance, and weak limiting preference

4.5 Definition

  • Discounted utilitarianism: the utility of a population is the sum of its components, discounted by how far away in time they are
  • Separability:
    • Separable present: if you can improve the first T generations without affecting the rest, you should
    • Separable future: if you can improve everything after the first T generations without affecting the rest, you should
  • Stationarity: preferences are time invariant: if two utility vectors have the same first-generation utility, they are ranked the same way as their continuations from the second generation onward
  • Weak sensitivity: for any utility vector, we can modify its first generation somehow to make it better or worse
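A minimal sketch of discounted utilitarianism, assuming a geometric discount factor beta (the functional form and the value of beta here are my illustrative choices):

```python
def discounted_value(prefix, beta=0.9):
    """Sum of utilities weighted by beta**t; the finite prefix stands in for
    an infinite stream whose (bounded) tail contributes at most
    beta**len(prefix) / (1 - beta)."""
    return sum((beta ** t) * u for t, u in enumerate(prefix))

print(discounted_value([1.0, 1.0, 1.0], beta=0.5))  # 1.75
```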

4.6 Theorem

  1. The only continuous, monotonic relation which obeys weak sensitivity, stationarity, and separability is discounted utilitarianism

4.7 Definition

  • Dictatorship of the present: there’s some time T after which changing the utility of generations doesn’t matter (the T may depend on which two populations are being compared)

4.8 Theorem

  1. Discounted utilitarianism results in a dictatorship of the present. (Remember that each generation’s utility is assumed to be bounded!)
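To see why boundedness plus discounting produces a dictatorship of the present, here is the arithmetic in miniature, assuming utilities lie in [0,1] and an illustrative discount factor of 0.5: everything after time T can contribute at most beta^T / (1 - beta), so once one population has a bigger prefix advantage than that, nothing done to later generations can reverse the ranking.

```python
beta, T = 0.5, 3
# The entire tail after time T contributes at most this much (utilities in [0, 1]):
max_tail_contribution = beta ** T / (1 - beta)                     # 0.25
# Advantage of (1, 1, 1, ...) over (0, 0, 0, ...) on the first T generations alone:
prefix_advantage = sum(beta ** t * (1.0 - 0.0) for t in range(T))  # 1.75
print(prefix_advantage > max_tail_contribution)  # True: the first 3 generations "dictate"
```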

4.9 Definition

  • Sustainable preference: a continuous ordering which doesn’t have a dictatorship of the present but follows strong Pareto and separability.

4.10 Theorem

  1. The only ordering which is sustainable is to take discounted utilitarianism and add an “asymptotic” part which ensures that infinitely long changes in utility matter. (Of course, changes to only finitely many generations still won't move this asymptotic part.)
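Below is a hedged sketch of what such a criterion might look like: a weighted mix of a discounted sum and an asymptotic term, in the spirit of Chichilnisky-style sustainable preferences. The true asymptotic part is defined by a limit over the whole infinite stream; averaging the tail of a long finite prefix, and the particular weights alpha and beta, are assumptions of this sketch rather than part of the theorem.

```python
def sustainable_value(prefix, beta=0.9, alpha=0.7, tail_window=100):
    """alpha * (discounted sum) + (1 - alpha) * (proxy for the asymptotic part).
    The tail average is only a stand-in for the limiting behaviour of an
    infinite stream."""
    discounted = sum((beta ** t) * u for t, u in enumerate(prefix))
    window = min(tail_window, len(prefix))
    asymptotic = sum(prefix[-window:]) / window
    return alpha * discounted + (1 - alpha) * asymptotic

present = [0.5] * 150
flat_future   = present + [0.5] * 100   # stays flat (a long prefix standing in for "forever")
better_future = present + [0.9] * 100   # identical present, much better far future
# Pure discounting barely registers the difference; the asymptotic term does.
print(round(sustainable_value(better_future) - sustainable_value(flat_future), 3))  # ~0.12
```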

5. Conclusion

I hope I've convinced you that there's a "there" there: infinite ethics is something that people can make progress on, and it seems that most of the progress is being made in the field of sustainable development.

Fun fact: the author of the last theorem (the one which defined "sustainable") was one of the lead economists on the Kyoto protocol. Who says infinite ethics is impractical?

6. References

 

  1. Basu, Kaushik, and Tapan Mitra. "Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian." Econometrica 71.5 (2003): 1557-1563. http://folk.uio.no/gasheim/zB%26M2003.pdf
  2. Zame, William R. "Can intergenerational equity be operationalized?" (2007). https://tspace.library.utoronto.ca/bitstream/1807/9745/1/1204.pdf
  3. Asheim, Geir B. "Intergenerational equity." Annu. Rev. Econ. 2.1 (2010): 197-222. http://folk.uio.no/gasheim/A-ARE10.pdf

 

Comments (71)

Comment author: Lila 02 January 2015 04:13:31PM 6 points [-]

One thing I liked about this post is that it was written in English, instead of math symbols. I find it extremely hard to read a series of equations without someone explaining them verbally. Overall I thought the clarity was fairly good.

Comment author: Brian_Tomasik 02 January 2015 09:28:36PM *  3 points [-]

Thanks for the summary. :)

I don't understand why they're working on infinite vectors of future populations, since it looks very likely that life will end after a finite length of time into the future (except for Boltzmann brains). Maybe they're thinking of the infinity as extended in space rather than time? And of course, in that case it becomes arbitrary where the starting point is.

we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness.

Actually, every action we take makes an infinite difference. I was going to write more explanation here but then realized I should add it to my essay on infinity: here.

Comment author: Ben_West  (EA Profile) 03 January 2015 01:52:05PM *  1 point [-]

Thanks Brian – insightful as always.

  1. It might be the case that life will end after time T. But that's different than saying it doesn't matter whether life ends after time T, which a truncated utility function would say.
  2. (But of course see theorem 4.8.1 above)
  3. Thanks for the insight about multiverses – I haven't thought much about it. Is what you say only true in a level one multiverse?
Comment author: Brian_Tomasik 03 January 2015 04:01:37PM *  1 point [-]

1) Fair enough. Also, there's some chance we can affect Boltzmann brains that will exist indefinitely far into the future. (more discussion)

3) I added a new final paragraph to this section about that. Short answer is that I think it works for any of Levels I to III, and even with Level IV it depends on your philosophy of mathematics.

(Let me know if you see errors with my facts or reasoning.)

Comment author: Ben_West  (EA Profile) 03 January 2015 06:06:06PM 1 point [-]

1) interesting, thanks! 3) I don't think I know enough about physics to meaningfully comment. It sounds like you are disagreeing with the statement "we can plausibly only affect a finite subset of the universe"? And I guess more generally if physics predicts a multiverse of order w_i, you claim that we can affect w_i utils (because there are w_i copies of us)?

Comment author: Brian_Tomasik 03 January 2015 07:51:20PM *  0 points [-]

Yes, I was objecting to the claim that "we can plausibly only affect a finite subset of the universe". Of course, I guess it remains plausible that we can only affect a finite subset; I just wouldn't say it's highly probable.

you claim that we can affect w_i utils

Yes, unless the type of multiverse predicts that the measure of copies of algorithms like ours is zero. That doesn't seem true of Levels I to III.

Also, if one uses my (speculative) physics-sampling assumption for anthropics, a hypothesis that predicts measure zero for copies of ourselves has probability zero. On the other hand, the self-indication assumption would go hog wild for a huge Level IV multiverse.

Comment author: Geuss 03 January 2015 01:41:07PM *  6 points [-]

"The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective."

I have no particular objection to those, unlike me, interested in aggregative ethical dilemmas, but I think it at least preferable that effective altruism - a movement aspiring to ecumenical reach independent of any particular ethical presuppositions - not automatically presume some cognate of utilitarianism. The repeated posts on this forum about decidedly abstract issues of utilitarianism with little or no connection with the practice of charitable giving is, perhaps, not particularly helpful in this regard. Most basically however, I object to your equivalence of altruism and utilitarianism as a matter of form: that should not be assumed, but qualified.

Comment author: Ben_West  (EA Profile) 03 January 2015 06:12:23PM *  2 points [-]

The problems with extending standard total utilitarianism to the infinite case are the easiest to understand, which is why I put that in the summary, but I don't think most of the article was about that.

For example, the fact that you can't have intergenerational equity (Thm 3.2.1) seems pretty important no matter what your philosophical bent.

Comment author: Geuss 04 January 2015 01:48:17AM 1 point [-]

A minuscule proportion of political philosophy has concerned itself with aggregative ethics, and in my being a relatively deep hermeneutical contextualist, I take what is important to them to be what they thought to be important to them, and thus your statement - that intergenerational equity is perennially important - as patently wrong. Let alone people not formally trained in philosophy.

The fact I have to belabour that most of those interested in charitable giving are not by implication automatically interested in the 'infinity problem' is exactly demonstrative of my initial point, anyhow, i.e. of projecting highly controversial ethical theories, and obscure concerns internal to them, as obviously constitutive of, or setting the agenda for, effective altruism.

Comment author: RyanCarey 03 January 2015 03:29:30PM *  0 points [-]

This seems reasonable to me. Assuming aggregative ethics only and examining niche issues within it are probably not diplomatically ideal for this site. Especially when one could feasibly get just as much attention for this kind of post on LessWrong.

That'd suggest that if people want to write more material like this, it might fit better elsewhere. What do others think?

Comment author: Lila 03 January 2015 05:14:20PM 4 points [-]

I found the OP useful. If it were on LW, I probably wouldn't have seen it. I don't go on LW because there's a lot of stuff I'm not interested in compared to what I am interested in (ethics). Is there a way to change privacy settings so that certain posts are only visible to people who sign in or something?

Comment author: RyanCarey 03 January 2015 05:35:23PM *  0 points [-]

Thanks for the data point!

Sadly the forum doesn't have that kind of feature. Peter and Tom are starting to work through a few minor bugs and feature requests but wouldn't be able to implement something like that in the foreseeable future.

I can see why it would be convenient for utilitarian EAs to read this kind of material here. But equally, there's a couple of issues with posting stuff about consequentialism. First, it's more abstract than seems optimal, and secondly, it's presently not balanced with discussion about other systems of ethics. As you're already implying with the filtering idea, if the EA Forum became an EA/Consequentialism Forum, that would be a real change that lots of people would not want.

Would you have found this post more easily if it was posted on Philosophy for Programmers and linked from the Utilitarianism Facebook group?

Comment author: Lila 03 January 2015 08:32:01PM 3 points [-]

I'm trying to use Facebook less, and I don't check the utilitarianism group, since it seems to have fallen into disuse.

I have to disagree that consequentialism isn't required for EA. Certain EA views (like the shallow pond scenario) could be developed through non-consequentialist theories. But the E part of EA is about quantification and helping as many beings as possible. If that's not consequentialism, I don't know what is.

Maybe some non-utilitarian consequentialist theories are being neglected. But the OP could, I think, be just as easily applied to any consequentialism.

Comment author: Geuss 04 January 2015 02:00:58AM 3 points [-]

The 'E' relates to efficiency, usually thought of as instrumental rationality, which is to say, the ability to conform one's means with one's ends. That being the case, it is entirely apart from the (moral or non-moral) end by which it is possessed.

I have reasons for charitable giving independent of utilitarianism, for example, and thus find the movement's technical analysis of the instrumental rationality of giving highly valuable.

Comment author: RyanCarey 03 January 2015 09:08:26PM 1 point [-]

You can believe that you want to help people a lot, and that it's a virtue to investigate where those funds are going, so you want to be a good person by picking charities that help lots of people. Whether there's infinite people is irrelevant to whether you're a virtuous helper.

You might like giving to givewell just because, and not feel the need for recourse to any sense of morality.

The other problem is that there's going to be some optimal level of abstraction that most of the conversation at the forum could be at in order to encourage people to actually get things done, and I just don't think that philosophical analysis of consequentialism is that optimal level for most people. I've been there and discussed those issues a lot for years, and I'd just like to move past it and actually do things, y'know :p

Still happy for Ben to think about it because he's smart, but it's not for everyone!

Comment author: Giles 04 January 2015 03:32:51AM 0 points [-]

there's going to be some optimal level of abstraction

I'm curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:

http://effective-altruism.com/ea/b2/open_thread_5/1fe

Also, I know that I'd really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.

Comment author: RyanCarey 04 January 2015 12:32:30PM *  1 point [-]

Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum and for that there's also: 80,000 Hours, Cause prioritisation wiki and Open Philanthropy Project.

If you're going for max number of years of utility per dollar, then you'll be looking at x-risk, as it's the cause that most credibly claims an impact that extends far in time (there aren't yet credible "trajectory changes"). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding, but applying for million dollar grants, so it seems to be best-leveraged.

Comment author: Brian_Tomasik 04 January 2015 03:41:00PM 0 points [-]

there aren't yet credible "trajectory changes"

I strongly disagree. :)

It's obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn't, it should soon move in that direction.)

Depending how one defines "x-risk", many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges. For instance, whether the takeoff is unipolar or multipolar, what kinds of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing the fact of whether or not the galaxy gets colonized.

Comment author: RyanCarey 04 January 2015 03:51:34PM 1 point [-]

I agree. I'd be clearer if I said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations, e.g. the period where AI takes off, and are managed by organisations that think about x-risk.

Comment author: pappubahry 02 January 2015 03:17:47AM 0 points [-]

The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists

This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably local region, like the Earth. No-one responds to the drowning child by saying, "well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit". It is just not a consideration.

So I disagree very strongly with the framing of your post, since the bit I quoted is in the summary. The rest of your post is on the somewhat more reasonable topic of comparing utilities across an infinite number of generations. I don't really see the use of this (you don't need a fully developed theory of infinite ethics to justify a carbon tax; considering a handful of generations will do), and don't see the use of the post on this forum, but I'm open to suggestions of possible applications.

Comment author: Ben_West  (EA Profile) 02 January 2015 02:21:09PM *  2 points [-]

Thanks for the feedback. Couple thoughts:

  1. I actually agree with you that most people shouldn't be worried about this (hence my disclaimer that this is not for a general audience). But that doesn't mean no one should care about it.
  2. Whether we are concerned about an infinite amount of time or an infinite amount of space doesn't really seem relevant to me at a mathematical level, hence why I grouped them together.
  3. As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read the paper of Nick Bostrom's that I linked above, since I think "just look in a local region" is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to earth.
Comment author: pappubahry 02 January 2015 03:01:24PM 2 points [-]

Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.

  1. I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.

  2. Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)

  3. Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there's a natural way to weight all the branches if you want to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, ....)

Comment author: Lila 02 January 2015 04:11:38PM 4 points [-]

I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn't run into any asymptotic issues?

Comment author: AGB 02 January 2015 11:31:03AM 2 points [-]

"No-one responds to the drowning child by saying, "well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit". It is just not a consideration."

"It is not an issue for altruists otherwise -- everyone saves the drowning child."

I don't understand what you are saying here. Are you claiming that because 'everyone' does do X or because 'noone' does not do X (putting those in quotation marks because I presume you don't literally mean what you wrote, rather you mean the 'vast majority of people would/would not do X'), X must be morally correct?

That strikes me as...problematic.

Comment author: pappubahry 02 January 2015 12:35:50PM 1 point [-]

Letting the child drown in the hope that

a) there's an infinite number of life-forms outside our observable universe, and

b) that the correct moral theory does not simply require counting utilities (or whatever) in some local region

strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.

Comment author: Pablo_Stafforini 02 January 2015 11:28:11PM *  4 points [-]

More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.

Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure was followed consistently, it would disprove all moral theories.

Comment author: pappubahry 03 January 2015 03:08:00AM 0 points [-]

If that procedure was followed consistently, it would disprove all moral theories.

I consider this a reason to not strictly adhere to any single moral theory.

Comment author: Pablo_Stafforini 03 January 2015 06:37:55AM *  8 points [-]

I consider this a reason to not strictly adhere to any single moral theory.

This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don't adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn't wrong, and a theory's implying that not rescuing the child isn't wrong cannot therefore be a reason for rejecting this theory.

Comment author: pappubahry 03 January 2015 08:36:22AM 2 points [-]

OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):

  • In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.

  • Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can write papers on", rather than what is actually important. The repugnant conclusion falls under this category.

Whichever way it works out, I stick resolutely to saving the drowning child.

Comment author: AGB 03 January 2015 06:22:16PM 1 point [-]

Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'?

Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.

Comment author: pappubahry 04 January 2015 01:59:16AM 3 points [-]

Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'?

Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.

Comment author: Pablo_Stafforini 04 January 2015 05:12:34AM *  1 point [-]

I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual.

But maybe I'm misunderstanding what you meant by "not actually important"?

Comment author: Geuss 03 January 2015 01:20:04PM 1 point [-]

I think it quite obvious that if one does not observe a given theory they are not thereby disarmed from criticism of such a theory, similarly, a rejection of moralism is not equivalent with your imputed upshot that "nothing is right or wrong" (although we can imagine cases in which that could be so). In the case of the former, critiquing a theory adhering to but contradicting intuitionistic premises is a straightforward instance of immanent critique. In the case of the latter, quite famously, neither Bernard Williams nor Raymond Geuss had any truck with moralism, yet clearly were not 'relativists'.

Comment author: Gregory_Lewis 03 January 2015 07:17:04AM 1 point [-]

I sympathize with this. It seems likely that the accessible population of our actions is finite, so I'm not sure one necessarily needs to be worried about what happens in the infinite case. I'm unworried if my impact on earth across its future is significantly positive, yet the answer of whether I've made the (possibly infinite) universe better is undefined.

However, one frustration to this tactic is that infinitarian concerns can 'slip in' whenever afforded a non-zero credence. So although given our best physics it is overwhelmingly likely the morally relevant domain of our actions will be constrained by a lightcone only finitely extended in the later-than direction (because of heat death, proton decay, etc.), we should assign some non-zero credence that our best physics will be mistaken: perhaps life-permitting conditions could continue indefinitely, or we could wring out life asymptotically faster than the second law, etc. These 'infinite outcomes' swamp the expected value calculation, and so infinitarian worries loom large.

Comment author: RyanCarey 05 January 2015 12:53:29AM *  1 point [-]

Putting to one side my bias towards aggregative consequentialism, someone has to say that to anyone except a radical consequentialist, the classic 'hope physics is broken' example does make you seem crazy and consequentialism seem wrong! :p

Comment author: Larks 05 January 2015 12:26:22AM 0 points [-]

Or perhaps uncertainty to the size of the universe might lead to similar worries, if we merely know it is finite, but do not have a bound.

Comment author: Pablo_Stafforini 02 January 2015 05:27:44AM *  1 point [-]

The text immediately following the passage you quoted reads:

for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness.

This implies that the quantity of happiness in the universe stays the same after you save the drowning child. So if your reason for saving the child is to make the world a better place, you should be troubled by this implication.

Comment author: pappubahry 02 January 2015 05:36:35AM 1 point [-]

That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise -- everyone saves the drowning child.

Comment author: Lawrence 13 January 2015 10:11:56PM 1 point [-]

I am curious about your definitions: intergenerational equity and finite intergenerational equity. I am aware that some literature suggests that finite permutations are not enough to ensure equity among an infinite number of generations. The quality of the argumentation in this literature is often not so good. Do you have a reference that gives a convincing argument for why your notion of intergenerational equity is appropriate and/or desirable? I hope this does not sound like I am questioning whether your definition is consistent with the literature: I am only asking out of interest.

Comment author: Ben_West  (EA Profile) 18 January 2015 02:42:09AM *  1 point [-]

Good question. It's easiest to imagine the one-dimensional spatial case like (..., L2, L1, me, R1, R2, ...) where {Li} are people to my left and {Ri} are those to my right. If I turn 180° this permutes the vector to (..., R1, me, L1, ...), which is a permutation that moves infinitely many people, but seems morally unobjectionable.

Comment author: Lawrence 18 January 2015 08:17:29PM 1 point [-]

Thank you for the example. I have two initial comments and possibly more if you are interested.

1. In all of the literature on the problem, the sequences that we compare specify social states. When we compare x=(x1,x2,...) and y=(y1,y2,...) (or, as in your example, x=(...,x0,x1,x2,...) and y=(...,y0,y1,y2,...)), we are doing it with the interpretation that x_t and y_t give the utility of the same individual/generation in the two possible social states. For the two sequences in your example, it does not seem to be the case that x_t and y_t give the utility of the same individual in two possible states. Rather, it seems that we are re-indexing the individuals.

2. I agree that moral preferences should generally be invariant to re-indexing, at least in a spatial context (as opposed to an intertemporal context). Let us therefore modify your example so that we have specified utilities x_t, y_t, where t ranges over the integers and x_t and y_t represent the utilities of people located at positions on a doubly infinite line. Now I agree that an ethical preference relation should be invariant under some (and possibly all) infinite permutations IF the permutation is performed to both sequences. But it is hard to give an argument for why we should have invariance under general permutations of only one stream.

The example is still unsatisfactory for two reasons. (i) Since we are talking about intergenerational equity, the t in x_t should be time, not points in space where individuals live at the same time: it is not clear that the two cases are equivalent. (They may in fact be very different.) (ii) In almost all of the literature (in particular, in all three references in the original post), we consider one-sided sequences, indexed by time starting today and to the infinite future. Are you aware of an example in this context?

Comment author: Ben_West  (EA Profile) 20 January 2015 04:01:01PM 1 point [-]

Thank you for the thoughtful comment.

For the two sequences in your example, it does not seem to be the case that x_t and y_t give the utility of the same individual in two possible states. Rather, it seems that we are re-indexing the individuals.

This is true. I think an important unstated assumption is that you only need to know that someone has utility x, and you shouldn't care who that person is.

Now I agree that an ethical preference relation should be invariant under some (and possibly all) infinite permutations IF the permutation is performed to both sequences. But it is hard to give an argument for why we should have invariance under general permutations of only one stream.

I'm not sure what the two sequences you are referring to are. Anonymity constraints simply say that if y is a permutation of x, then x~y.

in almost all of the literature (in particular, in all three references in the original post), we consider one-sided sequences, indexed by time starting today and to the infinite future. Are you aware of example in this context?

It is a true and insightful remark that whether we consider vectors to be infinite or doubly infinite makes a difference.

To my mind, the use of vectors is misleading. What it means to not care about temporal location is really just that you treat populations as sets (not vectors) and so anonymity assumptions aren't really required.

I guess you could phrase that another way and say that if you don't believe in infinite anonymity, then you believe that temporal location matters. This disagrees with general utilitarian beliefs. Nick Bostrom talks about this more in section 2.2 of his paper linked above.

A more mathy way that's helpful for me is to just remember that the relation should be continuous. Say s_n(x) is a permutation of n components. By finite anonymity we have that x ~ s_n(x) for any finite n. If lim_{n -> infinity} s_n(x) = y, yet y were morally different from x, the relation would be discontinuous, and this would be a very odd result.

Comment author: Lawrence 20 January 2015 08:59:31PM 0 points [-]

I would not only say that "that you only need to know that someone has utility x, and you shouldn't care who that person is" is an unstated assumption. I would say that it is the very idea that anonymity intends to formalize. The question that I had and still have is whether you know of any arguments for why infinite anonymity is suitable to operationalize this idea.

Regarding the use of sequences: you can't just look at sets. If you do, all nontrivial examples with utilities that are either 0 or 1 become equivalent. You don't have to use sequences, but you need (in the notation of Vallentyne and Kagan (1997)), a set of "locations", a set of real numbers where utility takes values, and a map from the location set to the utility set.

Regarding permutations of one or two sequences. One form of anonymity says that x ~ y if there is a permutation, say pi, (in some specified class) that takes x to y. Another (sometimes called relative anonymity) says that if x is at least as good as y, then pi(x) is at least as good as pi(y). These two notions of anonymity are not generally the same. There are certainly settings where the full-blown version of relative anonymity becomes a basic rationality requirement. This would be the case with people lined up on an infinite line (at the same point in time). But it is not hard to see its inappropriateness in the intertemporal context: you would have to rank the following two sequences (periodic with period 1000) as equivalent or non-comparable:

x = (1,1,...,1,0,1,1,...,1,0,1,1,...,1,...)
y = (0,0,...,0,1,0,0,...,0,1,0,0,...,0,...)

This connects to whether denying infinite anonymity implies that "temporal location matters". If x and y above are two possible futures for the same infinite-horizon society, then I think that any utilitarian should be able to rank x above y without having to be criticized for caring about temporal location. Do you agree? For those who do not, equity in the intertemporal setting is the same thing as equity in the spatial (fixed time) setting. What those people say is essentially that intergenerational equity is a trivial concept: that there is nothing special about time.

If you do not think that the sequences x and y above should be equivalent in the intergenerational context, then I would be very interested to see another example of sequences (or whatever you replace them with) that are infinite permutations of each other, but not finite permutations of each other, and where you do think that equivalence should hold.

P.S

Regarding continuity arguments, I assume that the usefulness of such arguments depends on whether you can justify your notion of continuity by ethical principles rather than by the fact that they appear in the mathematical literature. Take x(n)=(0,0,...,1,0,0,...) with a 1 in the n-th coordinate. For every n we want x(n) to be equivalent to (1,0,0,...). In many topologies x(n) goes to (0,0,0,...), which would then give that (0,0,...) is just as good as (1,0,0,...).

Comment author: Ben_West  (EA Profile) 05 February 2015 02:50:25PM 1 point [-]

The question that I had and still have is whether you know of any arguments for why infinite anonymity is suitable to operationalize this idea.

Maybe I am missing something, but it seems obvious to me. Here is my thought process; perhaps you can tell me what I am overlooking.

For simplicity, say that A is the assumption that we shouldn't care who people are, and IA is the infinite anonymity assumption. We wish to show A <-> IA.

  1. Suppose A. Observe that any permutation of people can't change the outcome, because it's not changing any information which is relevant to the decision (as per assumption A). Thus we have IA.
  2. Suppose IA. Observe that it's impossible to care about who people are, because by assumption they are all considered equal. Thus we have A.
  3. Hence A <-> IA.

These seem so obviously similar in my mind that my "proof" isn't very insightful… But maybe you can point out to me where I am going wrong.

One form of anonymity says that x ~ y if there is a permutation, say pi, (in some specified class) that takes x to y. Another (sometimes called relative anonymity) says that if x is at least as good as y, then pi(x) is at least as good as pi(y). These two notions of anonymity are not generally the same.

I hadn't heard about this – thanks! Do you have a source? Google scholar didn't find much.

In your above example, is the pi in pi(x) the same as the pi in pi(y)? I guess it must be, because otherwise these two types of anonymity wouldn't be different, but that seems weird to me.

If x and y above are two possible futures for the same infinite-horizon society, then I think that any utilitarian should be able to rank x above y without having to be critisized for caring about temporal location. Do you agree?

I certainly understand the intuition, but I'm not sure I fully agree with it. The reason I think that x is better than y is that it seems to me that x is a Pareto improvement. But it's really not – there is no generation in x who is better off than another generation in y (under a suitable relabeling of the generations).

I would be very interested to see another example of sequences (or whatever you replace them with) that are infinite permutations of each other, but not finite permutations of each other, and where you do think that equivalence should.

(0,1,0,1,0,1,...) and (1,0,1,0,1,0,...) come to mind.

Comment author: Lawrence 07 February 2015 09:03:44PM *  0 points [-]

The problem in your argument is the sentence "...any permutation of people can't change the outcome...". For example: what does "any permutation" mean? Should the permutation be applied to both sequences? In a finite context, these questions would not matter. In the infinite-horizon context, you can make mistakes if you are not careful. People who write on the subject do make mistakes all the time. To illustrate, let us say that I think that a suitable notion of anonymity is FA: for any two people p1 and p2, p1's utility is worth just as much as p2's. Then I can "prove" that A <-> FA by your method. The A -> FA direction is the same. For FA -> A, observe that if for any two people p1 and p2, p1's utility is worth just as much as p2's, then it is not possible to care about who people are.

This "proof" was not meant to illustrate anything besides the fact that if we are not careful, we will be wasting our time.

I did not get a clear answer to my question regarding the two (intergenerational) streams with period 1000: x=(1,1,...,1,0,1,1,...) and y=(0,0,...,0,1,0,0,...). Here x does not Pareto-dominate y.

Regarding (0,1,0,1,...) and (1,0,1,0,...): I am familiar with this example from some of the literature. Recall in the first post that I wrote that the argumentation in much of the literature is not so good? This is the literature that I meant. I was hoping for more.

Comment author: Ben_West  (EA Profile) 10 February 2015 08:04:42PM 0 points [-]

Fair enough. Let me phrase it this way: suppose you were blinded to the location of people in time. Do you agree that infinite anonymity would hold?

Comment author: Lawrence 11 February 2015 10:21:06AM *  0 points [-]

I will try to make the question more specific and then answer it. Suppose you are given two sequences x=(x1,x2,…) and y=(y1,y2,…) and that you are told that x_t is not necessarily the utility of generation t, but that it could be the utility of some other generation. Should your judgements then be invariant under infinite permutations? Well, it depends. Suppose I know that x_t and y_t are the utilities of the same generation – but not necessarily of generation t. Then I would still say that x is better than y if x_t > y_t for every t. Infinite anonymity in its strongest form (the one you called intergenerational equity) does not allow you to make such judgements. (See my response to your second question below.) In this case I would agree to the strongest form of relative anonymity however. If I do not know that x_t and y_t give the utility of the same generation, then I would agree to infinite anonymity. So the answer is that sure, as you change the structure of the problem, different invariance conditions will become appropriate.

Comment author: Ben_West  (EA Profile) 22 February 2015 02:44:16PM 0 points [-]

Thank you for the clarification and references – it took me a few days to read and understand those papers.

I don't think there are any strong ways in which we disagree. Prima facie, prioritizing the lives of older (or younger) people seems wrong, so statements like "I know that x_t and y_t are the utilities of the same generation" don't seem like they should influence your value judgments. However, lots of bizarre things occur if we act that way, so in reflective equilibrium we may wish to prioritize the lives of older people.

Comment author: Ben_West  (EA Profile) 10 February 2015 08:47:11PM 0 points [-]

By the way, one version of what you might be saying is: "both infinite anonymity and the overtaking criterion seem like reasonable conditions. But it turns out that they conflict, and the overtaking criterion seems more reasonable, so we should drop infinite anonymity." I would agree with that sentiment.

Comment author: Lawrence 11 February 2015 10:19:43AM 0 points [-]

Forget overtaking. Infinite anonymity (in its strongest form – the one you called intergenerational equity) is incompatible with the following requirement: if everyone is better off in state x=(x1,x2,..) than in state y=(y1,y2,..), then x is better than y. See e.g. the paper by Fleurbaey and Michel (2003).

Comment author: Lawrence 08 February 2015 09:28:31AM *  0 points [-]

I forgot the reference for relative anonymity: See the paper by Asheim, d'Aspremont and Banerjee (J. Math. Econ., 2010) and its references.

Comment author: AlexMennen 05 January 2015 07:25:00AM *  1 point [-]

Some kind of nitpicky comments:

3.2: Note that the definition of intergenerational equity in Zame's paper is what you call finite intergenerational equity (and his definition of an ethical preference relation involves the same difference), so his results are actually more general than what you have here. Also, I don't think that "almost always we can’t tell which of two populations is better" is an accurate plain-English translation of "{X,Y: neither X<Y nor X>Y} has outer measure one", because we don't know anything about the inner measure. In fact, if the preference relation respects the weak Pareto ordering, then {X,Y: neither X<Y nor X>Y} has inner measure 0. So an ethical preference relation must be so wildly nonmeasurable that nothing at all can be said about the frequency with which we can't tell which of two populations is better.

4.1:

Partial translation scale invariance: suppose after some time T, X and Y become the same. Then we can add any arbitrary utility vector A to both X and Y without changing the ordering. (I.e. X > Y iff X+A > Y+A)

X+A and Y+A won't necessarily be valid utility vectors. I assume you also want to add the condition that they are.

4.3: What does "truncated at time T" mean? All utilities after time T replaced with some default value like 0?

4.5:

Weak sensitivity: for any utility vector, we can modify its first generation somehow to make it better

Since you defined utilities as being in the closed interval [0,1], if you have a utility vector starting with 1, you can't get anything better just by modifying the first generation, so weak sensitivity should never hold in any sensible preference relation. I'm guessing you mean that we can modify its first generation to make it either better or worse (not necessarily both, unless you switch to open-interval-valued utilities).

4.7: Your definition of dictatorship of the present naively sounded to me like it's saying "there's some time T after which changing utilities of generations cannot affect the ordering of any pairs of utility vectors." But from theorem 4.8, I take it you actually meant "for any pair of utility vectors X and Y such that X<Y, there exists a time T such that changing utilities of generations after T cannot reverse the preference to get X>=Y."

Comment author: Ben_West  (EA Profile) 13 January 2015 01:21:39AM *  0 points [-]

Thanks!

3.2 good catch – I knew I was gonna mess those up for some paper. I'm not sure how to talk about the measurability result though; any thoughts on how to translate it?

4.3 basically, yeah. It's easier for me to think about it just as a truncation though

4.5 yes you're right – updated

4.7 yes, that's what I mean. Introducing quantifiers seems to make things a lot more complicated though

Comment author: AlexMennen 17 January 2015 07:51:39AM 0 points [-]

I'm not sure how to talk about the measurability result though; any thoughts on how to translate it?

Unfortunately, I can't think of a nice ordinary-language way of talking about such nonmeasurability results.

Comment author: capybaralet 29 August 2016 11:34:58PM *  0 points [-]

This is really interesting stuff, and thanks for the references.

A few comments:


It'd be nice to clarify what "finite intergenerational equity over [0,1]^N" means (specifically, the "over [0,1]^N" bit).

Why isn't the sequence 1,1,1,... a counter-example to Thm 4.8 (dictatorship of the present)? I'm imagining exponential discounting, e.g. of 1/2, so the welfare function of this should return 2 (but a different number if u_t is changed, for any t).

Comment author: Ben_West  (EA Profile) 30 August 2016 08:09:55PM 1 point [-]

Thanks for the comments!

Regarding your second question: the idea is that if x is better than y, then there is a point in time after which improvements to y, no matter how great, will never make y better than x.

So in your example where there is a constant discount rate of one half: (1, 1, 1, (something)) will always be preferred to (0, 0, 0, (something else)), no matter what we put in for (something) and (something else). In this sense, the first three generations "dictate" the utility function.

As you point out, there is no single time at which dictatorship kicks in, it will depend on the two vectors you are comparing and the discount rate.

Comment author: Lila 01 January 2015 10:00:16PM 0 points [-]

In the Basu-Mitra result, when you use the term "Pareto", do you mean strong or weak?

I found the section on possibility results confusing.

In this sentence you appear to use X and Y to refer to properties: "Basically, we can show that if < were a “reasonable” preference relation that had property X then it must also have property Y. (of course, we cannot show that < is reasonable.)"

But here you appear to use X and Y to refer to utility vectors: "For example, say that X<Y if both X and Y are finite and the total utility of X is less than that of Y."

Did you duplicate variables, or am I misreading this?

General note: if you numbered your headings and subheadings, e.g. (1, 1.1, 1.1.1), it would make it easier to refer back to them in comments.

Comment author: Ben_West  (EA Profile) 01 January 2015 11:46:52PM 0 points [-]

Updated, thanks!

Comment author: ericyu3 01 January 2015 07:25:26PM 0 points [-]

Sorry, did you mean to save this in your drafts?