This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! 
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. 

Inspired by Common-sense cases where "hypothetical future people" matter.

I agree with the general idea that discounting temporally distant people due to a pure time preference doesn't make sense, in the same way that discounting geographically distant people due to a location preference doesn't seem justified. This seems to be a common perspective in EA.

Does it make sense to discount future people in proportion to the probability that they will not exist? This seems ever-so-vaguely related to the idea of epistemic humility, and recognizing that we cannot know with certainty what the future will be like. It also seems vaguely related to the idea of acting on values rather than focusing on specific causes, as in the example of ScotsCare. The farther out in the future we project, the higher the uncertainty, and thus the more we should discount. So maybe, from my stance in 2022, I should prioritize Alice (who was born in 2020) more than Bob (who is expected to be born in 2030), who is in turn prioritized more than Carl (who is expected to be born in 2040), simply because I know Alice exists, whereas Bob might end up never existing, and Carl has an even higher probability of never existing.
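
To make that concrete, here is a minimal sketch of what existence-probability weighting could look like. The probabilities for Bob and Carl are made-up placeholders, not estimates:

```python
# Illustrative only: weight each person's welfare by the probability that
# they will ever exist. Conditional on existing, everyone counts equally.

people = {
    "Alice (born 2020)": 1.00,    # already exists
    "Bob (expected 2030)": 0.80,  # hypothetical probability of ever existing
    "Carl (expected 2040)": 0.60, # hypothetical; further out, so more uncertain
}

value_if_exists = 1.0  # equal moral weight, conditional on existence

for name, p_exists in people.items():
    print(f"{name}: expected weight = {p_exists * value_if_exists:.2f}")
```

The point is just that the discount comes entirely from the existence probability; being located in the future does not, by itself, reduce the conditional weight.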

Short, vaguely related thought-experiments/scenarios:

  • I do something to benefit a yet-to-be-born child, but then the mother has a miscarriage and that child never comes into being.
  • I invest money in a 529 plan[1] for my child, but when my child is 18 he/she decides not to go to college and instead to work.
  • You promise to pay me $X in ten years, and I fully and completely trust you... but maybe you will be robbed, or maybe you will die, or maybe something else will occur that prevents you from fulfilling your promise. So I should value this promise at less than $X (assuming we ignore the time value of money).
  • If I'm trying to improve any currently existing national system for a particular country, I should keep in mind that countries don't last forever.
  • I can set up an investment fund to pay out for whatever health problem is most severe in 200 years, but what if medical advances mean there are no health problems left in 200 years?
  • I could focus on a project that will result in a lot of happiness on Earth in 4,000 years, but maybe Earth will be uninhabited then.
  1. ^

    A tax-advantaged financial account in the USA that can only be used for educational expenses.

Comments

I haven't read this post very carefully, but at a glance, you might be interested in the gist of this post: Existential risk pessimism and the time of perils (note: see also the top comment, which I think makes really excellent points). 

Here's ChatGPT's summary of the post (after I cut a bunch out of it): 

  1. Many effective altruists (EAs) believe that existential risk (the risk of human extinction) is currently high and that efforts to mitigate this risk have extremely high expected value.
  2. However, the relationship between these two beliefs is not straightforward. Using a series of models, it is shown that across a range of assumptions, the belief in high existential risk (Existential Risk Pessimism) tends to hinder, rather than support, the belief in the high expected value of risk mitigation efforts (the Astronomical Value Thesis).
  3. The most plausible way to combine Existential Risk Pessimism with the Astronomical Value Thesis is through the Time of Perils Hypothesis, which posits that we are currently in a period of high risk, but this risk will decrease significantly in the future.
  4. For the Time of Perils Hypothesis to support the Astronomical Value Thesis, it must involve a relatively short period of high risk and a very low level of risk thereafter.
  5. Arguments for the Time of Perils Hypothesis that do not involve the development of artificial intelligence (AI) are not sufficient to justify the necessary form of the hypothesis.
  6. It is suggested that the most likely way to ground a strong enough Time of Perils Hypothesis is through the development of superintelligent AI, which could radically and permanently lower the level of existential risk.
  7. This has implications for the prioritization of existential risk mitigation as a cause area within effective altruism.

Here's its lightly edited summary of (part of) the comment I linked:

  1. [Replaced with a direct quote] "I think this model is kind of misleading, and that the original astronomical waste argument is still strong. It seems to me that a ton of the work in this model is being done by the assumption of constant risk, even in post-peril worlds. I think this is pretty strange."
  2. The model makes the assumption of constant risk, which may be unrealistic.
  3. The probability of existential risk may be inversely proportional to the current estimated value at stake, if it is assumed that civilization acts as a value maximizer and is able to effectively reduce risk.
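
To illustrate why the constant-risk assumption does so much work, here is a minimal toy sketch of a constant-risk model; this is my own illustration under stated assumptions, not the linked post's actual model:

```python
# Toy model: a constant probability r of extinction per century, and a fixed
# value of 1 "century of flourishing" for each century humanity survives.
# Expected future value is then sum over t >= 1 of (1 - r)^t = (1 - r) / r.

def expected_future_value(r: float) -> float:
    """Expected number of future centuries survived, given constant per-century risk r."""
    return (1.0 - r) / r

for r in (0.20, 0.10, 0.01, 0.001):
    print(f"per-century risk {r:>5}: ~{expected_future_value(r):,.0f} expected future centuries")
```

In this toy setup, the more pessimistic you are (the larger r is), the less expected future value there is to protect; and if r stays constant even after the current period of peril, reducing risk in one century buys only a modest gain. That is roughly the role of the constant-risk assumption that the comment pushes back on.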

Yes, you should absolutely discount future people in proportion to their probability of not existing. This is still totally fair to those future people because, to the extent that they exist, you treat them as just as important as present people.

Interesting post. 

I have three points/thoughts in response:

1) Could it be useful to distinguish between "causal uncertainty" and "non-causal uncertainty" about who (and how many) will exist?

Causal uncertainty would be uncertainty resulting from the fact that you as a decision maker have not yet decided what to do, yet where your actions will impact who will exist--a strange concept to wrap my head around. Non-causal uncertainty would be uncertainty (about who will exist) that stems from uncertainty about how other forces will play out that are largely independent of your actions. 

Getting to your post, I can see why one might discount based on non-causal uncertainty (see next point for more on this), but discounting based on causal uncertainty seems rather more bizarre and almost makes my head explode (though see this paper).  

2) You claim in your first sentence that discounting based on space and time should be treated similarly to each other, and in particular that discounting based on either should be avoided. Thus it appears you claim that, absent uncertainty, we should treat the present and future similarly [if that last part didn't quite follow, see point 3 below]. If so, one can ask: should we also treat uncertainty about who will eventually come into existence similarly to how we treat uncertainty about who currently exists? For an example of the latter, suppose there are an uncertain number of people trapped in a well: either 2 or 10 with 50-50 odds, and we can take costly actions to save them. I think we would weight the possible 10 people at only 50% (and similarly the possible 2 people), so in that sense I think we would and should discount based on uncertainty about who currently exists. If so, and if we answer yes to the question above, we should also discount future people based on non-causal uncertainty.
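
A minimal sketch of the expected-value weighting in the well example (the 50-50 odds and headcounts come from the scenario above; everything else is illustrative):

```python
# Weight each possible headcount by its probability, per the 50-50 scenario.
scenarios = [(0.5, 2), (0.5, 10)]  # (probability, number of people in the well)

expected_people = sum(p * n for p, n in scenarios)
print(expected_people)  # 0.5 * 2 + 0.5 * 10 = 6.0
```

Sizing the costly rescue effort against 6 expected people, rather than a guaranteed 10, already discounts currently existing people for uncertainty about how many of them exist, which is the parallel the comment draws to non-causal uncertainty about future people.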

3) Another possibility is to discount not based on time per se (which you reject) but rather on current existence, so that future people are discounted or ignored until they exist, at which point they get full value. A potential difficulty with this approach is that you could be sure that 1 billion people are going to be born in a barren desert next year, and you would then have no (or a discounted) reason to bring food to that desert until they were born, at which point you would suddenly have a humanitarian crisis on your hands which you quite foreseeably failed to prepare for. [Admittedly, people come into existence through a gradual process (e.g. 9 months), so it wouldn't be quite a split-second change of priorities about whether to bring food, which might attenuate the force of this objection a bit.]
