
Greg Lewis opens his thought-provoking post Beware surprising and suspicious convergence with the following statements:

Imagine this:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

Generally, what is best for one thing is usually not the best for something else, and thus Oliver’s claim that donations to opera are best for the arts and human welfare is surprising. We may suspect bias: that Oliver’s claim that the Opera is best for the human welfare is primarily motivated by his enthusiasm for opera and desire to find reasons in favour, rather than a cooler, more objective search for what is really best for human welfare.

I think this is a valid and important point. Indeed, it's the main reason the contents of the present post probably shouldn't change your beliefs very much.

But if I were to speculate about a surprising and suspicious convergence between what's best for two quite different goals, it might go a little something like this...

Background

Moral circle expansion (MCE) essentially refers to influencing people to extend moral concern to additional types of entities, such as nonhuman animals. This is plausibly among the most valuable categories of interventions from a (near-term) animal welfare perspective. Some (e.g., Reese) have argued that MCE is also among the most valuable categories of interventions from a longtermist perspective, and perhaps more valuable than extinction risk reduction.

In response, some have argued that this is a claim of surprising and suspicious convergence. Some further argue that this is made especially suspicious by the fact that some of the claimants were already interested in MCE or (near-term) animal welfare before they learned of or became interested in longtermism.[1]

Personally, I see merit both in those skeptical arguments and in further work on MCE.

But here I'd like to speculate about some fresh convergences: some surprising and suspicious arguments for why, if what you really care about is "moral circle expansion", you might want to do work that looks like "extinction risk reduction", or vice versa. For example, let's say you're mostly concerned about the quality of the long-term future (rather than whether humanity survives to experience that future), and you see the size of our moral circles as key to that. If so, the first two of these arguments might push in favour of you working "directly" on extinction risk reduction for the sake of its "indirect" MCE benefits.[2]

I think that these arguments should play a smaller role in decision-making than various other considerations (e.g., population ethics, the likely quality and size of the future, personal fit; see also Crucial questions for longtermists). But I think that these arguments may deserve some attention.

(Suspicious) arguments for working on extinction risk reduction

Argument 1: Work on extinction risks is a concrete project primarily premised on, and making salient, the moral value of future generations.[3] The general public typically think and care relatively little about future generations. Additionally, people discussing extinction risk reduction often highlight the importance of ensuring the existence and thriving of not only future humans, but also whatever sentient beings we end up as (e.g., digital minds, a species we evolve into). It seems plausible (though speculative) that this expands people’s moral circles to include the beings focused on in these discussions (future humans, digital minds, etc.). It may also expand people’s moral circles more generally.

At least at first glance, this argument seems similarly plausible to the argument that it makes sense to work on present-day animal welfare in order to secure broader, long-term moral circle expansion (e.g., to include digital minds). And that argument is common among proponents of MCE (e.g., the Sentience Institute; see also).[4]

This also seems similar to the occasionally made claim that work on, or concern for, climate change has increased thought about and concern for future generations more generally (see e.g. Lewis).

Argument 2: Many extinction risk reduction activities could also happen to reduce the chance that humanity (or its descendants) “locks in” a set of values or goals before we undertake some amount of something like a “long reflection”. That could mean that there’s more time in which moral circles can “naturally” expand, or in which people can actively push for MCE. That could in turn mean that, in the long-run, our moral circles will end up closer to the appropriate size. (See also.)

An example of an extinction risk reduction activity that might fit this bill is promoting cautious and safe AI development.

This is quite similar to standard arguments that existential or extinction risk reduction is robustly valuable because it could help us keep our options open and allow us to act on whatever we later decide or realise is valuable. But this speculative argument is not quite identical to those arguments; it adds something on top of them. One reason is that there could be cases of value/goal "lock-in" that are either not bad enough or not irreversible enough to count as existential catastrophes, but which still leave the future worse in expectation. Many extinction risk reduction efforts also happen to reduce the risk of such cases, providing more time for MCE.

(Suspicious) arguments for working on MCE

Argument A: MCE could expand concern to future generations, digital minds, future nonhuman animals on the Earth or other planets (see also), etc. This could all increase the apparent stakes involved in extinction risk reduction, because it could make people realise that we’re able to create even more value than they thought (e.g., because they realise that the future could be full of huge numbers of beings who matter). As a result, this could increase the attention and resources devoted towards extinction risk reduction.

This overlaps considerably with the idea that promoting longtermism is a good “indirect” or “meta” strategy for reducing extinction risk. The key distinction is that promoting longtermism may tend to expand concern only to future generations of humans, rather than also to other relevant groups (e.g., digital minds). (That said, there's no reason longtermism has to be human-centric.)

Argument B: Let's assume that the moral circles of most people (including various "key people", such as AI designers) are currently "smaller" than they "should" be. If so, MCE might increase the expected value of the future conditional on us avoiding extinction. This is because it may reduce the chances of other types of existential catastrophes (such as unrecoverable dystopias), or of futures that are suboptimal in smaller or theoretically reversible ways.

If MCE increases the expected value of the future conditional on us avoiding extinction, then MCE might serve as a “complementary good” to extinction risk reduction; it might make each unit of extinction risk reduction more valuable, and increase demand for extinction risk reduction. This could perhaps increase the attention and resources devoted towards extinction risk reduction.
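To make the "complementary good" claim a bit more concrete, here is a minimal sketch in stylised terms (my own illustration; the variables p and V are simplifying assumptions, not something drawn from the sources cited above). Let p be the probability of extinction and V the expected value of the future conditional on avoiding extinction. Then, roughly:

$$\mathbb{E}[\text{value of the future}] = (1 - p)\,V, \qquad \text{value of reducing } p \text{ by } \Delta p = \Delta p \cdot V$$

On this toy model, anything that raises V (as MCE might, if Argument B is right) proportionally raises the value of each unit of extinction risk reduction Δp, which is one way of cashing out the complementarity.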

Personal conclusion

Personally, this updates me slightly towards further valuing both extinction risk reduction and MCE. This is because it weakly suggests additional benefits of both of those categories of interventions. This effect is slightly stronger for extinction risk reduction, as Argument 1 seems to me slightly less speculative and suspicious than the other three arguments I gave.

But these are small updates, because:

  • various other considerations (e.g., population ethics) seem more important
  • I suspect I could come up with arguments of similar style and strength for various other categories of interventions, if I made an active effort to do so.

In any case, I currently favour existential risk reduction over either extinction risk reduction or MCE.

Some things this post didn’t cover

  • Various other considerations that could inform choices between categories of longtermist interventions
  • Which specific extinction risk reduction interventions are best, both in general and in relation to their indirect MCE benefits?
  • Which specific MCE interventions are best, both in general and in relation to their indirect extinction risk reduction benefits?
    • E.g., perhaps explicit advocacy of longtermism or consideration of future generations benefits extinction risk reduction more than corporate campaigns to improve animal welfare (see also).

For thoughts and links relevant to those questions, see Crucial questions for longtermists.

I’m grateful to Justin Shovelain for comments and suggestions on a draft of this post. This does not imply his endorsement of all of this post’s claims.

This post does not necessarily represent the views of any of my employers.


  1. I’m fairly confident I’ve seen or heard these sorts of arguments several times, though I can’t recall where.

    In Lewis’ post, he makes related points (though without referring explicitly to MCE). For example, he writes:

    In sketch, one first points to some benefits the prior commitment has by the lights of the new consideration (e.g. promoting animal welfare promotes antispeciesism, which is likely to make the far future trajectory go better), and second remarks about how speculative searching directly on the new consideration is (e.g. it is very hard to work out what we can do now which will benefit the far future).(6)

    That the argument tends to end here is suggestive of motivated stopping. For although the object level benefits of (say) global poverty are not speculative, their putative flow-through benefits on the far future are speculative.

    And Jacy Reese makes related points when discussing reasons why certain people may be biased towards MCE.

    I don’t actually know what proportion of the people claiming MCE should be a top priority from a longtermist perspective already thought MCE (or near-term animal welfare) should be a top altruistic priority before they learned of or became interested in longtermism. ↩︎

  2. Note that I’m talking about extinction risk reduction, not existential risk reduction. This is partly because it’s easier to distinguish MCE work from extinction risk reduction work than it is to distinguish it from existential risk reduction work. This, in turn, is due to the fact that some existential catastrophes could follow fairly directly from the failure of humanity’s moral circle (or particular people’s moral circles) to encompass entities that really should’ve been encompassed (see also Reese). ↩︎

  3. That said, extinction risk reduction, like existential risk reduction, can also be motivated by consideration of the past, the present, virtue, and cosmic significance (The Precipice, Chapter 2). See also the person-affecting value of existential risk reduction. ↩︎

  4. Just in case this isn’t clear, this genuinely isn’t meant as a veiled critique of proponents of MCE. And I don’t see the argument I’m making as a compelling reason why people who care about MCE should work on extinction risk reduction, just as one possible consideration. ↩︎

Comments (4)

Thank you for the really cool and interesting post! I think it deserves much more attention, and I hope my comment gives it some renewed visibility.

I want to comment on your recollection of people's reactions to the claim that MCE is one of the best interventions within longtermism. I think the phrase "before they (MCE and animal advocates) learned of and became interested in longtermism" is either unclear or unfair.

If "longtermism" here means EA/philosophical/Toby Ord-style longtermism, then the claim that MCE and animal advocates "learned it later" is almost universally true. But it is also unfair, because one doesn't have to learn a specific type of longtermism in order to think that one's actions should mainly be judged by their long-term effects. And as someone who has worked in the EA-adjacent animal movement for 3+ years, I have actually come across multiple EA and non-EA animal advocates and groups whose work and philosophy are decidedly aimed at the "long term" benefit of non-human animals (though they don't specify what "long term" means in the way the average EA longtermist does), and some of them hadn't even heard the word "longtermism" until I asked about it.*

If "longtermism" here simply means doing and thinking about things for the sake of making the far future better, then I think it is fair to say that at least some MCE/animal advocates have been "longtermists in the rough sense" all along. Some EA longtermists might object here, possibly pointing out that the lack of discussion of the physical and technological possibilities, depth, scale, or modes of existence of the future means the discussion isn't really about longtermism. But notice that an MCE/animal advocate can still legitimately claim that they had always thought about the very long term, even if they had never thought about how long, deep, strange, or full of potential the future could maximally be.

Notice that the above is also true for MCE advocates, and they probably attract even less suspicion of being "suddenly longtermist".

To conclude, I am very skeptical of the argument that, because animal/MCE advocates only later learned of and became interested in longtermism, there is a suspicious convergence in their attempt to argue that MCE is among the best (or maybe the best) interventions within longtermism.

 

*For example, at Mercy For Animals, my previous employer, we did an exercise of trying to imagine what the world would look like in 30 years as a result of the animal movement's current work, and in that exercise we even tried to think about what more could be done. 30 years certainly isn't "long" for EA longtermists, and maybe isn't even mid-term for some. But it still shows that animal advocates are not only interested in alleviating suffering that is happening now.

Glad to hear you found the post interesting!

As for your arguments, I find them interesting but still feel unsure whether I'd land on your conclusions from them. I think for me the key point is maybe something like this: 

30 years certainly isn't "long" for EA longtermists, and maybe isn't even mid-term for some.

If someone thought animal advocacy or MCE was best for the coming decades, but hadn't thought about the world more than 100 years out in any serious way[1], and then later came across arguments for focusing on making the world more than 100 years out better, and said "Yeah, I still think animal advocacy and MCE is best for that!", then that would indeed be suspicious convergence.

Analogously, I think many global health and development people focused on the coming decades, not just the coming few years, and if they then embraced longtermism but still thought global health and development interventions were the top longtermist priority, I'd call that suspicious convergence.

But a key point is that this isn't an extremely strong counterargument anyway. Something can be a suspicious convergence and yet still happen to be correct. And there could be cases where you look further and discover that there's a systematic reason why a subset of the near-ish-term objectives people already cared about are actually also really key for the long-term future, such that the suspiciousness of the convergence goes away.

Another key point is that I don't have any systematic data on how many people who currently say animal advocacy or MCE stuff should be a top priority for longtermists already supported animal advocacy or MCE stuff beforehand. So maybe there isn't even much suspicious convergence anyway.

But I do think that something like that Mercy For Animals case wouldn't make the convergence non-suspicious, and I do think that suspicion would be a weak argument against the person's conclusion.

[1] We could roughly operationalise this as "at least spending 30 minutes in one go at some point really thinking, reading, or talking about how to make the world more than 100 years from now better". I don't require that people engaged with e.g. EA arguments specifically.

I think the last useful thing in this thread might be your last reply above. But I am going to share my final thoughts anyway.

I am still not convinced that the suspicion that animal/MCE advocates "suddenly embraced longtermism" (in the loose sense, not the EA/philosophical/Toby Ord-style sense) is justified, even if the animal advocates I mentioned (like the ones at MFA) haven't thought explicitly about the future beyond 100 years. They might have thought that they roughly had, perhaps under the tacit assumption that whatever is achieved in a few decades is going to remain the norm for a very long time.

So, using my MFA example again: I believe the exercise used a 30-year horizon not because they (we?) wanted to think only 30 years ahead, but because we roughly thought it might be the most realistic timeline for factory farming to disappear, and maybe also because they couldn't tolerate the thought that they and the animals would have to wait longer than 30 years. If most of the team members in that exercise had thought that 100, 200, or 1,000 years was the realistic timeline instead of 30, the exercise could easily have been done for 1,000 years, which would "magically" (and incorrectly) dispel the suspicion of "suddenly embracing longtermism". But whether it's 30 years or 1,000 years, the argument is the same, because the advocates are thinking the same thing: that the terminal success will stay with the world for a very long time.

Actually everything said before can be summarised with this simple claim: that some (many?) animal advocates tend to tacitly think that they are going to have very long term or even eternal impacts. For example, if there isn't a movement to eliminate factory farming, it will be there forever.

I think I actually have an alternative accusation toward average farmed animal advocates, rather than "suddenly embracing longtermism": I think they suffer from overconfidence about the persistence and level of goodness of their perceived terminal success, which in turn might be due to a lack of imagination, lack of thinking about counterfactual worlds, lack of knowledge about technologies/history, or reluctance to think about the possibility of bad things continuing for much longer.

P.S. An alternative way of thinking about my counter to your counterargument: if whether someone's thinking counts as long-term thinking has to fit some pre-given definition, then it is possible for someone who seriously thinks a billion years ahead to accuse someone who had previously thought only a million years ahead of "suddenly embracing longtermism".

But, in terms of most of the picture, I think we are already quite on the same page, probably just not on the same sentence. I probably spent too much time on something trivial.

some (many?) animal advocates tend to tacitly think that they are going to have very long term or even eternal impacts. For example, if there isn't a movement to eliminate factory farming, it will be there forever.

I think I actually have an alternative accusation toward average farmed animal advocates, rather than "suddenly embracing longtermism": I think they suffer from overconfidence about the persistence and level of goodness of their perceived terminal success, which in turn might be due to a lack of imagination, lack of thinking about counterfactual worlds, lack of knowledge about technologies/history, or reluctance to think about the possibility of bad things continuing for much longer.

This is quite an interesting observation/claim. I guess I've observed something kind-of similar with many non-EA people interested in reducing nuclear risks:

  • It seems they often do frame their work around reducing risks of extinction or permanent collapse of civilization
  • But they usually don't say much about precisely why this would be bad, and in particular how this cuts off all the possible value humanity could experience/create in future
  • But really, the way they seem to differ from EA longtermists who are interested in reducing nuclear risk isn't the above point, but rather that they seem too uncritical and overconfident in assuming that any nuclear exchange would cause extinction and that whatever interventions they're advocating for would substantially reduce the risk

So this all seems to tie into a more abstract, broad question about the extent to which the EA community's distinctiveness comes from its moral views (or its strong commitment to actually acting on them) vs its epistemic norms, empirical views, etc. 

Though the two factors obviously interrelate in many ways. For example, if one cares about the whole long-term future and is genuinely very committed to actually making a difference to that (rather than just doing things that feel virtuous in relation to that goal), that could create strong incentives to actually form accurate beliefs, not jump to conclusions, recognise reasons why some problem might not be an extremely huge deal (since those reasons could push in favour of working on another problem instead), etc.
