
Summary: Much EA discussion assumes an "expanding moral circle" with certain properties. Echoing some observations of Gwern's, I claim this notion needs to be reconceptualized significantly. The upshot is that moral circle expansion (MCE) is not simply a matter of getting people to care more about minds that are "far away" or "different" from them; rather, it involves challenging several different dimensions along which people form moral beliefs.

Singer's "Expanding Circle"

From a certain perspective, one could view effective altruism as a social movement aimed at getting society to care more about groups that it currently undervalues morally. These groups include:

  1. People living far away spatially
  2. People living far away temporally
  3. Animals used by humans
  4. Wild animals
  5. Digital minds + certain advanced reinforcement learning (RL) algorithms

There are some complications with this perspective, but I'm not the first to make this point. A moral viewpoint that includes some of (1-5) is a kind of generalized cosmopolitanism, so it's unsurprising that most EAs are cosmopolitan. It's also unsurprising that we often talk about the expanding moral circle: the space of beings/"things" that we (as a society) care about. When Peter Singer coined this term, he pointed out how (much of) society has already come to include the following in their moral circle:

a) People from a different tribe

b) People from a different nation

c) People from a different religion

d) People from a different race/ethnicity

(Note: I haven't actually read much of anything by Singer; this is just my vicarious understanding of what he said about the expanding circle.)

The point I want to make in this post is that the moral circle does not necessarily expand uniformly. Indeed, as Gwern has pointed out, it has sometimes narrowed. More on this below, but first we need to clarify the concept. When I've thought about the moral circle naively, I've imagined a coordinate system with

  • "Me" at the origin
  • People I know represented as points scattered around nearby
  • People of my own nationality/ethnicity/ideology as points somewhat further out
  • The other groups (2-5) being sets of points somewhere further out

where the "distance from the origin" of a mind simply represents 'how different' it is compared to mine, and consequently how much difficult it is to empathize/sympathize with it. And under this view, I've conceived of moral circle expansion (MCE) as merely the normative goal of growing a circle centered around the origin much like inflating a balloon.

Some Technical Caveats

First of all, a moral circle does not actually make a line in the sand: the average non-EA communitarian cares somewhat about the well-being of people halfway across the world, they just care about it less than the well-being of people around them. The question is their willingness to pay for groups (1-5), and so a moral circle is more of a fuzzy gradient than an "in or out" binary classification. For simplicity I'll sometimes talk of the "boundary" of a moral circle, with the understanding that readers can fuzzify it in their minds if necessary.

It is also worth clarifying further what I actually mean by a "moral circle". Consider the following example:

In the decades after the American Revolutionary War, various economic factors pushed the northern United States to gradually phase out slavery, while the southern United States saw incentives in the opposite direction. Together with some important social activism, the North came to view slavery as morally wrong, while the South articulated its own justifications of the practice.

The question is, did the moral circle of Northerners expand in the early 19th century?

I would answer yes. Not because they stopped slavery per se, but because their internal moral compass flipped. In principle, they could have ended their own involvement in slavery while continuing to defend their Southern neighbors' actions just as stridently as the South itself, in which case I would say that their moral circle didn't expand at all.

So, as a matter of defining a "moral circle": we mean the set of beings that a person or society regards as falling within their realm of moral consideration.

In practice, this distinction often doesn't matter, since a society's ethical views depend on its actions (and of course, vice versa). Furthermore, it is often hard to empirically gauge someone's "internal moral compass" except by looking at their revealed preferences. Still, when having abstract discussions about moral circles, I think this distinction prevents some potential confusion.

Note that under this criterion, the advent of factory farming probably did not cause moral circle contraction in the West, because despite its direct harms, farmed animals were already well outside of society's moral circle.

Gwern's "Shifting Circle"

In a lengthy article, more recently summarized on this Forum, Gwern points out how some things which were firmly within our circle of concern are now outside it:

i) Gods

ii) Our ancestors (e.g. by not following their wills, forgetting their names, no longer performing rituals for them, or even digging them up)

iii) Family/community/"tribe"/country

iv) Certain sacred animals

v) Human embryos

Now, one can plausibly object to these:

  • (i) and (ii) arguably just "disappeared from the map" entirely, i.e. many of us concluded that gods don't exist and neither do deceased ancestors. As Gwern observed, ancient cultures for the most part literally believed in these entities. But over (say) the past century, it would be more accurate to say that people stopped caring about the concept of gods/ancestors: whereas those ideas used to be mostly inside our moral circle, they have since drifted out. If it sounds kooky/incoherent to have a pure idea in your moral circle, such as "the will of a decomposing ancestor", consider that some EAs talk about acausal trade, which is an important feature of Functional Decision Theory.
  • (iv) is pretty culture-dependent, and less relevant for Western cultures. But certainly we would see a moral circle contracting along this dimension if we looked at many non-Western cultures over time (e.g. India).
  • As Gwern points out, the moral consideration of (v) has varied a lot even if we just look at the history of Western morals. Still, I think it's fair to say that the average EA moral circle tends to exclude these more than the historical norm.

For EAs who care about (1-5) but not (i-v), it seems clear to me that your area of concern is not a circle, but something drawn through thing-space in a pretty funky way, with nooks and nesses and fjords and all. But probably you don't see it that way, because your map of moral thing-space is organized in such a way that if you draw a circle with the appropriate diameter then--tada!--you've included all sentient beings (present and future) but nothing else.

The reality is, the average person (past or present) doesn't seem to have moral thing-space organized that way at all. If they did, their values would probably be pretty EA-aligned in the first place.

The Many Facets of MCE

I was thinking about this stuff in the context of MCE for groups (1-5): each of these groups is "weird" or "far" from our own minds, but they are far in different and often incomparable ways. For EAs, each of these 5 undervalued groups lends itself to a particular problem of the form "How do we get society to care more about X?". And each of these problems has its own unique challenges:

1) To get the average person to care more about people living far away, obstacles include nationalistic and communitarian sentiments

2) To get the average person to care more about lives from the far future, obstacles include various cognitive biases such as hyperbolic discounting (sketched briefly just after this list)

3) To get the average person to care more about animals used by humans, an obstacle is the former's current diet and the resulting cognitive dissonance

4) To get the average person to care more about wild animals, an obstacle is the prevalence of the appeal to nature fallacy

5) To get the average person to care more about digital minds/RL agents, an obstacle is the cold, non-life-like feel of silicon-based algorithms
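(For concreteness: hyperbolic discounting refers to weighting a payoff of size $A$ at delay $t$ roughly as

$$V_{\text{hyperbolic}} = \frac{A}{1 + kt},$$

rather than the exponential $V_{\text{exponential}} = A e^{-kt}$ assumed in standard economic models, where $k$ is a discount-rate parameter. The hyperbolic form discounts near-term delays especially steeply, producing the present bias that makes far-future lives so easy to neglect.)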

One consequence of this is that it is not at all necessary to solve Problem 3 before tackling Problem 4, or vice versa. In general, none of these problems is necessarily harder or easier than the others; each has its own unique obstacles.

That being said, there are clearly similarities between these problems, and there at least seems to be something real about the "empathizing/sympathizing with weird/distant minds" property that is common to all of them. But I feel like we often overstate how significant it is: many (most?) antispeciesists and longtermists who think about this subject seem to have come to their views less through pure empathy than through adopting abstract moral principles which imply that (1-5) are all morally valuable. The average person does not tend to think in these abstract terms, and folk morality has a considerably different flavor than the moral philosophy done by educated people.

To make a rather facetious comparison, suppose we took some concept that arises when educated people do physics--such as the work-energy principle--and tried to find it in folk physics. We would, to some extent, find laymen reasoning about physics in such a way that approximates the work-energy principle, but it would be fallacious to assume that's how they are actually thinking internally.
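(For reference, the work-energy principle states that the net work done on an object equals its change in kinetic energy,

$$W_{\text{net}} = \Delta KE = \tfrac{1}{2}mv_f^2 - \tfrac{1}{2}mv_i^2,$$

a relation that trained physicists apply explicitly but that laypeople at best track implicitly.)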

I would qualify that there is much more EA reasoning to be found in folk morality than there is actual physics to be found in folk physics, but I think the basic error is only smaller in degree, and we should be wary of mind-projecting our own moral ontology when we study the moral psychology of society at large. When most people reason about the moral consideration of "others", there are quite a few different factors they seem to consider (consciously or unconsciously):

  • Diet: As mentioned, people ascribe less moral value to pigs than to warthogs
  • Cuteness: Kittens are valued more than bobcats
  • Pests vs. helpers: Bees are valued more than locusts
  • Intelligence: This is pretty obvious, so much so that Westerners actually devalue the intelligence of an animal when they find out it's used for food
  • Size: All else equal, a larger animal tends to be valued higher than a smaller animal
  • Anthropocentrism: Apparently people tend to care more about another random human than a random alien of equal or greater intelligence (though admittedly it's hard to control for all the other confounders)

For each of these attributes, we can try to push the moral circle outward in that dimension. I call this domain-specific MCE.

For instance, we could campaign against appeal-to-nature reasoning. If this is widely successful, then we would have gone a long way in the realm of wild animal suffering, but we would have done little for animals used by humans or digital minds.

But we can also consider the hypothetical effect of clean meat replacing animal products. This also seems domain-specific, but I could plausibly see it having a substantial spillover effect for wild-animal suffering, by letting people reason about animal suffering in a less biased way.

Similarly, we can also imagine some form of advocacy that challenges anthropocentrism directly, by which I mean challenging the specific notion that our species is special. I would guess that this would have more generalization potential across (3-5) and their subcategories. It's not at all clear if we can find some intervention along these lines that's very tractable, but it's worth pointing out that there is a specific tradeoff to be made here.

There is unlikely to be a silver-bullet intervention that expands moral circles along the exact dimensions we have in mind, short of "get people to do abstract moral reasoning"; and I suspect that the cognitive strain needed to do EA-style moral calculus is on the same level of difficulty as actual calculus.

To us, the average person's "moral boundary" looks very odd and arbitrary. But to them, with their map of moral thing-space, the antispeciesist, longtermist moral circle could look just as oddly shaped.

Open Questions

Question 1: How much "spillover" should we expect for the usual MCE strategies in (3-5)?

I think it could make sense to put MCE strategies on a spectrum from "completely domain-specific" to "completely general". In the last section, we saw how:

  • Combating the appeal-to-nature fallacy would make significant progress for group (4), but do very little for (3) or (5)
  • Normalizing clean meat, while aimed primarily at (3), could also help substantially with (4)
  • Challenging anthropocentrism could help all of (3-5)
  • Popularizing abstract moral reasoning could help all of (1-5)

Hence these (hypothetical) interventions are arranged in increasing order of how much potential they have to expand moral circles in a general sense. There are many other MCE strategies employed by contemporary animal advocates, as well as many other social movements of various stripes, past and present. I'd be interested to see how other examples would look on this spectrum, or if this spectrum turns out to not be a useful way of thinking about things.


Question 2: Is any of this at all relevant for (1-2), namely MCE for the other two EA cause areas?

Probably not, because they have less need for mass outreach as compared with sentience advocacy. For most existential risks, especially AI safety, the bottleneck seems to be getting more technical researchers actively involved, rather than insufficient valuation of future lives (furthermore, there are serious downside risks to many forms of outreach). For global poverty/health, it would be helpful to get more donors to AMF, SCI, GiveDirectly, etc., but this is best done via specific outreach to educated people who often already have cosmopolitan leanings. Still, I am curious what is known about MCE directed across spatial and temporal distances.


Comments

It seems to me like when most EAs are talking about an expanding circle, what we are talking about is either an expanding circle of moral concern towards 1) all sentient beings or 2) equal consideration of interests for all entities (with the background understanding that only sentient beings have interests).

Given this definition of what it means to expand the moral circle, I don't think Gwern's talk of a narrowing moral circle is relevant. For the list of entities that Gwern has described us as having lost moral concern for, we did not lose moral concern for them for reasons having to do with their sentience. Even when these entities are plausibly sentient (such as with sacred animals) it seems like people's moral concern for them is primarily based on other factors. Therefore they should not count as data points in the trend of how our moral circle is or is not expanding.

Also, quite plausibly, a big reason why we have lost concern for these entities is an increasingly scientifically and metaphysically accurate view of the world, which causes us to no longer regard these entities as special, as having interests, or even as existing at all.

Many of the processes we pejoratively call "cognitive biases" are actually true: either in the sense of being useful heuristics for everyday circumstances, or in the sense of just being generally true (i.e. the prototypical grandma being right and the PhD scientist being wrong).

For example, hyperbolic discounting is completely rational in the face of uncertain risks. This is clearly the case when planning for the far future. While one might care about future beings in an abstract sense, it doesn't make sense to include their well-being in one's decision-making once it has been discounted to approximately zero. As an extreme example: I fully agree that humans outside my light-cone have the same moral worth as those inside my light-cone, but since I can never affect those outside my light-cone (assuming they exist, which is not something we will ever know), I don't factor them into moral decisions.

I don't think people's lack of concern for digital minds is just because they are digital. Watching the first episode of Black Mirror, it's hard not to feel sympathy for the simulated people. It would probably be a very unsuccessful show if the audience had no emotional investment in what happens to the simulated people.

Some objections to target when trying to increase moral concern for digital minds might be:

  • they don't exist, and feel much more hypothetical than "future generations"
  • it feels unclear what could be done to help them (them specifically, as opposed to helping future generations in general)
  • it feels hard to determine whether a digital mind (that is not just a human or animal consciousness upload) is sentient and what they would feel as positive or negative valence

As an overall trend, people act in their self-interest. At best people act in their long-term self-interest. So if you want to convince people of something, appeal to their self-interest. This may need to be an indirect appeal.


On the subject of recognizing the moral worth of animals, Subhuman: The Moral Psychology of Human Attitudes to Animals by TJ Kasperbauer offers a good summary of the issues. In particular, he argues that humans frequently use psychological processes to distance themselves from animals that are different from those they apply to other humans, though there are cases of overlap too.

Fwiw, I didn't find anything particularly actionable in the book. But I do think he argues well that different approaches to motivating people to morally care about animals (namely, welfarism and abolitionism) are both premised on moral psychological beliefs that we don't have very much empirical evidence to help adjudicate.
