Jun 24 2014 · 4 min read

Crossposted from the Global Priorities Project

This post has two distinct parts. The first explores the meanings that have been attached to the term ‘cause’, and suggests my preferred usage. The second makes use of these distinctions to clarify the claims I made in a recent post on the long-term effects of animal welfare improvements.

On the meaning of ‘cause’

There are at least two distinct concepts which could reasonably be labelled a ‘cause’:
  1. An intervention area, i.e. a cluster of interventions which are related and share some characteristics. It is often the case that improving our understanding of some intervention in this area will improve our understanding of the whole area. We can view different-sized clusters as broader or narrower causes in this sense. GiveWell has promoted this meaning. Examples might include: interventions to improve health in developing countries; interventions giving out leaflets to change behaviour.
  2. A goal, something we might devote resources towards optimising. Some causes in this sense might be useful instrumental sub-goals for other causes. For example, “minimise existential risk” may be a useful instrumental goal for the cause “make the long-term future flourish”. When 80,000 Hours discussed reasons to select a cause, they didn’t explicitly use this meaning, but many of their arguments relate to it. A cause of this type may be very close to one of the first type, but defined by its goal rather than its methods: for example, maximising the number of quality-adjusted life-years lived in developing countries. Similarly, one could think of a cause as a problem one can work towards solving.
These two characteristics often appear together, so we don’t always need to distinguish. But they can come apart: we can have a goal without a good idea of what intervention area will best support that goal. On the other hand, one intervention area could be worthwhile for multiple different goals, and it may not be apparent what goal an intervention is supposed to be targeting. Below I explain how these concepts can diverge substantially.

Which is the better usage? Or should we be using the word for both meanings? (Indeed there may be other possible meanings, such as defining a cause by its beneficiaries, but I think these are the two most natural.) I am not sure about this and would be interested in comments from others towards finding the most natural community norm. Key questions are whether we need to distinguish the concepts, and if we do, which is more frequently the useful one to think of, and what other names fit them well.

My personal inclination is that when the meanings coincide of course we can use the one word, and that when they come apart it is better to use the second. This is because I think conversations about choosing a cause are generally concerned with the second, and because I think that “intervention area” is a good alternate term for the first meaning, while we lack such good alternatives for the second.

Conclusions about animals

In a recent post I discussed why the long-term effects of animal welfare improvements in themselves are probably small. A question we danced around in the comments is whether this meant that animal welfare was not the best cause. Some felt it did not, because of various plausible routes to impact from animal welfare interventions. I was unsure because the argument did appear to show this, but the rebuttals were also compelling.

My confusion was stemming, at least in part, from the term ‘cause’ being overloaded.

Now that I see that more clearly I can explain exactly what I am and am not claiming.

In that post, I contrasted human welfare improvements, which have many significant indirect and long-run effects, with animal welfare improvements, which appear not to. That is not to say that interventions which improve animal welfare do not have these large long-run effects, but that the long-run effects of such interventions are enacted via shifts in the views of humans rather than directly via the welfare improvement.

I believe that the appropriate conclusion is that “improve animal welfare” is extremely unlikely to be the best simple proxy for the goal “make the long-term future flourish”. In particular, it is likely dominated by the proxy “increase empathy”. So we can say with confidence that improving animal welfare is not the best cause in the second sense (whereas it may still be a good intervention area). In contrast, we do not have similarly strong reasons to think “improve human welfare” is definitely not the best approach.

Two things I am not claiming:

  • That improving human welfare is a better instrumental sub-goal for improving the long-term future than improving animal welfare.
  • That interventions which improve animal welfare are not among the best available, if they also have other effects.
If you are not persuaded that it’s worth optimising for the long-term rather than the short-term, the argument won’t be convincing. If you are, though, I think you should not adopt animal welfare as a cause in the second sense. I am not arguing against ‘increasing empathy’ as possibly the top goal we can target (although I plan to look more deeply into making comparisons between this and other goals), and it may be that ‘increase vegetarianism’ is a useful way to increase empathy. But we should keep an open mind, and if we adopt ‘increasing empathy’ as a goal we should look for the best ways to do this, whether or not they relate to animal welfare.

Comments

Your essay makes me think of a system where you have three things: a human welfare "bucket," values that control how much flows from human to animal welfare at a given time, and another animal welfare "bucket." And human welfare and values are long-term things, which at any given time feed into animal welfare. And you're saying that expanding the animal welfare bucket is not the best long-term intervention for the ultimate purpose of, say, maximizing the combined human and animal welfare. Given that we assume influencing the far future is possible, I don't see any flaw there.

But do you see practical differences between promoting animal causes in the short term and changing values to prioritize animal welfare?

I haven't spent long enough thinking about it to draw any conclusions with confidence, but prima facie we should expect that if you're optimising for different things you're likely to choose different actions.

One example which at least looks plausible to me: if you take a long-term view, one of the major obstacles to shifting values is cognitive dissonance over the fact that many people enjoy eating meat. Rather than trying to shift values today, it might be better to get excellent meat substitutes or vat-meat and then shift values afterwards, when it will be easier. There's a chain of steps here, and it could involve investing at the start, or saving until you can implement one of the later steps, depending on which you think will need pushing the most: (i) develop technologies and production; (ii) normalise use of meat substitutes in society; (iii) when these are widespread, build support for ending animal cruelty in farms.

It's also possible that building the effective altruism movement is a better route, if it encourages reflection on values in a way which we think will tend to lead to improvements, or lead to good values more likely to spread further.

Owen,

Thanks for the two pieces.

I'd be interested to know: do you think your view would be different if you saw the past few hundred years as a period when one group of animals gained a great deal of power, which they used (1) to make their own lives much better and (2) to subject a much larger number of other animals to a great deal of suffering, mainly in order to be able to eat them?

I'm not making an argument that this is the right way of seeing things, though it doesn't seem crazy to me. I'm also not arguing that, if this view is accurate, it's sensible just to project it into the future (ie improvements in human lives will be accompanied by ever-greater suffering of other animals). I'm just drawing attention to the way that, in what you write, you focus on the ways that human lives have improved recently, and that we might expect this to continue, given the ways human societies work, with things being passed on to future generations (knowledge, habits, organisations, material goods); without paying any attention to how this process has also involved inflicting a great deal of suffering on other animals.

It might be that including other animals in the picture makes one feel more ambivalent about the process of development in human societies which we have benefited so richly from, and more wary about what supporting and accelerating this process will bring (the 'human lives getting better' strand which you seem to value highly for benefits long into the future). Or it might be that the suffering of animals is seen as a separate, contingent fact about a narrow period which doesn't have any lessons for what we should expect in the future. Do you have a sense one way or the other? (Perhaps you can't answer if you don't share the premise that lots of animals have suffered as humans have become more powerful, and that this matters.)

I don't think that this necessarily affects the argument that targeting changes in how humans act now is likely to be more important than targeting changes in how animals live now, since the human changes are more likely to be passed on long into the future in some form, and good changes might well have good effects long into the future. But when you write 'I do think that optimising for long-term animal welfare is not the best place to stop in picking an instrumental goal, because it's quite hard to see how things affect it.', you seem to be saying that it's hard to know what would affect the well-being of non-human animals in the long term, in contrast to things that would affect the well-being of humans in the long term. Having a sense of the mixture of gain and suffering across all animals from recent human development might

(1) draw attention to the importance of asking about the long-run well-being of all animals, human and not

(2) make one wonder about whether the future well-being of human and non-human animals are completely independent (thus, better to focus on humans since one can be more confident in the long-run effects) or if there might be an enduring negative relationship

(3) make it less difficult to see what might affect long-term animal welfare (eg more empathy for non-human animals among humans in the short term)

Great questions. I'm afraid I won't do justice to them properly here, but I'll give a quick answer with my opinions without too much justification.

I think that empowering humans tends to: (i) increase their influence over the world; (ii) improve their values.

Over the past few hundred years, the effect of (i) has significantly outweighed the effect of (ii) on animal welfare (in a way which may or may not be positive after you account for the effects on wild animals). My best guess is that going forwards this will be true for a while longer, but eventually (ii) will come to dominate (humans like to think of themselves as having nice values, and when we're wealthy enough that it's cheap to act on this I expect them to do so). I would like to have a better understanding of this dynamic, though.

My boyfriend Ben told me that this article is better understood after reading your previous article. Could you include a link to the previous article in the text of this one?

Hi Gina,

There is a link in the text, but you may have missed it. Here you go: /ea/6c/human_and_animal_interventions_the_longterm_view/

That's a wonderful idea -- researching how to increase empathy. Just as important, though, is how to actually get people to ACT on their caring feelings. I think that there's a lot of 'empty empathy': people feel bad for others yet don't act on their feelings. I guess this is the field of behavioural economics. Its importance and EFFICIENCY cannot be overstated. One behavioural economist suggested that getting people to stop acting against their own self-interest is by far the most cost-effective global health care intervention (ie. many parents in India let their children die by refusing to give them oral rehydration salts despite being begged to by health care professionals). But there is also far too little empathy in the world, so that also needs to be developed, certainly. Another point to keep in mind is that no virtue has absolute value, it only has relative value compared to competing values. For instance, it would be hard to donate a lot to a charity no matter how much you care about its cause if you are simultaneously trying very hard to keep up with the Joneses.

Great points. :) There was a discussion on Felicifia in 2012 about the value of empathy vs. the value of feeling moral duty: http://felicifia.org/viewtopic.php?f=7&t=492&start=40#p7120 David Brooks argued that feelings without follow-through aren't very useful. Likewise, it's often said that Buddhist monks have immense empathy, but how often do you see them lobbying for more humane policies or something? Probably by "empathy" what Owen had in mind was more substantive empathy, like a culture of feeling and acting on compassion for powerless creatures.

If the Joneses are donating a bunch to charity, then keeping up with them could be great. :) Things like The Giving Pledge seem promising for this reason, because they suggest to billionaires that if you want higher status, you should donate a lot.

I couldn't agree more. Brian, I am absolutely, 100% positive that the only way to greatly improve society's behaviour (ie. Being veg, donating more, being more of a good person in general) is by altering society's reward/punishment structure towards favouring positive actions. The desire for social acceptance and fear of social consequences is the main driver of human behaviour.

Regarding Buddhists and empathy, meditators generally believe that they are helping others just by meditating – bringing God's light down to earth, yadda, yadda, yadda. For example, Paramahansa Yogananda has said that an adept yogi does more to help the world just by meditating than even the most prolific humanitarian. “Meditation is the highest service” is something I read from some guru, forget who. Also, since spiritual people usually consider spiritual practice the most important thing in the world, even more so than “worldly” problems, they think that by promoting their spiritual path, they're performing the greatest service a person can do. For example, a Buddhist monk may not give money to the poor, but he may perform duties supporting the ashram and thus feel that he's helping others with their spiritual lives.

Another reason spiritual people often don't do activism is because their belief in karma makes them fatalistic as they expect individuals'/the world's fate to play out as dictated by their pre-life karma and thus feel helpless to do anything about it. Believing in karma can be a tricky trap, no doubt. I get past it by telling myself that my altruistic actions are PART OF and not against others' karmic destiny!!!! Oh yeah, and many meditators think that their actions will create karma (if not done with the proper mindset) that will force them to reincarnate, even if they are “good” actions, and thus try not to “do” much. That's why I'm such a lazy bum – just avoiding making karma. No just kidding. :^)

Hi Owen :)

Animal welfare can be about more than promoting empathy. For one thing, it's about promoting empathy for nonhumans, which is a somewhat different thing from promoting empathy wholesale (which usually means being nicer to other people). Secondly, animal welfare as a case study can raise a number of important ethical issues, such as naturalistic fallacies, welfare- vs. rights-based empathy, how far we think sentience extends, how to weigh minds of different complexities, population ethics, and lots more.

Also, animal welfare is quite sticky, which means it could be a good way to draw people in to these issues and get them excited about them.

I agree that, e.g., veg outreach is not the very best way to help animals. I think talking explicitly about things like wild-animal suffering and digital sentients in the future can be better, which is why I focus on those. But veg outreach is probably not vastly worse, and it can be a good donation suggestion for mainstream donors who are weirded out by far-future ideas.

As far as: "I do think that optimising for long-term animal welfare is not the best place to stop in picking an instrumental goal, because it's quite hard to see how things affect it." I don't agree with this, depending how broadly we define "animal." It seems likely to me that most of the sentience of the far future will reside in non-human-like creatures (robots, sentient subroutines, simulated insects, etc.), and most of the far-future-related things I write about are relevant to improving long-term "animal" welfare in that sense.

Thanks for the considered thoughts. :-)

I happen to think that promoting empathy wholesale is likely better than promoting animal welfare, but I guess I haven't presented an argument for that. The conclusion I'd draw is that we should be able to identify some targets which are better by our own lights as instrumental goals than short/medium run animal welfare. Promoting empathy for animals could be such.

I do see instrumental benefits to promoting animal welfare for the accessibility -- though also instrumental harms. I'm not sure how these weigh against each other.

On optimising for long-term animal welfare: yes, changing societal views may have an effect on this, although I guess that the expected size of our influence there may be rather smaller than the expected size of our influence on whether there is a long-term society at all.

I happen to agree that promoting empathy (for animals) is probably better than promoting welfare directly, but a devil's advocate might point out that beliefs often follow actions, and maybe directly changing people's practices toward animals would be a more concrete way to change values.

I think whether there is a long-term society at all is relatively hard to change, except maybe in the case of AI risk. I think our expected influence through values is not obviously smaller and may be larger than our expected influence through whether there is a future, especially for non-mainstream values. This is doubly true if you're a negative utilitarian, since for NUs there aren't feasible ways to decrease the probability of a future ( http://foundational-research.org/publications/how-would-catastrophic-risks-affect-prospects-for-compromise/ ), and doing so isn't nice to other value systems ( http://foundational-research.org/publications/reasons-to-be-nice-to-other-value-systems/ ), so you have to focus on improving the quality of the future. By the same token, it's nicer for non-NUs to focus on improving the quality of the future (which is something NUs can support) than on making the future more likely (which is something NUs oppose).

The question of whether "sub-goal" x is the "simplest" or best "proxy" for our more ultimate goals doesn't seem particularly useful and can be highly misleading, as in the example you chose. You conclude that promoting animal welfare is very probably not the best cause (because promoting empathy probably dominates it as a proxy), whereas we can't say the same for promoting human welfare. But it could still be the case that promoting animal welfare is a better proxy than human welfare for far-future flourishing, even though there's a yet better intermediary in the case of animal welfare. The problem is that a cause can be described in multiple ways, and we can generate multiple conflicting but practically uninformative statements about proxies and causes.

Of course it could be. I even said as much.

The takeaway isn't supposed to be that people should switch from working to promote animal welfare to working to promote human welfare, but that they should switch from working to promote animal welfare to working to promote empathy.

I am glad you are considering far-future effects in terms of proxies like "increase empathy." I think this is a useful method. Also, I commend you for considering far-future impacts in general, as increasing our confidence in effecting them seems to have high potential upside for EA decision-making.

On the specifics of this post, I am unsure exactly what your claim is now - whether it's (i) that a happier non-human farm animal, all else equal, seems "extremely unlikely" to meet the goal of making the long-term future flourish, or (ii) that improving animal welfare with interventions like vegan leafleting, public demonstrations, antispeciesism essays, or lobbying for farm animals seems "extremely unlikely" to meet the goal of making the long-term future flourish.

(i) seems clearly true, but if you mean (ii), I think that's debatable and am unsure how you arrive at "extremely unlikely." Most non-human animal activists see great value in helping farm animals now as well as potential impact on the far future. I think strong evidence is required to reject their claims with such confidence.

For example, even Vegan Outreach, arguably the most short-run impact focused organization in the field, considers its impact in both ways: "In addition to influencing the diets of individuals, VO wants to change the ways that people view farm animals and teach people that farm animals are capable of suffering. VO aims to influence public opinion to affect long-term public policy." (http://files.givewell.org/files/conversations/Jack%20Norris%205-20-14%20(public).pdf)

Some organizations are much more explicit about their goals for promoting general antispeciesism sentiment and helping animals in the long-run, like Direct Action Everywhere (http://directactioneverywhere.com), and concern themselves very little (if at all) with individual dietary changes.

Now, this isn't clearly the best route to improving the long-term future, but I think it would take very strong evidence to say it is "extremely unlikely" and the point merits further consideration. I know my personal decisions could change with new information on these potential far-future effects.

Neither (i) (although I think that's true) nor (ii).

Closer would be a modification of (ii): (iii) That improving animal welfare with interventions like vegan leafleting, public demonstrations, antispeciesism essays, or lobbying for farm animals seems "extremely unlikely" to be the best way of meeting the goal of making the long-term future flourish.

Personally, I'd more or less endorse (iii), though I'd replace "extremely unlikely" with "unlikely". Of course, the distinction between "good" and "best" matters a lot here. I don't think these things are bad, but I do think that we can do better.

The actual claim has another modification:

(iv) That setting out to improve animal welfare (in the short or medium term) seems extremely unlikely to be the best sub-goal to aim for to meet the goal of making the long-term future flourish.

--

One thing you said was:

"Most non-human animal activists see great value in helping farm animals now as well as potential impact on the far future."

I would argue that they should think the value in helping farm animals now, while it may seem large, is very small compared to the value they can expect to create in the far future (whether through this route or another), and should therefore not be a large component of their decision making (unless they find it useful for another reason, such as to sustain motivation).

Thanks for the clarification. Claim (iii) with "unlikely" rather than "extremely unlikely" is a tenable view, and the specifics, of course, depend on other ways we can affect the far future. Do you think it's fair to put the modified (iii) claim in the same category as...

(v) That improving human welfare with interventions like antimalaria nets, deworming pills, or cash transfers seems "unlikely" to be the best way of meeting the goal of making the long-term future flourish.

I take it you do put these in the same category as you say you are (vi) not making the claim: That improving human welfare is a better instrumental sub-goal for improving the long-term future than improving animal welfare.

But you also claim (vii) "In contrast, we do not have similarly strong reasons to think “improve human welfare” is definitely not the best approach." There seems to clearly be a tension between (vi) and (vii).

Could you resolve it?

I would put (iii) in roughly the same category as (v), though I think it's more unlikely in case of (iii) than (v).

There isn't really a tension between (vi) and (vii), although I can see why you might think there was. It's a distinction about our subjective probability distributions for how good the different causes are.

The way I see it, we are currently probing a space and know relatively little about it. We want to know global features -- which things are better than others, and which are best. Often we are better at distinguishing local features -- how to compare between relatively similar matters.

I think the arguments we've discussed show reasonably conclusively that animal welfare isn't the best instrumental goal -- because we can see other things in the vicinity such as targeting value improvements where it's almost certain that at least one of them is better. This doesn't tell us how to compare the things in this vicinity with the things in the vicinity of human welfare improvements. The contrast with human welfare interventions was firstly supposed to show how indirect effects matter, and second supposed to make clear that there was something unusual going on in the vicinity of the animal welfare interventions. It wasn't meant to make a direct claim about how to compare the two.

"I think the arguments we've discussed show reasonably conclusively that animal welfare isn't the best instrumental goal -- because we can see other things in the vicinity such as targeting value improvements where it's almost certain that at least one of them is better."

It seems the "improving animal welfare" interventions could very well be the best ways to improve those values. I think that's a key point where we disagree. I'd be interested in hearing what you think are better alternatives at some point.

If there are clearly better options for proxies related to "improving animal welfare," but not clearly better options for proxies related to "improving human welfare," then "improving animal welfare" could still be the better option of the two. Analogy: if we have two car races with five separate cars each, the worst car in one race could still be better than all five in the other.

Did you read the first part of the post, on meanings of cause? I don't disagree that that could be the best intervention cluster. But I think if we're pursuing it, it should be for the right reasons -- this will help us to make the correct decisions when new evidence comes to light.

I entirely agree that improving animal welfare could still beat improving human welfare. That's exactly what I was saying in (vi).

I don't "see other things in the vicinity such as targeting value improvements where it's almost certain that at least one of them is better." That's where I was asking for better alternative subgoals to reaching value improvements.

I think we practically understand each other's points now though. Thanks for the discussion and clarification.

Thanks for those clarifications, Owen. I understand your position better now.

"In that post, I contrasted human welfare improvements, which have many significant indirect and long-run effects, with animal welfare improvements, which appear not to. That is not to say that interventions which improve animal welfare do not have these large long-run effects, but that the long-run effects of such interventions are enacted via shifts in the views of humans rather than directly via the welfare improvement."

I think I can offer even more insight on what you're saying and why people are confused.

What I believe you're saying, and correct me if I'm wrong, is: "work focused primarily on improving the lives of animals today (e.g., THL's talk of 'animals spared') is unlikely to be as high-impact as work focused primarily on improving the lives of humans today (though that also might not be the best cause overall) because humans today have various flow-through effects (e.g., economic development) and animals do not."

I think this is an important conclusion that appears accepted but not widely internalized by many nonhuman-animal-focused EAs.

However, what you actually say are things like "I contrasted human welfare improvements, which have many significant indirect and long-run effects, with nonhuman animal welfare improvements, which appear not to". The term "animal welfare improvements" is ambiguous, though, and does not necessarily refer solely to targeting nonhuman animals in the present.

For example, it's possible that by producing enough vegetarians (e.g., through leafleting) we get a large impact not from sparing nonhuman animals alive today, but produce enough of an anti-speciesist shift to prevent large quantities of nonhuman animal suffering in the far future (c.f., Brian Tomasik's thesis). I don't necessarily agree (or disagree) with this thesis, but you have not yet refuted it.

So when a nonhuman-animal-focused EA comes along and reads this, they conflate their focus on long-run animal goals with your critique of short-run animal goals and think you're making claims that you're not, and then argue against you for things you may not have said.

Given this, perhaps more clarity could be introduced by clarifying the short-run nature of what you're discussing, by explicitly using the term "short-run" and/or providing concrete examples?

Well, I'm updating towards thinking it's hard to share my conclusions, even when I think I'm being very specific!

The statement that you ascribe to me is one that I believe but am not certain of, and is not what I'm trying to claim.

Yes, there's a useful distinction between short/medium-run (including today, but also, say, the next few thousand years), and long-run. I don't think we have strong reasons for thinking that improving animal welfare in the very long run is necessarily a bad cause in the 'instrumental goal' sense, in that it's a mistake to optimise for it. I do think that optimising for long-term animal welfare is not the best place to stop in picking an instrumental goal, because it's quite hard to see how things affect it. And I do wish to claim that it's a mistake to optimise for improving animal welfare in the medium term (whether as a proxy for very long-run animal welfare or otherwise).

Ah, then I notice I am confused!

I worry then that the criticisms are right and you are trying to assert more than you have argued for.

Could you clarify the worry?

I realise that by splitting the conclusions in a different post from the argument (which was written first), I may not have filled in all the steps of the argument. I think it all goes through, but if there's a step you're concerned about I'd have a go at providing explicit reasoning for that step.

My worry is that I see you claimed with Jacy that "(iv) That setting out to improve animal welfare (in the short or medium term) seems extremely unlikely to be the best sub-goal to aim for to meet the goal of making the long-term future flourish."

I do find this claim to be plausible, but, to the best of my understanding, I see nowhere in "Human and animal interventions: the long-term view" that you actually defend that claim.

Hence the worry of you asserting more than you have demonstrated, and the source of confusion.

Thanks for clarifying. You're right that the argument at that step isn't spelled out explicitly. It's supposed to go:

1. Short/medium term animal welfare improvements have small long-run effects compared to other things we can effect in the short/medium term.

2. It would be very surprising if optimising for something which doesn't have long-run effects could be comparably good with optimising for the best identifiable thing which does have long-run effects. (Even if at certain times optimising for these two things would recommend the same interventions.)

Both those claims make sense, and I agree you have demonstrated them, but I could see them being easily misinterpreted based on what I said in the beginning.