Our latest guest essay on utilitarianism.net is 'Buddhism and Utilitarianism', by Calvin Baker.  Here I'll just reproduce the final section comparing Effective Altruism and Engaged Buddhism, which may be of particular interest to Forum readers:

Engaged Buddhism is a somewhat heterogeneous social movement grounded in the conviction that Buddhists ought to bring Buddhist practices and values to bear on contemporary issues. Engaged Buddhists tend to be united in their commitment to addressing the structural, systemic, and institutional causes of suffering in their political, economic, social, and environmental forms, in a way that manifests Buddhist values of compassion and nonviolence. More succinctly, “Engaged Buddhism is characterized by activism to effect social change.” Activities carried out under the banner of Engaged Buddhism have taken a variety of forms, e.g., environmental activism in Thailand, hospice and elder care, participation in the Extinction Rebellion movement, work to alleviate hunger and poverty in Sri Lanka, disaster relief, recycling, and attempts at peaceful conflict resolution in Myanmar.

Effective altruism (EA) is a movement whose goal is to do the greatest possible amount of good, in terms of well-being, given a fixed quantity of resources (money, research hours, political capital, etc.). Given its emphasis on impact maximization, EA is heavily invested in global priorities research: research into which cause areas, and which interventions within those areas, are most effective at promoting well-being. So far, EA has focused the majority of its efforts on global health and development, farm animal welfare, and risks of extinction and civilizational collapse, including risks from transformative artificial intelligence (AI), pandemics, nuclear weapons, great power conflict, and extreme climate change. The EA emphasis on prioritization research marks a significant contrast with Engaged Buddhism, which has not attempted to systematically answer the question of how to bring about the greatest amount of well-being, given a finite quantity of resources. So, whereas EA retains a more analytical, research-heavy orientation that attunes it to problems that are—thankfully—not currently manifest, like engineered pandemics and misaligned, superintelligent AI, Engaged Buddhism is geared more towards social activism and immediately salient social issues.

It is also productive to compare EA efforts to reduce the suffering of farmed animals with the implications of Buddhist philosophy for non-human animal welfare. Buddhists have traditionally regarded all sentient beings as moral patients, holding that, like us, non-human animals are subject to duḥkha. Buddhist ethics, EA, and utilitarianism are therefore similar in assigning greater importance to non-human animal welfare than most other moral approaches.

We can nuance this picture, though, by recalling that Buddhism distinguishes between pain (negative hedonic valence) and duḥkha and maintains that pain is only bad to the extent that we are averse to it. (From a Buddhist perspective, pain is unavoidable, but suffering on account of pain is not.) It is extremely plausible that pain is aversive to many non-human animal species—including all those currently subjected to the horrendous conditions on factory farms, such as cows, chickens, pigs, and fish. However, it is possible that some species—perhaps only a tiny minority—lack the cognitive architecture that is necessary to generate what is, for the Buddhist, the ethically relevant conjunction of pain and the higher-level attitude of aversion (dveṣa) to pain. It is therefore possible that Buddhists will end up with a slightly less expansive moral circle than many utilitarians and effective altruists, who tend to hold that pain simpliciter is bad and worth alleviating.
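
One rough way to formalize this contrast (the notation is purely illustrative, not drawn from the essay or from Buddhist sources): let $p \geq 0$ be the pain intensity of an experience and $a \in [0, 1]$ the degree of aversion (dveṣa) directed at it. Then, approximately,

$$D_{\text{utilitarian}} = p, \qquad D_{\text{Buddhist}} = p \cdot a.$$

A being that can feel pain but lacks the cognitive machinery for aversion has $a = 0$, so its pain counts as bad on the first measure but not on the second; this is the sense in which the Buddhist moral circle could turn out slightly narrower.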

Finally, we can inquire into Buddhist and utilitarian perspectives on the future of humanity. Although utilitarianism is compatible with multiple positions in population ethics, a prominent strand in recent utilitarian(-leaning) work embraces totalism, which says, very roughly, that the more happy people there are in a population, the better. By totalist lights, the best-case scenario for humanity is that it develops into an extremely long-lasting interstellar civilization composed of trillions of happy people (or more!). To me, it seems doubtful that Buddhism would go in for a picture like this. As we saw in section 2, Buddhist ethics does not start with a conception of what is good and then say that we should maximize the total quantity of that thing in the universe (as does utilitarianism). Instead, Buddhist ethics starts with the problem of duḥkha and then sets out paths to the solution to that problem. Even on the tentatively optimistic reading of Buddhism, on which attaining the cessation of duḥkha is positively valuable, it seems to me that Buddhists would find the claim that we should bring new beings into existence, so that they too can overcome suffering, to be an alien one. Rather, it seems that Buddhists thinking about the future would wish for us to lead whichever beings currently exist along the path to awakening, and perhaps for the bodhisattvas of the interstellar space age to try to save the aliens too (if doing so turns out to be tractable).
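
To make the totalist picture concrete (this is a standard hedonic gloss, not a formula from the essay): totalism ranks outcomes by the sum of individual lifetime well-being levels across the whole population $P$,

$$V(P) = \sum_{i \in P} w_i,$$

so creating any additional life with $w_i > 0$ makes the outcome better, no matter how many happy people already exist. That is what drives the verdict that a vast, long-lasting, happy interstellar civilization would be the best-case scenario.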

There is one fascinating way in which Buddhist and utilitarian thinking about the future seems to converge, however. Over the past several decades, applied ethicists—alongside the public—have become increasingly interested in human biomedical enhancement, which we can gloss as the project of biomedically intervening on the human organism for the purpose of increasing well-being. Human enhancements would thus include everything from currently existing, relatively mundane procedures such as laser eye surgery to radical possible interventions, such as genetic engineering aimed at dramatically increasing general mental ability (“IQ”). 

I believe that Buddhism and utilitarianism are both committed to in-principle support for human enhancement (if this can be achieved without harmful side-effects or unintended consequences). Utilitarianism says that we should promote the sum-total of well-being. So, if a certain enhancement would make humanity better off, utilitarianism would support it. For its part, unlike many other religious traditions (such as Christianity), Buddhism thoroughly rejects the notion that there is a sacrosanct human essence that we must preserve. Moreover, Buddhism is pragmatic about attaining the cessation of suffering. For instance, if it turned out that stimulating the brain in a certain way during meditation allowed meditators to more efficiently gain insight into the nonexistence of the self, it seems that Buddhists should heartily endorse this practice. So although Buddhists may disagree with totalist utilitarians that our primary objective should be to become a vast interstellar civilization, they may well agree that we should use the tools of modern technology to intervene in our biology and psychology—perhaps radically—to attain a greater level of well-being.

Comments
Would an agent who accepted strong pessimism [i.e. the view that there are no independent goods]—which I absolutely believe we should reject—have most reason to end their own life? Not necessarily. An altruistic agent with this evaluative outlook would have strong instrumental reason to remain alive, in order to alleviate the suffering of others.

I agree that life can be worth living for our positive roles in terms of reducing overall suffering or dukkha. More than that, such a position seems (to me at least) like a perfectly valid view of what constitutes evaluative meaning and positive value.

Indeed, if I knew for a fact that my life were overall (hopelessly) increasing suffering or dukkha, then this would seem to me like a strong reason not to live it, regardless of what I get to experience. So I'm curious how the author has come to believe that we should absolutely reject this view in favor of, presumably, offsetting views.

However, such an agent would be forced to accept the infamous null-bomb implication, which says that the best thing to do would be to permanently destroy all sentient life in the universe. I join almost every other philosopher in taking the fact that an ethical theory accepts the null-bomb implication as a decisive reason to reject the theory (as not merely misguided, but horrifically so).

To properly consider such a theoretical reductio, I trust that most philosophers would agree (on reflection) that we need to account for potential confounders such as status quo bias, omission bias, self-serving bias, and whether alternative views have any less horrific theoretical implications.

In particular, offsetting views theoretically imply things like the “Very Repugnant Conclusion”, “Creating Hell to Please the Blissful”, and “Intense Bliss with Hellish Cessation”, none of which seems to me any less horrific than does the non-creation of an imperfect world (cf. the consequentialist equivalence of cessation and non-creation).

Are these decisive reasons to reject offsetting views? A proponent of such views could still argue that such implications are only theoretical, that we shouldn't let them (mis)guide us in practice, and that the practical implications of impartial consequentialism are a separate question.

Yet the quoted passage neglects to mention that the very same response applies to minimalist consequentialism (whose proponents take pains to practically highlight the importance of cooperation, the avoidance of accidental harm, and the promotion of nonviolence).

I would just generally caution against performing such theoretical reductios so hastily. After all, a more bridge-building and illuminating approach is to consider the confounding factors and intuitions behind our differing perceptions on such questions, which I hope we can all do to better understand each other's views.

I'm concerned that this comment has received so many upvotes.  I just want to flag two major concerns I have with it:

(1)

if I knew for a fact that my life were overall (hopelessly) increasing suffering or dukkha, then this would seem to me like a strong reason not to live it, regardless of what I get to experience. So I'm curious how the author has come to believe that we should absolutely reject this view

This is extremely misleading.  You make it sound like the author favours causing a net increase in suffering for his own personal gain.  But of course that is not remotely fair or accurate. What he absolutely rejects is the idea that there are no positive goods (i.e., positive welfare has no moral value). The alternative "Positive Goods" view implies that it can be permissible to do things that include some additional suffering, so long as there are sufficient offsetting gains (possibly to those same individuals -- the author didn't take any stand on the further issue of interpersonal tradeoffs).

For example, suppose you had the option of bringing into existence a (causally isolated) blissful world, with the only exception being that one person will at one point stub their toes (a brief moment of suffering in their otherwise blissful life). Still, every single person on this world would be extremely happy to be alive (for objective list theorists: feel free to add in other positive goods, e.g. knowledge, accomplishment, friendship, etc.). The "No Positive Goods" view implies that it would be wrong to allow such a blissful world to exist. The author -- along with pretty much every expert in moral philosophy -- absolutely rejects this view.
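
To spell out the arithmetic on a simple hedonic reading (the symbols are mine, purely for illustration): suppose the blissful world contains total positive welfare $B \gg 0$ and the stubbed toe contributes suffering $s > 0$, with non-creation valued at $0$. The Positive Goods view assigns the world $B - s > 0$, which beats non-creation; the No Positive Goods view assigns it $-s < 0$, which loses to non-creation. That is the precise sense in which the latter view implies it would be wrong to allow such a world to exist.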

Again, I want to emphasize how misleading I find it to characterize this as endorsing "increasing suffering", since in ordinary use we only describe lives as "suffering" when they have overall negative welfare, and we typically use "increasing suffering" to mean increasing suffering on net, i.e. more than one increases positive well-being. To give someone a blissful life + a stubbed toe is not, as most people use the term, to "increase suffering".  I would urge you in future to be clearer about how you are using this phrase in an unusually literal way (and also please avoid making it sound like your interlocutors have selfish motivations [as in: "regardless of what I get to experience"] when there is no basis for such an incendiary charge).

(2) 

More substantively, I'd dispute the suggestion that there's any sort of parity between the Positive Goods and No Positive Goods views when it comes to "horrific implications".

I don't want to get into a back-and-forth about this, but I'll just report my editorial view that there is something distinctively problematic about propounding a view that implies that destroying the world is literally the best possible outcome.

Speaking as a moral philosopher, my professional opinion is that the No Positive Goods view lacks basic justification in a way that differs markedly from Positive Goods views (even ones, like Totalism, with some troubling implications). And speaking as an editor of a public-facing website, I'm much more concerned that some crazy person might act on the No Positive Goods view in horrific ways than I am that anyone would (or could) do likewise with Positive Goods views like Totalism (for the obvious reason that it's easier to commit mass-murder than to create the positive replacement conditions that would be required for Totalism to permit such an act).

So that's why I whole-heartedly endorse (as both theoretically and practically warranted) our guest author's strong rejection of No Positive Goods views.  I don't expect proponents of the view to agree, and I don't wish to be drawn into further discussion of the matter. This explanation is more for third parties who might otherwise be confused or misled by the above comment.

I am keen to read critiques of Teo's claims but I downvoted this comment  for a couple of reasons:

1) Aggressive language - I felt that Teo's comment was written in good faith and I was surprised that you dismissed it in such strong words: "extremely misleading", "not remotely fair or accurate",  "an incendiary charge".

2) Appeals to authority - It doesn't matter to me that "pretty much every expert in moral philosophy" disagrees. I want to know why they disagree and be referred to relevant authors or papers that make these arguments.

Thanks for explaining your perspective.  I hope most people will instead vote based on whether they think the comment will add to or detract from the understanding of most readers.  To briefly explain why I don't think the two factors you point to are indicative of a low-quality comment:

(1) A comment may be "written in good faith" and yet have the effect of being misleading, unfair, or otherwise harmful.  If a comment does have these effects, and especially if it is being highly upvoted (suggesting that many readers are being taken in by it), then I think it is important to be clear about this.  (Note that I made no claims about Teo's motivations, nor did I cast any personal attacks.  I simply criticized the content of what was written, in a clear and direct way.)

So I would instead ask readers to assess whether my objections were merited.  Is it true that Teo's comment "[made] it sound like the author favours causing a net increase in suffering for his own personal gain"?  If so, that would in fact be extremely misleading, not remotely fair or accurate, etc. So I think it's worth being clear on this.

Of course, if you think I'm being idiosyncratic and no casual reader would come away with the impression I'm worried about here, then by all means downvote my comment for simple inaccuracy.

(2) Certainly, you don't have to defer to the opinion of moral philosophers if you don't trust that we're well-placed to judge the matter in question.  Still, the info may be helpful for many, so (imo) sharing info about an expert consensus should not be viewed negatively.

I kindly ask third parties to be mindful of the following points concerning the above reply.

(1)

  • It calls a part of my comment extremely misleading based on an incomplete quote whose omitted context provides a better sense of what I am talking about. Specifically, the omitted beginning clarifies that I am discussing “strong pessimism [i.e. the view that there are no independent goods]”, and that I personally find it perfectly valid to equate my positive value with whether my life plays an overall positive role under that view. And the omitted ending clarifies that I am therefore curious about the author’s reasons to “absolutely reject” impartial, minimalist consequentialism “in favor of, presumably, offsetting views” (→ such as “classical utilitarianism as well as ‘weak negative’ or ‘negative-leaning’ views”), all of which are impartial views. (My use of “this view” was ambiguous, but the reading it received felt uncharitable given the above.)
  • At any rate, I have no reason to question the impartiality of the author’s preferred views. I hope it is clear that what my comment is questioning is the relative plausibility of the assumptions and implications of impartial offsetting views (over those of impartial minimalist views).
  • It claims that my use of the phrase “increasing suffering” is misleading on the grounds that my use differs from the (allegedly common) assumption that to bring about an outcome with “outweighed” suffering does not entail increasing suffering. Yet I would think that the literal use (i.e. counting suffering only) makes more common sense than the alternative use that is based on the offsetting assumption (see the side-by-side sketch after this list). After all, such an assumption would imply that (e.g.) the offsetting choice of “Intense Bliss with Hellish Cessation” (see the diagram) does not entail increasing suffering, even though it brings about arbitrarily large amounts of torture-level suffering (as do the other two supposedly outweighed hells) in place of an alternative world that would contain only untroubled experiences.
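
To state the two readings side by side (the symbols are mine): where $S(w)$ and $H(w)$ are the total amounts of suffering and happiness in world $w$, the literal reading says that a choice increases suffering iff $S(w_{\text{chosen}}) > S(w_{\text{alternative}})$, whereas the offsetting reading instead compares the net values $S(w) - H(w)$. In “Intense Bliss with Hellish Cessation”, the chosen world contains vastly more suffering on the literal count yet has a lower net value on the offsetting count; this is exactly where the two uses of the phrase come apart.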

(1) + (2)

  • It argues (from authority) that minimalist views lack basic justification, without engaging with any of the direct arguments in their favor (cf. here). In particular, it does not address the intuition that “torture-level suffering cannot be counterbalanced”. Rather, it seems to imply that the outweighability of torture follows from the supposedly obvious outweighability of mild suffering (such as a stubbed toe).
  • Indeed, proponents of offsetting views do not seem to find, by direct introspection, that a moment of happiness outweighs a certain fraction of a torture-moment; instead, they appear to infer this from other assumptions (such as from the additive aggregation of experiences that are represented with positive and negative real numbers). To quote another commenter:
    • “A lot of disagreements about axiology and population ethics have this same dynamic. You infer from your intuition that [mild suffering] to please the blissful is acceptable that we can scale this up to torture to (more intensely) please many more blissful people. I infer from the intuitive absurdity of the latter that maybe we shouldn't think it's so obvious that [mild suffering] to please the blissful is good.”
  • It also claims that the theoretical implications of minimalist consequentialism are absolutely rejected by “pretty much every expert in moral philosophy”. Yet it does not account for the extent to which these rejections may be highly confounded by 1) our practical intuitions, 2) status quo bias and/or omission bias, and 3) the fact that we ourselves are currently living in this world whose hypothetically instant and painless cessation (i.e. non-creation, cf. the diagram) we are supposed to be impartially evaluating in this thought experiment.
    • It also hints that the torture-for-greater-joy implications of offsetting views fare much better in terms of acceptance, yet does not provide support for whether this is the case.
  • To be clear, it does present a thought experiment about the creation of a causally isolated world, which is a step in the right direction to account for these confounders. Yet even there, the discussion is eventually framed in terms of whether we would “allow such a blissful world to exist”, which potentially brings in the confounders by making it sound like the option of non-creation entails interfering with a status quo where some blissful beings already exist, whereas the minimalist intuition is precisely that the non-creation of (causally isolated) beings with even perfectly fulfilled needs is, other things being equal, morally unproblematic (“no need, no problem”).
  • The latter view (i.e. “the Asymmetry”) has many defenders (some of whom are cited here). And if we acknowledge the consequentialist equivalence of cessation and non-creation, then this view also implies a theoretical endorsement of the (un-confounded) hypothetical cessation implication in the case of purely experientialist and consequentialist minimalism.
    • (To the extent that one feels like denying the equivalence, perhaps one’s intuitions are not captured by purely experientialist consequentialism.)

(2)

  • By italicizing “destroying the world”, the reply again forcefully brings in the aforementioned confounders, and omits to mention the theoretical equivalence with non-creation. Theoretically, offsetting versions of utilitarianism likewise imply that an ideal outcome is to unleash a utilitronium shockwave (“converting all matter and energy into pure utilitronium”), which is presumably no less “destructive” according to most people’s practical intuitions.
    • Some minimalist/”strong pessimist” views would also not consider cessation an ideal option, though these were beyond the scope of my post on the topic (cf. footnote 1).
  • It shares a worry (which I also have) that a naive reading of these views might lead someone to act in horrific ways. Yet it provides little justification for why this would make the rejection of minimalist views more “practically warranted” relative to the rejection of offsetting views. After all, the latter could also lead to violence based on naive ideas about how happiness may be increased, such as by targeting those who are perceived to have bad values (e.g. people with certain political or religious views). (Relatedly, Karl Popper argued that offsetting views were likely to lead to atrocities if they were accepted at an institutional level.) In any case, it seems to me that all consequentialist views imply a large gap between theory and practice, and my response to potentially naive minimalists (and other consequentialists) would be to always mind the gap.

Kind of off topic, but I want to throw it out there anyway. If Engaged Buddhism is a social movement that

  1. has an opinion on what the best way to do good is
  2. is actually doing something in practice,

then we (EA and EB) should probably be talking. There might be valuable things we can learn from each other.

I don't think it actually has (1).

Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don't explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).

As for (2), I'm not sure how much there is to learn. There are likely some things, but I also worry that paying too much attention to Engaged Buddhism might be a distraction because it suffers from common failure modes that EA seeks to avoid. For example, people I know who are part of Engaged Buddhism would rather volunteer directly, even if it's ineffective, than earn to give, because they want to be directly engaged. That's fine, but from what I've seen the whole movement is oriented more around satisfying a desire to help than around actually doing the most good.

Interesting essay, thanks for sharing.  Buddhist practice is the central focus of my life & is how I became interested in EA.  I see the two as fairly compatible.  I'm assuming the essay's focus is on Buddhists who have a primarily physicalist ontology (i.e. that subjective experience is an epiphenomenon of brain chemistry).  If that is the case, then I think engaged Buddhism, when taken to the highest degree of intensity, converges fairly well with EA.

Things become arguably more interesting if we adopt the traditional Buddhist ontology which includes multiple realms of existence, karma & rebirth.  For instance, the population ethics does change in this case.  In the traditional Buddhist worldview, there are a finite set of sentient beings being reborn in the universe.  The total population of sentient beings can decrease (because sentient beings reach liberation & stop being reborn) but not increase (since Buddhist logic negates a first cause).  

The main thrust of population ethics in this case is to increase the proportion of sentient beings reborn into "fortunate human births" (a traditional Buddhist phrase) which thus allows them the greatest opportunity to generate positive momentum (i.e. by being effective altruists) to eventually reach liberation.  Ordinary sentient beings are not really able to effect this; at most they can encourage other humans to maximize their altruistic efforts & thus build that positive momentum.  To me, this is how traditional Buddhadharma could align with EA.

Where they don't align is around doing more than just practicing altruism.  The traditional Buddhist worldview suggests that some of the most possible good someone can do is to strive to become a Buddha through training in meditative concentration & insight into the nature of reality.  Through this training, it is possible to progress through degrees of liberation which put one in a position to do the most possible good for others from a multi-lifetime perspective.  This would include occupying altruistic worldly functions such as those encouraged by EA, but also encouraging others to spend a large portion of their lives meditating.  In other words, spending a large portion of life meditating is highly recommended by traditional Buddhism but only makes sense from a utilitarian perspective if one takes a multi-lifetime view.

I think there's some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there's a case to be made that we could look at becoming a monk, for example, as exercising comparative advantage as part of an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly focused on what they could be doing in the world otherwise).

I use this sort of reasoning myself. Why not become a monk? Because it seems like I can have a larger positive impact on the world as a lay practitioner. Why would I become a monk? If the calculus changed and it was my best course of action to positively impact the world.

A few years ago I asked a Zen nun what exactly the use is of being a nun, living quite secluded and without much impact on the world. Her response was (roughly speaking) that it is good if some people practice and study intensely, because that keeps the quality and depth of the tradition alive and develops it. But not everyone should take that path. It seems like she was expressing the same idea as you are! I think she now leads one of the monastic centers in Germany.

Really appreciate that notion.  It is something I've thought a lot about myself.  I also tend to find that my personal spiritual practice benefits from a mix of many short meditation retreats, daily formal meditation sessions & ongoing altruistic efforts in daily life.  I don't feel that I would make a good teacher of meditation if I did that full time or that my practice would reach greater depth faster if I quit my job & practiced full time.  

One point I would like to add: whether you take the lay path of incorporating some Buddhist practices into your ordinary daily life, or the monastic path of dedicating yourself full time to Buddhist practice, it can help build the emotional resources necessary to do good for the world and live a life in service of others.  

Considering how difficult it can be to do good - let alone trying to do the most good - and to make sacrifices on behalf of others, and how common burnout and other challenges are, such tools for building emotional resilience, clarity, and compassion can be extremely helpful. 

[Apologies for the accidental multi-post.  Should be fixed now!]

Even this comment is being downvoted?  I'm so confused by the karma behaviour in this thread.
