
"It was never contended by a sound utilitarian that the lover should kiss his mistress with an eye to the common weal"

– John Austin
 

Tl;dr: 

  • There are at least three important but different things which are consistently being referred to as ‘consequentialism’.
  • One use refers to the ethical theory, another to the decision theory, and the third to the general mindset of Explicit Consequence Analysis.
  • Mixing them up is probably causing unnecessary confusion and grief.

 

Three Types of Consequentialism

Ethical Consequentialism

When the likes of Toby Ord and Will MacAskill talk about Consequentialism, they’re talking about a type of ethical theory under which, roughly speaking, you determine how good something is based on its consequences. (Let’s call this Ethical Consequentialism.) Utilitarianism is an example of this kind of consequentialism. Consequentialist theories are also often agent-neutral: you’re trying to get the ‘best’ consequences not for you, but for everyone.

Notably, this is a claim about what is actually good, not about how we decide what is good. It might well be that the way of making choices which leads to the best consequences does not involve explicitly evaluating consequences at all. Something like 'take the most virtuous action' might be the most robust way to get good outcomes in day-to-day life, where explicitly calculating the consequences would take too long. This is the topic of Toby Ord’s Thesis, which I highly recommend reading if you consider yourself to be an Ethical Consequentialist.

Decision Consequentialism

Meanwhile, when Eliezer Yudkowsky talks about Consequentialism, he means something totally different. He is explicitly talking about the decision theory bit, where an agent weighs up a set of possible actions and picks the one with the best expected utility according to some utility function. (Let’s call this Decision Consequentialism.) But there are no claims about what the underlying utility function is: it might well be ‘number of paperclips in the world’. ‘Consequentialism’ in this sense is a very specific technical term used in alignment theory: it’s a feature of how we’d expect sufficiently advanced artificial intelligences to behave, and it underpins the concept of convergent instrumental goals and some kinds of coherence and selection theorems.
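
As a very rough sketch of this decision rule (the actions, probabilities, and utility function below are illustrative placeholders, not anything from the alignment literature), a decision-consequentialist agent just scores each available action by its expected utility and takes the best-scoring one:

```python
# A minimal sketch of a decision-consequentialist agent: score each action
# by expected utility under some utility function and pick the best one.
# All action names, probabilities, and numbers here are illustrative.

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def choose(actions, utility):
    """actions: dict mapping an action name to its lottery over outcomes."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# The utility function is left entirely open: it could count lives saved
# or paperclips; the decision rule is the same either way.
count_paperclips = lambda outcome: outcome["paperclips"]

actions = {
    "build_factory": [(0.9, {"paperclips": 1000}), (0.1, {"paperclips": 0})],
    "do_nothing":    [(1.0, {"paperclips": 10})],
}

print(choose(actions, count_paperclips))  # -> build_factory
```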

Explicit Consequence Analysis

To make matters worse, there’s a third kind of thing which EAs often seem to mean by ‘consequentialism’, which is the general practice of making choices informed by explicit utility and probability calculations. I’ve never seen it formally defined in this way but it’s the concept being pointed at by posts like “On Caring” and “Scope Insensitivity”. To quote the latter:

“The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets worked up about that poor struggling oil-soaked bird.”

This is the lens which underpins impact analysis, and it’s a fairly distinctive feature of EA thinking. It seems to follow intuitively from Ethical Consequentialism that this would be a good idea, especially for things like career decisions and charitable donations, and it bears some resemblance to Decision Consequentialism, although it is a much less technically precise notion. Crucially, though, it is not the same as either of those things; it is its own thing, which I will tentatively call Explicit Consequence Analysis. (Please, please let me know if you can think of a better name.)
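
To make the lens concrete, here is the sort of back-of-the-envelope calculation Explicit Consequence Analysis points at. The charities, costs, probabilities, and values below are entirely invented for illustration:

```python
# An illustrative back-of-the-envelope comparison of two hypothetical
# donation options. Every number here is made up for the example.

options = {
    "Charity A": {"cost_per_attempt": 100, "p_success": 0.90, "value_per_success": 1.0},
    "Charity B": {"cost_per_attempt": 100, "p_success": 0.02, "value_per_success": 80.0},
}

budget = 1_000  # hypothetical donation

for name, o in options.items():
    attempts = budget / o["cost_per_attempt"]
    expected_value = attempts * o["p_success"] * o["value_per_success"]
    print(f"{name}: expected value {expected_value:.1f}")

# Charity A: expected value 9.0
# Charity B: expected value 16.0
# The unexciting multiplication favours the low-probability, high-value
# option, which is exactly what the quoted passage is gesturing at.
```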

Because it is rarely explicitly flagged as its own thing, I think attempts to talk about it often accidentally invoke properties from the other two kinds of Consequentialism. People get the sense that you have to do this all the time, that it's strictly optimal, that not doing this is a moral failure. This is incorrect, and I suspect that these beliefs are harmful.

So there we have it: Ethical Consequentialism, Decision Consequentialism, and Explicit Consequence Analysis.

Downstream Problems

The basic issue, I believe, is that people end up thinking they need to do things that they don’t actually need to do. I hope to expand on all of these at length in future, but for now I’ll try to keep it brief. 

You don’t need to optimise everything all the time

No serious consequentialist philosopher has ever recommended that you try to optimise everything all the time. Many have recommended against it. Indeed, there are warnings on this very forum. But I think they might not be loud enough, because sometimes people are so eager to do good that they don’t actually take the time to read through all the ambient philosophy. And to be clear, I think this is reasonable: I think the onus is on people introducing EA to be clear about this.

I’ve now seen a handful of cases where people get into EA and initially think they need to optimise everything, only to discover that this is a bad idea. I believe this is downstream of the Ethical Consequentialism / Explicit Consequence Analysis confusion. Choosing not to optimise all the time doesn’t mean you’re an imperfect consequentialist; it is almost certainly the right way to do consequentialism. As noted above, I'm fairly sure no utilitarian philosopher has ever advocated constant explicit optimisation, and many have explicitly advocated against it.

Being unproductive sometimes, taking time for yourself, caring about things that are not high impact: these are all fine. They’re not you failing to be a good utilitarian. Becoming someone who always calculates utility perfectly and acts on it with no sentiment is a fabricated option. Given that we are in fact squishy humans, the path to maximising impact involves recognising that fact and making room for it. [1]

EA Burnout

Relatedly: it seems like EA has a burnout problem. It also happens to be, as far as I can tell, the first large-scale movement with such a high concentration of utilitarians and people explicitly trying to optimise things. I do not think this is a coincidence, although I’m not sure what the causal chain is. I hope to write on this more in future.

Utility calculations shouldn’t totally override commonsense morality

If you do your utility calculations and impact analyses and they suggest that the correct path to highest impact involves deceiving people, manipulating them, or coercing them, you should seriously entertain the possibility that you messed up the calculations. I see a lot of problems in community building as being part of an attitude that treats people as means to an end and not ends in themselves.[2]

Likewise if the calculations suggest that you should do something that feels really bad, and equally if they suggest that you should do exactly the thing you already wanted to do. I think it’s really valuable to have Explicit Consequence Analysis as another lens for evaluating decisions, but it really shouldn’t be the only one.

I feel like people have suitably internalised that it’s very easy to misrepresent things with statistics, but not that it is even easier to do so with hypothetical future calculations.  

Consequentialism is not obviously best

There is a sense in which consequentialism really is obviously best. You can in fact prove that agents whose preferences are not transitive can be money-pumped. So it’s true: if you’re not a Decision Consequentialist, then there’s some sequence of trades you’d happily accept which would leave you strictly worse off. But firstly, humans don’t actually satisfy these coherence conditions in practice anyway, and secondly, this in no way implies that you should be an Ethical Consequentialist. If the distinction has never been made clear to you, though, you might quite reasonably get confused.
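
As a toy illustration of the money-pump point (the items, preference cycle, and prices here are made up), an agent with cyclic preferences will pay a small fee for every trade it prefers, walk once around the cycle, and end up holding its original item while strictly poorer:

```python
# A toy money pump: an agent with cyclic preferences (A over B, B over C,
# C over A) accepts a small fee for each trade it strictly prefers, walks
# once around the cycle, and ends up with its original item and less money.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def accepts_trade(holding, offered):
    return (offered, holding) in prefers  # trade only if the offer is strictly preferred

holding, cents = "A", 0
for offered in ["C", "B", "A"]:              # walk once around the preference cycle
    if accepts_trade(holding, offered):
        holding, cents = offered, cents - 1  # pays one cent per accepted trade

print(holding, cents)  # back to "A", three cents poorer
```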

Recommendations

In the spirit of brevity I will end here and save the elaborations. Concrete recommendations:

For the Ethical Consequentialists

  • If you feel like you’re not doing enough and this makes you a bad person, cut yourself some slack.
    • Beating yourself up about it is unlikely to help.
    • This isn’t abandoning philosophy and truth, it is the sound and reasonable thing to do, well-supported in the literature.
  • Read Everyday Longtermism
  • When you’re thinking of optimising for something (especially in community building and social interactions), really ask yourself ‘might this actually make things worse?’, or better yet, ‘suppose this ends up making things worse; what happened?’

For the Community Builders

  • Make sure that when people are introduced to Consequentialism and Utilitarianism, they are also introduced to the decision theory / ethical theory distinction, and be clear that Explicit Consequence Analysis is a tool rather than a moral obligation.
  • When talking about Consequentialism, try to be clear about what kind you mean.
    • I think by default Consequentialism should refer to Ethical Consequentialism, and the thing to be careful of is when you’re talking about Decision Consequentialism or Explicit Consequence Analysis but it might be misread.
  • Always remember that people are people, and the fact that you can use them as terms in impact analyses does not change this.

For the philosophers

  • Consider reading Toby Ord’s Thesis
    • Consider summarising it for the forum?
  • If you’re feeling plucky, I think ‘longtermist decision theory’ is an underexplored and valuable avenue of research.

For the naming committee

  • Please, I beg of you, a better name than Explicit Consequence Analysis

Appendix

Part of what prompted me to write this was a friend saying “everybody is ultimately consequentialist whether they realize it or not”. He has offered the following elaboration:

When you dig into the justifications for following a moral rule or virtue even if it leads to what looks like a bad outcome, people will often say it’s still worth doing because following the rule or virtue is expected to lead to overall better outcomes in the long run… which is just another way of saying they care about the expected consequences, just on a longer timeline. But real consequentialists use the same argument against doing things that appear to have a good immediate outcome if they expect it to have a bad one in the long run or if universalized. So error correction seems like the real thing that distinguishes whether someone is a consequentialist or not, and it is a rare deontologist or virtue ethicist who won’t take any expectation or models of outcomes into consideration at all when deciding what moral rules or virtues to follow...


 

  1. ^

    I will freely admit that even if I thought running on self-hate and shame was the path to highest impact I would generally discourage it, but as it happens I am also very confident that in 95% of cases it is not the path to highest impact.

  2. ^

    To flesh this out a little: one of the things I find most concerning in current community building discourse is recommendations for how to imitate and simulate relationships. Leaving aside my personal distaste for this, I think people are generally very bad at doing this and that the more robust approach is to actually be friends with people.

Comments

Fwiw, I think the usage from moral philosophy is by far the most common outside the EA community, and probably also inside the community. So if someone uses the word "consequentialism", I would normally assume (often unthinkingly) that they're using it in that sense. I think that means that those who use it in any other sense should, in many contexts, be particularly careful to make clear that they're not using the term in that way.

There is a standard distinction in ethics between act consequentialism as a criterion of rightness and as a decision procedure (see Amanda Askell's post). Potentially that maps on to "ethical consequentialism" and "explicit consequence analysis", depending on what you have in mind.

I certainly agree that outside EA, consequentialism just means the moral philosophy. But inside I feel like I keep seeing people use it to mean this process of decision-making, enough that I want to plant this flag.

I agree that the criterion of rightness / decision procedure distinction roughly maps to what I'm pointing at, but I think it's important to note that Act Consequentialism doesn't actually give a full decision procedure. It doesn't come with free answers to things like 'how long should you spend on making a decision' or 'what kinds of decisions should you be doing this for', nor answers to questions like 'how many layers of meta should you go up'. And I am concerned that in the absence of clear answers to these questions, people will often naively opt for bad answers.

Relatedly: it seems like EA has a burnout problem. It also happens to be, as far as I can tell, the first large-scale movement with such a high concentration of utilitarians and people explicitly trying to optimise things. I do not think this is a coincidence, although I’m not sure what the causal chain is. I hope to write on this more in future.

 

I'm very sceptical about this point. It's probably true that EA has a 'burnout problem', but is burnout in EA professionals really higher than for other ambitious professionals that are excited about their job? My guess would be 'no', but would be good to see data on this. 

My guess would be yes. I too would really like to see data on this, although I don't know how I'd even start on getting it.

I imagine it would also be fairly worthwhile just to quantify how much is being lost by people burning out and how hard it is to intervene - maybe we could do better, and maybe it would be worth it.

My impression is that EAs also often talk about ethical consequentialism when they mean something somewhat different. Ethical consequentialism is traditionally a theory about what distinguishes the right ways to act from the wrong ways to act. In certain circumstances, it suggests that lying, cheating, rape, torture, and murder can be not only permissible, but downright obligatory. A lot of people find these implications implausible.

Ethical consequentialists often think what they do because they really care about value in aggregate. They don't just want to be happy and well off themselves, or have a happy and well off family. They want everyone to be happy and well off. They want value to be maximized, not distributed in their favor.

A moral theory that gets everyone to act in ways that maximize value will make the world a better place. However, it is consistent to think that consequentialism is wrong about moral action and to nonetheless care primarily about value in aggregate. I get the impression that EAs are more attached to the latter than the former. We generally care that things be as good as they can be. We have less of a stake in whether torture is a-ok if the expected utility is positive. The EA attitude is more of a 'hey, let's do some good!' and less of a 'you're not allowed to fail to maximize value!'. This seems like an important distinction.

Saying that consequentialist theories are “often agent neutral” may only add confusion, as it’s not a part of the definition and indeed “consequentialism can be agent non-neutral” is part of what separates it from utilitarianism.

My understanding is that some philosophers do actually think 'consequentialism' should only refer to agent-neutral theories. I agree it's confusing - I couldn't think of a better way to phrase it.

I got into a very stupid argument with my tutor once about whether impartiality was part of the definition of consequentialism so I can attest that some people are wrong about this!

Aha.  Well, hopefully we can agree that those philosophers are adding confusion. :)

Another use of "consequentialism" in decision theory is in dynamic choice settings (i.e. where an agent makes several choices over time, and future choices and payoffs typically depend on past choices). Consequentialist decision rules depend only on the future choices and payoffs and decision rules that violate consequentialism in this sense sometimes depend on past choices.

An example: suppose an agent is deciding whether to take a pleasurable but addictive drug. If the agent takes the drug, they then decide whether to stop taking it or to continue taking it. Suppose the agent initially judges taking the drug once to have the highest payoff, continuing to take the drug to have the lowest payoff and never taking it to be in between. Suppose further though, that if the agent takes the drug, they will immediately become addicted and will then prefer to carry on taking it to stopping. One decision rule in the dynamic choice literature is called "resolute choice" and requires the agent to take the drug once and then stop, because this brings about the highest payoff, as judged initially. This is a non-consequentialist decision rule because at the second choice point (carry on taking the drug or stop), the agent follows their previously made plan and stops, even though it goes against their current preference to carry on.
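
To make that contrast concrete, here is a minimal sketch of the drug example; the payoff numbers and the post-addiction preference flip are illustrative stand-ins for the structure described above:

```python
# A small sketch of the drug example above. The payoffs and the preference
# flip after taking the drug are illustrative, not from any real model.

initial_payoffs = {"never_take": 1, "take_then_stop": 2, "take_then_continue": 0}

# The plan chosen up front: take-then-stop has the highest initial payoff.
plan_for_second_choice = "stop"

def second_choice(rule):
    # After taking the drug the agent's current preference has flipped:
    # it now prefers continuing to stopping.
    current_preference = "continue"
    if rule == "resolute":
        return plan_for_second_choice   # stick with the initially chosen plan
    else:
        # 'Consequentialist' in the dynamic-choice sense: act only on
        # future payoffs as currently judged, ignoring the earlier plan.
        return current_preference

print(second_choice("resolute"))          # -> stop
print(second_choice("consequentialist"))  # -> continue
```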

I don't know how, if at all, this relates to what Yudkowsky means by "consequentialism", but this seems sufficiently different from what you described as "decision consequentialism" that I thought it was worth adding, in case it's a further source of confusion.
