A useful distinction for anyone thinking about act consequentialism in general, and act utilitarianism in particular, is that between a criterion of rightness and a decision procedure (a distinction Toby Ord has discussed in much more detail). A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re evaluating). A decision procedure is something that we use when we’re thinking about what to do. As many utilitarians have pointed out, the act utilitarian claim that you should ‘act such that you maximize the aggregate wellbeing’ is best thought of as a criterion of rightness, not as a decision procedure. In fact, trying to use this criterion as a decision procedure will often fail to maximize the aggregate wellbeing. In such cases, utilitarianism will actually forbid agents to use the utilitarian criterion when they make decisions.

There’s nothing inconsistent about saying that your criterion of rightness comes apart from the decision procedure it recommends. We can imagine views where the two come apart even more strongly than they do under utilitarianism. Imagine a moral theory that had the following as a criterion of rightness:

No Thoughts Too Many (NTTM): An action can only be right if it is not the result of a deliberative process about what is right to do.

If we assume that trying to use NTTM as a decision procedure would itself constitute a deliberative process, then the NTTM criterion of rightness is inconsistent with using NTTM as a decision procedure. 

We can think of more mundane examples of criteria of rightness that won’t always recommend themselves as decision procedures. Suppose that a criterion of rightness for meditating is to clear your mind of thoughts, but that people who try to clear their mind of thoughts are worse at clearing their mind of thoughts than people who use other techniques, such as focussing on their breath. ‘Clear your mind of thoughts’ might be a good criterion of rightness for meditation, but a bad decision procedure to use by the lights of that criterion. Or suppose that you are holding a gun with a misaligned sight. The criterion of rightness might be ‘hit the target’ while it’s better to use ‘hit 2ft to the right of the target’ as your decision procedure. (An interesting class of examples of this sort involve what Jon Elster calls ‘states that are essentially by-products’, such as being spontaneous. Thanks to Pablo Stafforini for pointing this out.)

So when is it good to accept or use ‘act such that you maximize the aggregate wellbeing’ as a decision procedure? The simple answer, if we assume act utilitarianism, is: whenever doing so will maximize the aggregate wellbeing! Of course, it can be hard for us to know when accepting or using the utilitarian criterion of rightness as a maxim would be the best thing for us to do. Perhaps most of us should never use it as a decision procedure.

What are the reasons for thinking that ‘act such that you maximize the aggregate wellbeing’ is a bad decision procedure? First, this is extremely cumbersome as a decision procedure: it is simply implausible that it would be best for us to stop before taking every step forward or before buying every can of soup to consider how these things affect the aggregate wellbeing. For everyday tasks, it makes sense to have simpler maxims to hand. Second, it’s easy to apply it naively. When we try to think about all of the outcomes of our actions, we often fail to take into account those that aren’t obvious. An example of this fault can be seen in the classic naive utilitarian who promises to watch your purse and then, as soon as a needy stranger comes along, gives your purse to them. The immediate impact of their action might be to transfer money from you to the needy stranger, but doing so undermines the trust that people can have in them in the future. Moreover, in order to do a lot of good in the world, people need to co-operate, and it’s virtually impossible for people to co-operate without trust and honesty norms being in place. Third, explicitly using ‘act such that you maximize the aggregate wellbeing’ as a maxim can alienate us from those we care about. Very few people would be happy to think that, while talking to them, their friends are calculating how much moral good the conversation is generating.

There are definite costs to using the utilitarian criterion of rightness as a decision procedure. But sometimes these costs are less pressing, and the benefits of using the criterion in deliberation might outweigh them. I think this might be true when we’re thinking about ‘large-scale’ decisions: where to allocate large amounts of money (e.g. annual charity donations, or government funding), what policies governments or companies should have, what to do with our lives (e.g. what career, what research, or what lifestyle to pursue), or how we should advise others on these matters. These decisions share several features. First, using ‘act such that you maximize the aggregate wellbeing’ in these cases wouldn’t conflict with prosocial norms like ‘don’t lie’, and so it doesn’t undermine trust or co-operation. Second, we can make each of these decisions slowly and with care, which is important since it is so hard to apply the criterion well. Third, they are decisions with significant impact, which means that it is more important to try to take into account how they affect the world: the cost of a more cumbersome decision procedure can be justified when the stakes are sufficiently high. Finally, using the criterion in these impersonal circumstances is not likely to alienate people with whom we have close relationships. It is still questionable whether most of us should try to apply the criterion of rightness as a decision procedure even in these cases, but they strike me as candidates for decisions where it will sometimes be right to do so. The rest of the time, the criterion of rightness can sit like an evaluative backdrop on life: something one is aware of, but not too eager to call upon in deliberation.

It seems plausible to me that someone behaving well by the lights of the utilitarian criterion of rightness will not employ it as a maxim regularly (note that we don’t need to appeal to rule utilitarianism or moral uncertainty to make this point). And many of the best people, by the lights of the act utilitarian criterion of rightness, will never have even heard of utilitarianism. For the most part, I suspect that the ideal act utilitarian would be a good friend, would try to keep promises, would aim to be honest and kind, and, when it comes to major decisions, would stop to think about how these decisions will affect the wellbeing of everyone. This kind of person seems much better for the world than the naive act utilitarian that most people are understandably put off by.

Comments
[anonymous]

I agree with the general point made here and probably many of the examples. However, I think we need to be aware of a tendency I have noticed in the utilitarian decision procedure literature to over-emphasise the extent of the overlap between utilitarianism and common sense morality. The obvious practical motivation for this is to make utilitarianism seem more intuitively palatable, both to others and to ourselves. But this incentive exists whether or not the claimed overlap is justified by an ultimate utilitarian rationale. Which quick and dirty heuristics or other decision procedures we ought to use is a really difficult empirical question, and we shouldn't settle too quickly for whatever aligns with common sense morality, given the obvious bias in play.

For example, I've seen it claimed a number of times (including, I think, by Parfit) that utilitarianism as applied to decision procedures justifies significant parental partiality, without much in the way of argument. I find this definitely non-obvious, and can see persuasive arguments in the other direction, even at the level of decision procedures.

Something to be aware of, but great post!

I’ve been trying to explain this to people for a while, usually appealing to some examples from game theory, but this is a really clear and useful way of framing it. It should’ve occurred to me when reading Bostrom’s Infinite Ethics.

This is a bit of a tangent, but one problem I’ve been encountering when applying the types of decision procedures that you suggest at the end is that in certain systems (I’ve heard them called anti-inductive systems), the procedures need to be parameterized on the state of the system. That parameterization seems to be cognitively hard for some people to come up with, or even to follow – a common pitfall. So my hypothesis is that making this distinction clear should be a crucial part of our messaging in this context, and that we may, as a movement, first need to understand it better ourselves.

A person might, for example, act according to the categorical imperative while hoping to maximize utility. Intuitively, they might look for proxies such as “Donate to the charity with the lowest overhead” or, better, “Donate to the charity that affects the greatest number of sentient beings.” These quantities don’t decrease as a result of the person’s donating to the charity. But such heuristics will fall short unless something like “Donate to the charity that is also most neglected” is added in – something that is decreased by the donation itself. All the newspaper critiques of EA attest to how unintuitive this parameterization is to people: “Hurr durr, EAs think buying bednets is best, but that’s silly; if everyone did that, we’d die of treatable cancer, and what would we do with all those bednets anyway?”

When unparameterized messages are published on purpose, e.g., to save readers the time of checking the state of the system themselves, then they need to be updated regularly, the way GiveWell’s and ACE’s recommendations are. But even GiveWell and ACE don’t really promote the unparameterized versions much; rather they promote the different, parameterized message “Donate to whatever we recommend.” They’re sort of like a cache for the parameterized function, and they invalidate the cache once per year.
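To make the cache analogy concrete, here is a minimal Python sketch. The scoring rule and field names (impact_per_dollar, neglectedness) are invented for illustration; this is not GiveWell’s or ACE’s actual methodology, just one way the structure could look:

```python
from datetime import date

def best_charity(world_state):
    """A parameterized recommendation (hypothetical scoring rule):
    the answer depends on the current state of the system, e.g. how
    neglected each charity is right now, not only on fixed features
    of the charities themselves."""
    return max(
        world_state["charities"],
        key=lambda c: c["impact_per_dollar"] * c["neglectedness"],
    )

# The evaluator-as-cache: publish one unparameterized answer
# ("donate to whatever we recommend") and invalidate it yearly,
# because donations themselves change the state of the system.
_cache = {"year": None, "answer": None}

def recommendation(world_state):
    if _cache["year"] != date.today().year:
        _cache["year"] = date.today().year
        _cache["answer"] = best_charity(world_state)
    return _cache["answer"]
```

On this picture, anyone acting on last year’s published answer is acting on an invalidated cache entry, which is exactly the failure mode described above.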

Within the movement, meat offsetting is one place where the messaging often falls short in this fashion. Usually the part of the message “But check in with us yearly to get your updated offsetting price” is missing. Get a million people to offset their consumption, and their offsetting will become a lot more expensive. But unless they check back every year, they’ll fail to notice.

80k is probably the next level here, since its advice is parameterized not only on the state of the system but also on the person seeking its advice. I wonder what the third level is.

Maybe EA is unusual among social movements in that it is based on parameterized messages, and maybe that’s something that makes it less accessible for people who are used to other social movements.

DM

To learn more about the difference between criteria of rightness and decision procedures, and how this difference entails a distinction between "single-level utilitarianism" and "multi-level utilitarianism", please see the section "Multi-level Utilitarianism Versus Single-level Utilitarianism" in Chapter 3: Elements and Types of Utilitarianism on utilitarianism.net.

This is also argued by Railton in the paper "Alienation, Consequentialism, and the Demands of Morality."

However, I think you're conflating act-based thinking with careful calculation somewhat. We might rely on rough, quick estimates and heuristics to guide our lives, but still have those defined by act utilitarian guidelines rather than by common sense ethics or intuition. So, as an example, when thinking about whether to be kind to someone, the act utilitarian makes a quick estimate rather than following a cumbersome calculation - but that quick estimate might still be to be unkind to them.

Newcomb's Trolley Problem
A fortune-teller with a so-far perfect record of predictions has placed either 0 or 100 persons in an opaque box some distance down the track. If the fortune-teller predicted you will pull the lever, killing the 5 people tied to the track, ze will have left the opaque box empty. If the fortune-teller predicted you will NOT pull the lever (avoiding the 5 people tied to the track but still hitting the box), ze will have placed 100 people into the opaque box. Since the fortune-teller has already made zir choice of how many people to put into the opaque box, is it more rational to pull the lever or not?

Accompanying image: https://photos.app.goo.gl/LvaVQye6tJBVqw2k8

Here, the act that fulfils the criterion of rightness is the opposite of the act you will take, whether you pull the lever or not (by the design of the thought experiment).

The decision procedure that does best by the criterion of rightness is to pull the lever (under a few further assumptions, such as: no quantum mixed strategies, and no other superbeings punishing you for having this decision procedure).
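To make this concrete, here is a minimal Python sketch of the body count under each disposition, assuming (as the thought experiment stipulates) that the predictor is never wrong and that deaths are the only disvalue at stake:

```python
def deaths(you_pull: bool) -> int:
    """With a perfect predictor, your disposition fixes the box's
    contents before you act: pullers face an empty box, non-pullers
    face a box holding 100 people."""
    people_in_box = 0 if you_pull else 100
    people_on_side_track = 5 if you_pull else 0  # pulling diverts into the 5
    return people_on_side_track + people_in_box

print(deaths(True))   # 5   -- a committed puller loses 5
print(deaths(False))  # 100 -- a committed non-puller loses 100
```

So pulling is the better disposition ex ante, even though, ex post, the criterion of rightness condemns whichever act you actually take: if you pulled, the box was empty and not pulling would have killed no one; if you didn't pull, pulling would have killed 5 rather than 100.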
