Many everyday actions have major but unforeseeable long-term consequences. Some argue that this fact poses a serious problem for consequentialist moral theories. We argue that the problem for non-consequentialists is greater still. Standard non-consequentialist constraints on doing harm combined with the long-run impacts of everyday actions entail, absurdly, that we should try to do as little as possible. We call this the Paralysis Argument. After laying out the argument, we consider and respond to a number of objections. We then suggest what we believe is the most promising response: to accept, in practice, a highly demanding morality of beneficence with a long-term focus.

Comments
Are these constraints on doing harm actually standard among non-consequentialists? I suspect they would go primarily for constraints on ex ante/foreseeable effects per person (already or in response to the paralysis argument), so that

  1. for each person and each extent of harm, the probability that you harm them to at least the given extent must be below some threshold, or
  2. for each person, the expected harm is below some threshold, or
  3. for each person, their expected value from the act is nonnegative (or close enough to 0), so that only acts which leave no one in particular worse off in expectation than "doing nothing" are permitted, i.e. weak ex ante Pareto improvements, maybe with a little bit of room.

The thresholds could also be soft and depend on the benefits, or act as a penalty within a consequentialist calculus, if you want to allow much more significant benefits to outweigh lesser harms.
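
To make the three options above concrete, here is one rough formalization (my notation, not the paper's), writing $H_i$ for the harm the act imposes on person $i$ relative to "doing nothing", $U_i(\cdot)$ for person $i$'s utility, and $t_s$, $t$, $\varepsilon$ for thresholds:

$$
\begin{aligned}
(1)\;& \Pr[H_i \ge s] \le t_s \quad \text{for each person } i \text{ and each harm level } s > 0,\\
(2)\;& \mathbb{E}[H_i] \le t \quad \text{for each person } i,\\
(3)\;& \mathbb{E}[U_i(\text{act})] \ge \mathbb{E}[U_i(\text{nothing})] - \varepsilon \quad \text{for each person } i.
\end{aligned}
$$

In the soft version, exceeding a bound would impose a penalty rather than an outright prohibition.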

It might get tricky with possible future people, or maybe the constraints only really apply in a person-affecting way. Building off 3 above, you could sum expected harms (already including probabilities of existence, which can vary between acts, or taking the difference of conditional expectations and weighting) across all actual and possible people, and use a threshold constraint that depends on the expected number of actual people, $\mathbb{E}[N]$. Where $u_i$ represents the individual utilities in the world in which you choose a given action and $v_i$ represents the utilities for "doing nothing",

  1. $\sum_i \max\{\mathbb{E}[v_i] - \mathbb{E}[u_i],\, 0\} \le f(\mathbb{E}[N])$, or
  2. $\sum_i \Pr[v_i - u_i \ge s] \le f(s, \mathbb{E}[N])$, for some (or all) values of $s$ such that $s > 0$,

for some threshold function $f$.

This could handle things like contributing too much to climate change (many possible people are ex ante worse off according to transworld identity) and preventing bad lives. With counterparts, extending transworld identity, you might be able to handle the nonidentity problem, too.
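
As a purely illustrative sketch of how such an aggregate constraint could be checked, here is a toy Monte Carlo model (the distributions, existence probabilities, and threshold value below are my own placeholders, not anything from the paper):

```python
import random

N_PEOPLE = 100     # fixed index set of possible people (toy assumption)
N_SAMPLES = 5_000  # Monte Carlo samples per act

def sample_outcome(act):
    """Toy model: (exists, utility) for one possible person under an act."""
    p_exist = 0.6 if act == "nothing" else 0.5  # existence probability varies with the act
    exists = random.random() < p_exist
    utility = random.gauss(1.0, 0.5) if exists else 0.0
    return exists, utility

def expected_values(act):
    """Per-person expected utility (nonexistence counted as 0) and expected number of actual people."""
    exp_u = [0.0] * N_PEOPLE
    exp_n = 0.0
    for _ in range(N_SAMPLES):
        for i in range(N_PEOPLE):
            exists, u = sample_outcome(act)
            exp_u[i] += u / N_SAMPLES
            exp_n += exists / N_SAMPLES
    return exp_u, exp_n

def satisfies_aggregate_constraint(act, t=0.05):
    """Sum of per-person expected harms (relative to 'nothing') must stay below
    a threshold scaling with the expected number of actual people under the act."""
    u_act, n_act = expected_values(act)
    v_base, _ = expected_values("nothing")
    total_expected_harm = sum(max(v - u, 0.0) for u, v in zip(u_act, v_base))
    return total_expected_harm <= t * n_act

print(satisfies_aggregate_constraint("something"))
```

The per-person expectations here already fold in the probability of existence (nonexistence is counted as utility 0), which is one way of doing the weighting mentioned above.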

Some constraints might also apply only to intentional or reckless/negligent acts, although we would be owed a precise definition of reckless/negligent.

One person's modus ponens is another's modus tollens.

Michael Huemer wrote something very similar in "In Praise of Passivity" ten years ago, but he bit the deontologist bullet: unless you are acting inside the space defined by explicit rights and duties, if you are uncertain of the outcomes of your action, you are doing wrong.
