Crossposted to my blog.  The formatting is a bit better there.  

Very often, our intuitions about principles will conflict with our intuitions about cases. Most people, for example, have the following three intuitions:

  1. The world would be improved if one person died and five others were saved.
  2. If some action makes the world better, it isn’t wrong.
  3. You shouldn’t kill one person to save five.

These are, however, inconsistent. In addition, as I’ve documented extensively, deontology—and more broadly any theory that believes in rights—requires giving up belief in dozens of plausible principles. Let me just list several:

  1. Perfectly moral beings shouldn’t hope you do the wrong thing.
  2. The fact that some act would give perfectly moral beings extra options doesn’t make it worse, all else equal.
  3. Doing the right thing won’t make things worse.
  4. If X is wrong, and Y is wrong conditional on X, then doing both X and Y is wrong.
  5. If some action is what you would do if you experienced everything experienced by everyone, or if you were behind the veil of ignorance, and the action is also approved of by the golden rule, then that action is right.
  6. If you do the wrong thing and you can undo it before it’s affected anyone, you should.
  7. If some action is wrong, and you can prevent it from happening at no cost, you should do so.
  8. It’s bad when more people are endangered by trolleys.
  9. Our reasons to take action are given by the things that really matter.
  10. If you should do A instead of C, then it will never be true that some third option will make it so that you should do C instead. In other words, options that you don’t take don’t affect the desirability of the ones you do take.
  11. If some action makes everyone much better off in expectation, you should take it.
  12. It’s wrong to perform lengthy sequences of immoral acts.
  13. If some action causes others to do the wrong things, that makes it worse as opposed to better.
  14. If some action is wrong, then if a person did it while sleepwalking, that would be bad.
  15. Perfectly moral beings being put in charge of more things isn’t bad.

There are, of course, various other principles that one could give up instead of these. But the ones I’ve listed are the most plausible. And this is only the tip of the iceberg when it comes to arguments against deontology; there are many more. In addition, the same types of general arguments are available against other things that critics of utilitarianism believe in—for example, those who believe in desert have to believe that what you deserve depends on how lucky you are and that it’s sometimes better to be worse off. But let’s use rights as a nice test case to see just how overwhelming the case for utilitarianism is.

The argument above is the cumulative case against deontic constraints. But let’s compare it to the cumulative case for deontic constraints, and then see which has more force. The argument for deontic constraints is that they’re the only way to make sense of the following intuitions:

  1. It’s wrong to kill a person and harvest her organs to save five people.
  2. You shouldn’t push people off bridges to stop trains from running over five people.
  3. You shouldn’t frame an innocent person to stop a mob from killing dozens of people.
  4. It would be very wrong to kill someone to get money, even if that money could then be used to save lots of lives.
  5. It’s wrong to steal from people, even if you get more pleasure from the stolen goods than they lost.
  6. It’s wrong to torture people even if you get more pleasure from torturing them than they get pain from being tortured.
  7. It’s wrong to bring people into existence, give them a good life, and then kill them.

So the question is which type of intuition we should trust more—the intuitions about cases or the intuitions about principles. If we should trust the intuitions about cases, then deontology probably beats utilitarianism. In contrast, utilitarianism utterly dominates deontology when it comes to intuitions about principles. But it seems like we have every reason in the world to trust the intuitions about principles over the intuitions about cases (for more on these points, see Huemer’s great article and the associated paper, Revisionary Intuitionism).

  1. Suppose that some intuition about a principle were true. We’d then expect the principle to be counterintuitive sometimes, because principles apply to lots of cases. If our moral intuitions are right 90% of the time, then if a principle applies to ten cases, we’d expect it to be counterintuitive in about one of them (a short calculation after this list makes the arithmetic explicit). Given that most of these principles apply to infinitely many cases, it’s utterly unsurprising that they’ll occasionally produce counterintuitive results. In contrast, if some case-specific judgment were right, it would be a bizarre, vast coincidence if it conflicted with a plausible principle. So we expect true principles to conflict with some case intuitions, but we don’t expect true case judgments to conflict with plausible principles. As a consequence, when cases conflict with principles, we should guess that the principle is true and the judgment about the case is false.
  2. We know that our intuitions constantly conflict. That’s why ethicists disagree and there are moral conflicts. In addition, lots of people historically have been very wrong about morality, judging, for example, that slavery is permissible. So we know that, at the very least, many of our judgments about cases must be wrong. In contrast, we don’t have the same evidence of persistent error about principles; I can’t think of a single intuition about a broad moral principle that has been unambiguously overturned. So trusting the deontological intuitions over the utilitarian ones is trusting the kind of intuitions we know are often wrong over the kind we don’t know to be wrong. You might object by pointing out that utilitarians disagree with lots of broad principles, such as the principle that people have rights. But we don’t intuit that principle itself—and if we do, it’s clearly debunkable, for reasons I’ll explain in a bit. Instead, we infer it from cases. But this means the reason to believe in rights comes from intuitions about cases, which we know are often wrong.
  3. We know that people’s moral beliefs are hugely dependent on their culture. The moral intuitions of people in Saudi Arabia differ dramatically from the intuitions of people in China, which differ dramatically from the intuitions of people in the U.S. So we have good reason to expect that what we think are genuine moral intuitions are often just reflections of culturally transmitted values. But none of the principles I gave has any plausible cultural explanation—there is no government document that declares “the fact that some act would give perfectly moral beings extra options doesn’t make it worse, all else equal.” No one is taught that from a young age. In contrast, norms about rights are hammered into us all from a young age—the rules taught to us from the time we are literally babies are deontological. We are told by our teachers, parents, and government documents that people have rights—and if you doubt that, it’s seen as a sign of corrupt character (I remember one debater, attempting to paint me as a terrible person, declaring that I “literally don’t think anyone has human rights”). We are told not to take the toys of others; we are not told to take their toys only when doing so is optimific. So it’s not hard to explain how we would come to have these intuitions even if they were bullshit. In contrast, there is no remotely plausible cultural account of how we came to have the intuitions that, if accepted, lead us inescapably to utilitarianism.
  4. We know that our moral beliefs are often hugely influenced by emotions. Emotions can plausibly explain a lot of our non-utilitarian beliefs—contemplating genuine homicide brings out a lot of emotion. In contrast, the intuition that “if it’s wrong to do A and wrong to do B after doing A, it’s wrong to do A and B” is not at all emotional. It seems true, but no one is passionate about it. So, very plausibly, unreliable emotional reactions can explain our non-utilitarian intuitions. Our desert-based intuitions come from our anger towards people who do evil; our rights-based intuitions come from the horror of things like murder. We have lots of evidence for this—we know, for example, that when people’s brains are damaged in ways that make them less emotional, they become almost six times more likely to support pushing the fat man off the bridge to stop the trolley.
  5. We know that humans have a tendency to overgeneralize principles that are usually true. Huemer gives a nice example involving the counterfactual theory of causation: a lot of people intuitively accept a simple counterfactual model of causation, even though it has clear counterexamples—I’ll provide the full quote in a footnote[1]. But this tendency can explain every single non-utilitarian intuition. It’s obviously almost always wrong to kill people. And so we infer the rule “it’s wrong to kill,” even in weird, gerrymandered scenarios where it’s not wrong to kill. Every single counterexample to utilitarianism seems to involve a case in which an almost universally applicable heuristic doesn’t apply. But why would we trust our intuitions in cases like that? If we think about murder a million times, and conclude it’s wrong all of those times, then we infer the rule “you shouldn’t murder,” even if there are weird scenarios where you should. That’s why, when you modify a lot of the deontological scenarios so that they are less like real-world cases, our intuitions about them go away. You might object that utilitarian principles can also be explained in this way. But the utilitarian principles aren’t just attempts to generalize our intuitions about cases—we have an independent intuition that they’re true, before considering any cases. The reason we think that perfectly moral beings shouldn’t hope you do the wrong thing is not that there are lots of cases where perfectly moral beings hope you do particular things and we also know that those things are right. Instead, it’s that we have an independent intuition that you should want people to do the right thing—but that means we don’t acquire the intuition from overgeneralizing, or from generalizing at all. The intuitions supporting utilitarianism don’t rely on judgments about cases, but instead on the inherent plausibility of the principles themselves.
  6. We know that our linguistic intuitions affect our moral intuitions: we think things are wrong partly because they sound wrong. But the moral intuitions supporting utilitarianism sound much less convincing than the moral intuitions supporting deontology. No one recoils in horror at the suggestion that giving a perfectly moral person extra options makes an action worse. In contrast, people do recoil in horror at the idea that it’s okay to kill and eat people if you get enough pleasure from it. Anscombe famously declared of the person who accepts that you should frame an innocent person to prevent a mob from killing several people, “I do not want to argue with him; he shows a corrupt mind.” So it’s unsurprising that so many people are non-utilitarians—the non-utilitarian intuitions sound convincing, while the utilitarian intuitions sound sort of boring and bureaucratic. No one cares much about whether “if it’s wrong to do A and then do B, then it’s wrong to do A and B.” When people are given moral dilemmas in a second language, they’re more likely to give utilitarian answers—this is perfectly explained by our non-utilitarian intuitions depending on non-truth-tracking linguistic intuitions.
  7. Some of our beliefs have evolutionary explanations. Caring more about those closer to us, not wanting to do things that would make us directly morally responsible, and caring more about friends and family are all easily evolutionarily debunked. But those intuitions are core deontological intuitions.
  8. False principles tend to have lots of counterexamples, many of them very clear. For example, the principle that you shouldn’t do things that would be bad if everyone did them implies that you shouldn’t be the first to cut a cake, that you shouldn’t move to a secluded forest, and that it’s wrong to kiss your spouse (it would be terrible if everyone kissed your spouse). Often, you can derive straightforward contradictions from false principles. So when there is a principle without a clear counterexample—as is true of these principles—you should give it even more deference.
  9. Our moral beliefs are subject to various biases. But it seems our deontological intuitions are uniquely subject to debunking. For example, humans are subject to status quo bias—an irrational tendency to maintain the status quo. But deontological norms instruct us to maintain the status quo, even when diverging from it would be better. So we’d expect to be biased to believe in deontic norms even if they weren’t real.
  10. Deontological norms look like the norms we’d expect people to adopt if they were rationalizing a policy of minimizing their own blamability. Our intuition that you shouldn’t kill one person to save five can be explained by the fact that you’re blamable if you kill the one but not blamable if you merely fail to act—after all, everyone is failing to act all the time. But if our moral beliefs stem from rationalizing a course of action for which no one could criticize us, then they can be debunked.
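
To make the arithmetic in point 1 explicit, here is a minimal sketch. It assumes, purely for illustration, that each case intuition is independently right with the 90% reliability figure used above (call it r), and that a principle bears on n cases:

$$\mathbb{E}[\text{counterintuitive cases}] = n(1-r) = 10 \times 0.1 = 1, \qquad P(\text{at least one}) = 1 - r^{n} = 1 - 0.9^{10} \approx 0.65.$$

As n grows without bound, $1 - r^{n}$ approaches 1: a true principle covering arbitrarily many cases is all but guaranteed to clash with some case intuition, while a single true case judgment has no comparable reason to clash with a plausible principle.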

So it seems we have overwhelming evidence against the trustworthiness of our deontological intuitions. It’s not hard for a utilitarian to explain why so many intuitions favor deontology on the hypothesis that utilitarianism is true. In contrast, there is no plausible explanation of why we would come to have so many intuitions that favor utilitarianism if it were false. The deontologist seems to have to suppose that we make errors over and over again, with no plausible explanation, and that the intuitions in error are precisely the ones that are most trustworthy. This is miraculously improbable, and gives us very good reason to give up deontology.

Edit: After I wrote this, I realized it greatly resembled Richard Y Chappell’s master argument. They’re similar, but I think my argument is more sweeping and describes how the entire debate between utilitarians and their critics should be resolved. I also disagree with his master argument, but that’s a story for another day.

 


 

  1. ^

    “For example, the following generalization seems initially plausible:

    (C) For any events X and Y, if X was the cause of Y, then if X had not occurred, Y would not have occurred.

    But now consider the following case: The Preemption Case: Two mob assassins, Lefty and Righty, have been hired to assassinate FBI informant Stoolie. As it happens, both of them get Stoolie in their sights at about the same time, and both fire their rifles. Either shot would be sufficient to kill Stoolie. Lefty’s bullet, however, reaches Stoolie first; consequently, Lefty’s shot is the one that actually causes Stoolie’s death. However, if Lefty had not fired, Stoolie would still have died, because Righty’s bullet would have killed him.

    This shows that there can be a case in which X is the cause of Y, but if X had not occurred, Y would still have occurred.”

     


     

Comments

Very interesting discussion. I think something that (understandably, as your thesis here is narrow) should be considered is how ethical theories outside of utilitarianism and deontology fare. Although you provided a fairly strong argument against deontology, there was no argument for uniquely utilitarianism, as many of the intuitions you invoked against deontology could also apply to most other ethical theories. Perhaps a virtue ethical, care ethical, or communitarian approach could accommodate the shortcomings of deontology as well as (or maybe better than!) utilitarianism.

Thank you for the post!

If we reject any side constraints, which is what my argument supports, then we get something very near utilitarianism.

Correct me if I'm wrong, but it doesn't seem like virtue ethics or care ethics relies on side constraints; they seem uniquely deontic. I'm not sure that rejecting deontology implies a form of consequentialism, as virtue or feminist ethics are still viable at that point.

They will still endorse the same things as side-constraints views do (e.g., not killing one to save five).

I'm not fully sure to what extent this piece means to argue for i) utilitarianism beats deontology, versus ii) utilitarianism is the correct moral theory or at least close to the correct moral theory. (At times it felt to me like the former, and at times the latter, though it's very possible that I did not read closely enough.)

To the extent that ii is the intended conclusion, I think this is overconfident on a couple of counts. Firstly, one’s all-things-considered view should probably take into account that only 30% of academic philosophers are consequentialists (see Bourget & Chalmers, 2020, p. 8; note that consequentialism is a superset of utilitarianism). Secondly, reasons relating to infinite ethics. As Joe Carlsmith (2022) puts it:

I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure.

But I think infinite ethics changes this picture. As I mentioned above: in the land of the infinite, the bullet-biting utilitarian train runs out of track. You have to get out and wander blindly. The issue isn’t that you’ve become fanatical about infinities: that’s a bullet, like the others, that you’re willing to bite. The issue is that once you’ve resolved to be 100% obsessed with infinities, you don’t know how to do it. Your old thing (e.g., “just sum up the pleasure vs. pain”) doesn’t make sense in infinite contexts, so your old trick – just biting whatever bullets your old thing says to bite – doesn’t work (or it leads to horrific bullets, like trading Heaven + Speck for Hell + Lollypop, plus a tiny chance of the lizard). And when you start trying to craft a new version of your old thing, you run headlong into Pareto-violations, incompleteness, order-dependence, spatio-temporal sensitivities, appeals to persons as fundamental units of concern, and the rest. In this sense, you start having problems you thought you transcended – problems like the problems the other people [e.g., deontologists] had. You start having to rebuild yourself on new and jankier foundations.

All in all, I currently think of infinite ethics as a lesson in humility: humility about how far standard ethical theory extends; humility about [...] how little we might have seen or understood.

It was trying to argue for (ii). I think that if we give up any side constraints, which is what my piece argued for, we get something very near utilitarianism, or at the very least consequentialism. Infinitarian ethics is everyone's problem.

I respectfully disagree. Firstly, that is by no means the last word on infinite ethics (see papers by Manheim and Sandberg, and a more recent paper out of the Global Priorities Institute). Prematurely abandoning utilitarianism because of infinities is a bit like (obviously the analogy is not perfect) abandoning the general theory of relativity because it can’t deal with infinities.

Secondly, we should act as if we are in a finite world: it would be seen as terribly callous of someone not to have relieved the suffering of others if it turned out we were in a finite universe all along. It is telling that virtually no one has substantively changed their actions as a result of infinite ethics. This is sensible and prudent.

Thirdly, in an infinite world, we should understand that utilitarianism is not about maximising some abstract utility function or number in the sky, but about improving the conscious experiences of sentient beings. Infinities don’t change the fact that I can reduce the suffering of the person in front of me, or the sentient being on the other side of the world, or the fact that this is good for them. And there are good practical, utilitarian reasons not to spend one’s time focusing on other potential worlds.

Thank you for engaging. Respectfully, however, I’m not compelled by your response.

Prematurely abandoning utilitarianism because of infinities is a bit like [...].

I’m not saying that we should prematurely abandon utilitarianism (though perhaps I did not make this clear in my above comment). I’m saying that we do not have an “ultimate argument” for utilitarianism at present, and that there’s a good chance that further reflection on known unknowns such as infinite ethics will reveal that our current conception of utilitarianism—in so far as we’re putting it forward as a “correct moral theory” candidate—is non-trivially flawed.

Secondly, we should act as if we are in a finite world [...] This is sensible and prudent.

I disagree. I think we should act to do the most good, and this may involve, for example, evidentially cooperating with other civilizations across the potentially infinite universe/multiverse. Your sentence “it would be seen as terribly callous of someone not to have relieved the suffering of others if it turned out we were in a finite universe all along” seems to me to be claiming that we should abandon expected value calculus (or that we should set our credence on the universe/multiverse being infinite to zero, notwithstanding the possibility that we could reduce suffering by a greater amount by having and acting on a best guess credence), which I view as incorrect.

Thirdly, [...] Infinities don’t change the fact that I can reduce the suffering of the person in front of me, or the sentient being on the other side of the world

I believe this claim falls foul of the Pareto improvement plus agent-neutrality impossibility result in infinite ethics, once you try to decide on whose suffering to reduce. (Another objection some—e.g., Bostrom—might make is that if there is infinite total suffering, then reducing suffering by a finite amount does nothing to reduce total suffering. But I'm personally less convinced by this flavor of objection.)

Thanks for your response. It seems we disagree on much less than I had initially assumed. My response was mostly intended for someone who has prematurely become a nihilist (as apparently happened to one of Carlsmith’s friends), whereas you remain committed to doing the most good. And I was mainly addressing the last flavour of objection you mention.
