
Biting bullets (accepting a moral conclusion that is intuitively unpleasant because one feels logically compelled to) is central to many people's experiences in EA. In fact, bullet-biting often resembles a contest to demonstrate one's commitment to EA. I have even heard someone brag of having “teeth of steel for biting bullets”.

However, the compulsion to bite bullets is at odds with moral anti-realism, another popular belief among EAs. If there is no objective morality, then there is no reason to prefer a principled, fundamentalist morality to an ad hoc, intuitionist morality. In fact, for the following reasons, we should expect bullet-biting not to accord with most people's experience of morality.

 

Is morality law-like?

 

The paper “Is Life Law-Like?” criticizes the quest of biologists to derive laws for their observations, pointing out that evolution leads to ad hoc solutions and that biological processes occur probabilistically. So is morality closer to physics or biology? An empirical approach to morality would view it as stemming from evolutionary psychology, social structures, and historical serendipity. Psychology, sociology, and history are among the few fields even less law-like than biology. Attempts to derive laws for morality are largely confined to recent Western history.

 

Scott Alexander argues for “high-energy ethics”, the idea that only through extreme thought experiments and edge cases can we discern the true nature of morality. The allusion to physics is no accident: it is the only discipline that thrives in such extremes. Applying such an approach to psychology would be absurd. For example, some psychologists believe that tall men are viewed as more attractive. Imagine a psychologist who tries to disprove this by engineering a nine-foot-tall man, who turns out not to be viewed as more attractive. Of course, she hasn't refuted the hypothesis; she has only demonstrated that observations in the social sciences are context-dependent.

 

Optimizing for multiple values

 

A popular bullet to bite is an argument of the form “X is theft/rape/murder”, where X is an act that is widely believed to be morally acceptable but that has superficial similarity to a serious crime. This has been called the worst argument in the world. The reason that it's naive and often rejected is that serious crimes are typically viewed as immoral for multiple reasons. Murder is immoral because, among other reasons, it causes fear and suffering through the act itself, the victim had a desire to continue living, the victim would have had future positive experiences, and the death brings grief to family and friends. X typically has one or several of these characteristics but not all, so it is commonly judged to be not as bad as murder.
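
One way to see the structure of this reply is a hypothetical toy model (the feature names and integer weights below are made up for illustration, not a real moral theory): treat each wrong-making feature as contributing separately to an act's badness, so an act that shares only some of murder's features scores far lower.

```python
# Hypothetical toy model: an act's badness as a sum of wrong-making
# features. The features come from the murder example above; the
# integer weights are purely illustrative.

FEATURES = {
    "fear_and_suffering": 3,
    "thwarted_desire_to_live": 3,
    "lost_future_experiences": 2,
    "grief_to_others": 2,
}

def badness(act_features):
    """Sum the weights of the wrong-making features an act has."""
    return sum(FEATURES[f] for f in act_features)

murder = set(FEATURES)           # has every wrong-making feature
x = {"lost_future_experiences"}  # superficially similar, one feature

print(badness(murder))  # 10
print(badness(x))       # 2: judged much less bad despite the analogy
```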

 

Morality becomes even more complex when it involves competing values. There is no inconsistency in believing that airports should X-ray luggage to reduce security risks and simultaneously believing that widespread surveillance of citizens is unjustified. One can value both security and privacy and believe that in some cases one outweighs the other. This point is often lost in bullet-biting morality, which views “inconsistency” as a product of hypocrisy and cowardice.

 

Optimizing for a single value when one has multiple values will almost always lead to the sacrifice of some of them. Paying workers by the hour leads to slow work; paying by the task leads to shoddy work. Similarly, optimizing for a naive definition of utility will lead to paperclipping. For example, if we define utility as the satisfaction of preferences, we may end up with a universe tiled with thermostats. (Each thermostat is “happy” because it is programmed to “want” the temperature of the cosmic background radiation.)
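
As a minimal sketch of this failure mode (the scoring rule, names, and numbers are all made up for illustration): if a universe is scored by counting systems whose actual state matches the state they “want”, a naive optimizer prefers tiling it with satisfied thermostats over sustaining a few humans with mostly unmet desires.

```python
# Toy sketch of a naive "preference satisfaction" utility: score a
# universe by counting systems whose actual state matches the state
# they "want". Everything here is illustrative.

def naive_utility(universe):
    """Count systems whose desired state equals their actual state."""
    return sum(desired == actual for desired, actual in universe)

# A few humans with rich, mostly unmet desires...
humans = [("world_peace", "status_quo"),
          ("good_health", "illness"),
          ("good_health", "good_health")]

# ...versus the same matter converted into thermostats that "want" the
# cosmic background temperature (~2.7 K) they already sit at.
thermostats = [(2.7, 2.7)] * 1000

print(naive_utility(humans))       # 1
print(naive_utility(thermostats))  # 1000: tiling wins under the naive score
```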

 

Conclusion

 

Moral anti-realism doesn't like neat conclusions: though there's no reason to favor biting bullets, there's no reason to disfavor it either. However, I have two pragmatic observations. First, the pressure to bite bullets (and the implication of irrationality if one does not) can be off-putting to some EAs. Second, it may be easier to maintain commitment to beliefs that are sincerely held than to conclusions you grudgingly accept because you see no other choice.

Comments

One question is what we want "morality" to refer to under anti-realism. For me, what seems important and action-guiding is what I want to do in life, so personally I think of normative ethics as "What is my goal?".

Under this interpretation, the difference between biting bullets and not biting them comes down to how much people care about their theories being elegant, simple, and parsimonious, versus how much they care about tracking their intuitions as closely as possible. You mention two good reasons for favoring a more intuition-tracking approach.

Alternatively, why might some people still want to bite bullets? Firstly, no one wants to accept a view that seems unacceptable. Introspectively, biting a bullet can feel "right" if I am convinced that the alternatives feel worse and if I realize that the aversion-generating intuitions are not intuitions that my rational self-image would endorse. For instance, I might feel quite uncomfortable with the thought of sending all my money to people far away while neglecting poor people in my community. I can accept this feeling as a sign that community matters intrinsically to me, i.e. that I care (somewhat) more strongly about the people close to me. Or I could bite the bullet and label "preference for in-group" as a “moral bias” – biased in relation to what I want my life-goals to be about. Perhaps, upon reflection, I decide that some moral intuitions matter more fundamentally to me, say, for instance, because I want to live for something that is “altruistic”/"universalizable" from a perspective like Harsanyi’s Veil of Ignorance. Given this fundamental assumption, I’ll be happy to ignore agent-relative moral intuitions. Of course, it isn’t wrong to end up with a mix of both ideas if the intuition “people in my community really matter more to me!” is just as strong as the intuition that you want your goal to work behind a veil of ignorance.

On LessWrong, people often point out that human values are complex, and that those who bite too many bullets are making a mistake. I disagree. What is complex are human moral intuitions. Values, by which I mean "goals" or "terminal values", are chosen, not discovered. (Consequentialist goals are new and weird and hard for humans to have, so why would they be discoverable in a straightforward manner from all the stuff we start out with?) And just because our intuitions are complex – and sometimes totally contradict each other – doesn't mean that we're forced to choose goals that look the same. Likewise, I think people who think some form of utilitarianism must be the thing are making a mistake as well.

If values are chosen, not discovered, then how is the choice of values made?

Do you think the choice of values is made, even partially, even implicitly, in a way that involves something that fits the loose definition of a value – like "I want my values to be elegant when described in English" or "I want my values to match my pre-theoretic intuitions about the kinds of cases that I am likely to encounter"? Or do you think that the choice of values is made in some other way?

I too think that values are chosen, but I think that the choice involves implicit appeal to "deeper" values. These deeper values are not themselves chosen, on pain of infinite regress. And I think the case can be made that these deeper values are complex, at least for most people.

Sorry for the late reply. Good question. I would be more inclined to call it a "mechanism" rather than a (meta-)value. You're right, there has to be something that isn't chosen. Introspectively, it feels to me as though I'm concerned about my self-image as a moral/altruistic person, which is what drove me to hold the values I have. This is highly speculative, but perhaps "having a self-image as x" could be responsible for how people pick consequentialist goals?

However, the compulsion to bite bullets is at odds with moral anti-realism, another popular belief among EAs.

Well, questions of how one ought to act are about ethics, while questions about the nature of morality are about meta-ethics. Meta-ethical principles can inform ethics, but only indirectly. An anti-realist can still have reasons to affirm a consistent view of morality, and a realist can still refuse to accept demanding forms of morality.

So is morality closer to physics or biology? An empirical approach to morality would view it as stemming from evolutionary psychology, social structures, and historical serendipity. Psychology, sociology, and history are some of the only fields even less law-like than biology.

Empiricist approaches to meta-ethics define morality as something to be learned from human experience. This is notably different from the scientific methodology applied to fields like psychology and biology, whether law-based or not. You generally can't determine any facts about morality by studying its psychology, genealogy, and history in society, since those concern how people act, while moral philosophy concerns how they ought to act. Some would argue for ways to derive normative conclusions from the social sciences, although I believe those ideas are generally limited and contentious. Nevertheless, the fields' scopes and methodologies are entirely different, so I don't think you can draw meaningful parallels.

A popular bullet to bite is an argument of the form “X is theft/rape/murder”, where X is an act that is widely believed to be morally acceptable but that has superficial similarity to a serious crime.

I'm not aware of this being common. The LessWrong link doesn't seem to be relevant to legitimate moral philosophy. Can you give some examples?

Typically we are dealing with issues where the conclusions of a moral principle are highly counterintuitive. This can take many forms.

Morality becomes even more complex when it involves competing values. There is no inconsistency in believing that airports should X-ray luggage to reduce security risks and simultaneously believing that widespread surveillance of citizens is unjustified. One can value both security and privacy and believe that in some cases one outweighs the other. This point is often lost in bullet-biting morality, which views “inconsistency” as a product of hypocrisy and cowardice.

This is basically true (except that inconsistency is viewed as irrational or wrong, not merely slandered or denigrated), although typical utilitarian approaches also lead to similar conclusions about things with instrumental value such as privacy and security, while optimizing for multiple values can still lead to highly counterintuitive moral conclusions. Almost any aggregating ethic will have this feature. If we optimize for both autonomy and well-being, for instance, I may still find it morally obligatory to do overly demanding things to maximize those values, and I may find cases where causing serious harm to one is worth the benefits to the other.

You can add more and more values to patch the holes and build a really complicated multivariate utility function which might end up producing normal outputs, but at this point I would question why you're optimizing at all, when it looks like what you really want to do is use an intuitionist approach.
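
To make this concrete, here is a minimal sketch (the two values, weights, and 10-unit budget are all made up for illustration): a linear weighted sum of values is maximized at a corner, zeroing out the lower-weighted value, and "patching" it with diminishing-returns terms restores a mixed optimum only by hand-tuning the function to reproduce the intuitive judgment you wanted all along.

```python
import math

# Hypothetical sketch: optimizing a multivariate utility over two values.
# Weights, log terms, and the effort budget are illustrative.

def linear_utility(security, privacy, w_sec=1.1, w_priv=1.0):
    """Weighted sum: the optimum is a corner unless the weights tie exactly."""
    return w_sec * security + w_priv * privacy

# Split a fixed budget of 10 effort units between the two values.
allocations = [(s, 10 - s) for s in range(11)]

print(max(allocations, key=lambda a: linear_utility(*a)))   # (10, 0): privacy zeroed

def patched_utility(security, privacy):
    """Add diminishing returns (log terms) to force an interior optimum."""
    return 1.1 * math.log(1 + security) + math.log(1 + privacy)

print(max(allocations, key=lambda a: patched_utility(*a)))  # (5, 5): mixed, by design
```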

Similarly, optimizing for naive definitions of utility will lead to paperclipping. For example, if we believe in one definition of utility, we may end up with a universe tiled with thermostats.

Yes, although most people, moral realists included, would affirm a fundamental difference between phenomenal consciousness and movements of simple systems.

Moral anti-realism doesn't like neat conclusions: though there's no reason to favor biting bullets, there's no reason to disfavor it either.

This is sort of true but, again, it is because meta-ethics doesn't have too much to say about ethics in general. Moral realism also doesn't generally favor bullet biting or not bullet biting: there are tons of moral realists who favor intuitive accounts of morality or other 'softer' approaches. Moral principles don't have to be hard and inflexible; they could presumably be spongy and malleable and fuzzy, while still being true.

The rationale for the anti-realist to decide how to face counterintuitive moral cases is going to depend on what their reasons are for affirming morality in the first place. Those reasons may or may not be sufficient to convince them to bite bullets, just as is the case for the moral realist.

"You generally can't determine any facts about morality by studying its psychology, genealogy and history in society, as those refer to how people act and moral philosophy refers to how they ought to act."

Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical. What do societies believe about morality? Why do we believe these things (from a social and evolutionary perspective)? We can't derive normative truth from these questions, but they can still be useful.

"An anti-realist can still have reasons to affirm a consistent view of morality"

Consistent is not the same as principled. Of course I believe in internal consistency. But principled morality is no more rational than unprincipled morality.

"I'm not aware of this being common. The LessWrong link doesn't seem to be relevant to legitimate moral philosophy. Can you give some examples?"

Some EAs argue that killing animals for meat is the moral equivalent of murder. There are other examples outside EA: abortion is murder, taxation is theft. Ask tumblr what currently counts as rape... Just because some of these views aren't taken seriously by moral philosophers doesn't mean they aren't influential and shouldn't be engaged with.

"You can add more and more values to patch the holes and build a really complicated multivariate utility function which might end up producing normal outputs, but at this point I would question why you're optimizing at all, when it looks like what you really want to do is use an intuitionist approach."

Correct, I don't think utility function approaches are any better than avoiding utility functions. However, people have many moral values, and under normal circumstances these may approximate utility functions.

"Yes, although most people, moral realists included, would affirm a fundamental difference between phenomenal consciousness and movements of simple systems."

Consequentialism would require building a definition of consciousness into the utility function. Many definitions of consciousness, such as "complexity" or "integration", would fall apart in extreme cases.
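
For example, here is a toy sketch of how such a definition breaks down in an extreme case (using compressibility as a crude stand-in for "complexity"; the measure and data are purely illustrative): pure random noise scores as more "complex", and hence more "conscious", than a structured, brain-like signal.

```python
import random
import zlib

# Toy sketch: "complexity" operationalized as incompressibility, a crude
# stand-in for formal complexity or integration measures.

def complexity_score(data: bytes) -> float:
    """Incompressibility ratio: compressed size / original size."""
    return len(zlib.compress(data)) / len(data)

structured = b"neuron fires, neuron rests, " * 100          # regular, brain-like signal
noise = bytes(random.getrandbits(8) for _ in range(2800))   # pure static

print(complexity_score(structured))  # low: highly compressible
print(complexity_score(noise))       # ~1.0: random static scores as "most conscious"
```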

Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical.

Anti-realists deny that there are true moral claims, but they don't think morality is fundamentally confused. There have been many anti-realist philosophers who have proposed some form of ethics: R.M. Hare, J.L. Mackie, the existentialists, etc.

Consistent is not the same as principled. Of course I believe in internal consistency. But principled morality is no more rational than unprincipled morality.

What exactly do you mean by "principled" in this case?

Some EAs argue that killing animals for meat is the moral equivalent of murder. There are other examples outside EA: abortion is murder, taxation is theft.

I think many, hopefully most, of the people who say that have actual moral reasons for saying it. There is no fallacy in claiming a moral equivalency if you base it on actual reasons to believe that the two acts are morally just as bad: it may in fact be the case that there is no significant moral difference between killing animals and killing people. The same goes for those who claim that abortion is murder, taxation is theft, etc. We should be challenged to think about whether, say, abortion is morally bad in the same way that murder is (and if not, why not), because people's beliefs are sometimes inconsistent, and because it very well may be the case that it is. Of course, these kinds of arguments should be developed further rather than shortened into (fallacious) assertions. However, I don't see this argument structure as central to the issue of counterintuitive moral conclusions.

Consequentialism would require building a definition of consciousness into the utility function. Many definitions of consciousness, such as "complexity" or "integration", would fall apart in extreme cases.

I don't think those are nearly good enough definitions of consciousness either. The consequentialist is usually concerned with sentience – whether there is "something that it's like to be" a particular entity. If we decide that there is something that it's like to be a simple system, then we will value its experiences, although in this case it's no longer so counterintuitive, because we can imagine what it's like to be a simple system and can empathize with it. While it's difficult to find a formal definition of consciousness, and also very difficult to determine what sorts of physical substances and structures are responsible for consciousness, we do have a very clear idea in our heads of what it means to be conscious, and we can easily conceive of the difference between something that is conscious and something that is physically identical but not conscious (e.g. a p-zombie).


Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical. What do societies believe about morality? Why do we believe these things (from a social and evolutionary perspective)? We can't derive normative truth from these questions, but they can still be useful.

That is not true in the slightest. If I reject that social action can be placed within a scheme of values with absolute standing, I suffer no inconsistency in adopting non-absolutist forms of valuation. Thucydides, Vico, Machiavelli, Marx, Nietzsche, Williams, and Foucault were not moral realists, yet none of them refrained from evaluative judgement. Evaluative thought is, after all, an inescapable part of human life. How do you suppose that one would fail to perform it?

I think you can be a moral realist and avoid biting bullets too. If you accept some form of moral casuistry, you needn't have any moral laws at all, just a collection of judgements. But you can still be perfectly realist. Of course, you are still right in this case: anti-realists don't need to bite bullets either. I have started wondering recently whether I should be as neutral about biting bullets as I was (I just saw them as a way to ensure consistency). Really, though, they are occasions when intuitive morality, which is the ultimate basis of my moral reasoning, disagrees with my conclusions. So perhaps, as you seem to imply, biting bullets should be viewed negatively (hence some EAs find it off-putting: they already view it negatively).

Even if people don't believe that morality exists, a perfectly rational agent would still have consistent preferences. That said, there is an argument for epistemic learned helplessness (made by Scott Alexander on his old blog, http://squid314.livejournal.com/350090.html?page=): that is, for not always updating according to logic.
