This piece was co-authored by Robert Farquharson and myself in response to what we believe is a key misconception about moral relativism, especially in the context of Effective Altruism.

Cross-posted from my blog here.

Introduction
Peter Singer’s practical ethics argues that we have both a remarkable opportunity and a moral obligation to help those less geopolitically fortunate than ourselves. This has formed the basis for the Effective Altruism (EA) movement. What makes this model of philanthropy different to previous versions is the focus on effectiveness: EA takes a rigorously quantitative approach to assessing and engaging in ethical behaviour. The results have been more than interesting. As it turns out, saving a life or seriously reducing global poverty (see Peter Singer’s The Life You Can Save) is surprisingly within our reach, giving us cause for hope. Our philanthropic missteps, however, are lamentably all too human. There are many cases of well-meant charitable causes that have, upon analysis, been found to cause more harm than good. By approaching these moral problems with the clarity and rigour of the scientific method, EA combines “the heart with the head”. This better equips us to avoid future missteps, and to maximise the positive outcomes we can achieve. As the argument goes, we can and should do the most good possible.

EA has not been embraced without criticism. A prominent counterargument is that EA has no grounds to comment on an objective view of ethics. Science is about measurement, and morality is about values, so we commonly perceive these as independent realms. After all, how can we measure morality? What facts about the world tell us what we should value? This is particularly problematic because, often enough, even members within the EA community itself will concede this point. People who are already dedicated to improving human well-being according to the best available evidence are unwilling to defend the moral objectivity of such a cause, preferring instead some version of moral relativism. Perhaps we can measure something about what we think about morality, but who’s to say that it can be universalised? Many of us are resigned to this kind of subjective, context-sensitive view of morality, particularly when it comes to cross-cultural claims. What seems right to me may not seem so to you, but that’s okay. What’s right in our culture isn’t always going to be right in another culture, and it would be presumptuous at best to impose that view on others. Or so the argument goes.

The aim of this piece is to discuss this line of criticism. First, the misconception that science has nothing to say about morality will be addressed. There are moral facts to be observed; these facts are just psychological and physical facts about the world and the conscious creatures within it. Second, a double standard that is often applied to potential objective claims about morality will be highlighted. A common rebuttal is that a science of morality can’t be fundamentally based on an assumption, lest it become ‘subjective’ after all. However, most if not all other scientific domains operate in just such a way, and yet their philosophical and scientific credentials are never in doubt. Being objective is not the same as being absolute, self-justifying, or unchanging.

It is our view that EA’s integrity as a movement relies precisely on making such objective claims to moral facts, e.g. that not all charities are equal. As a fledgling but promising form of moral science, it is thus crucial for EA to clear the air on the superiority and validity of the movement’s theoretical commitments. Responding to these criticisms could have implications for our understanding of, and discourse surrounding, morality beyond just the EA community.

We will note here that, as a community, focusing on the ethical obligation rather than the opportunity that EA presents is potentially not the most effective way to encourage people to become effective altruists, but it is an important concern, and one we have decided to address. This piece is by no means an introduction to EA; if you haven’t heard of it before, I’d recommend one of the many EA books out there, or this TED Talk by Peter Singer.

The Argument from EA

In The Life You Can Save, Peter Singer proposes three premises:

1) Suffering and death from lack of food, shelter and medical care are bad.

2) If it is in your power to prevent something bad from happening, it is wrong not to do so.

3) By donating to aid agencies, you can prevent suffering and death from lack of food, shelter and medical care, at little cost to yourself.

If you agree with these premises, then Singer’s conclusion is that by not donating to aid agencies, you are doing something wrong. At the very least, you’re missing an opportunity to do something right. With the logic broken down into these steps, it is hard to argue with, or to claim that individuals can have different ‘versions’ of morality that are equally valid. However, while the third premise seems straightforward to most people, the first two are tougher to swallow. Accepting the third leads to a simple matter of calculus: some aid agencies just don’t prevent suffering and death as well as others. The numbers sell most people on the effectiveness clause. But why is altruism good, and why must I help? Isn’t suffering just subjective?

The Measure of Morality: Suffering, Well-being, and How to Find Them

Philosopher and neuroscientist Sam Harris invites us to imagine the ‘Worst Possible World’ (WPW). The WPW is a hellish place in which every conscious creature experiences the worst possible misery it can for as long as it can. Think of burning alive, except that it is the only conscious experience you will ever have. This is a bad state of affairs, if the word ‘bad’ means anything at all. The pain and misery being suffered in the WPW, and conversely the happiness and flourishing that we relish in this world, are realised as experiences in consciousness. This makes them subjective in one sense, in that they can’t exist in the absence of a conscious subject ‘feeling’ them. However, this doesn’t make judgements about their character merely subjective, in the sense of being always ‘relative’.

If you think that pain, misery, and suffering are merely subjective tastes, and are unsure why you shouldn’t value those states instead of things like love, laughter, and satiety, you’re thoroughly confused. Conscious experience is, by its very nature, already and immediately coloured with a certain kind of character. If you’re not sure whether a child dying famished and diseased is having a conscious experience on the negative end of the spectrum somewhere, you’re not playing by the same rules. The only philosophical assumption you need to make here is that suffering the worst possible misery you can for as long as you can is, indeed, the minimum of conscious experience. The starving child is hovering somewhere near this minimum.
Once we accept this basic fact about the nature of conscious experience, we can honestly admit that any changes that lead us away from the WPW are what we mean when we say ‘morally good’. Whatever reprieve we can offer the inhabitants of the WPW, however small, would be perhaps the clearest case of moral behaviour there is. If it’s within your power, at little to no cost to you, merely offering a 10-minute window of painless respite for the immiserated sufferers of the WPW is something you ought to do. Singer’s drowning child example leans on the same principle. To leave the dying children of the developing world in such a state of persistent misery, when we could easily do otherwise, is a moral failing.

So, notions of good and bad, right and wrong, have everything to do with the changing character of experience in conscious creatures. Every moral judgement comes down to how much and in what direction an action changes the conscious experience of some agent. Bringing someone closer to the worst possible misery they can experience is movement in the wrong direction. Again, if you’re unsure about this, try and find a conscious merit in dying from forced starvation.

With changes in consciousness as our basis, we can begin measuring those changes. If we know anything about consciousness at all, we know that it correlates meaningfully with brain states. Brain states are just a kind of physical state, though, and thus completely amenable to objective inquiry. Some things lead to the brain states which cause experiences of pain, misery, and suffering, while others lead to the brain states that correlate with euphoria, heightened self-esteem, and the rest of the positive emotions we all crave. Importantly, it’s possible to measure these causal relationships scientifically. That is, we can measure how some actions or social constructs regularly and reliably move conscious experience in particular directions. Our goal is to move towards well-being, loosely defined as having much of one’s experience situated on the positive end of the conscious spectrum.

To sum up: if our notions of morality are about experiential changes in conscious creatures, and the manner in which those changes occur is amenable to scientific inquiry, then we can measure morality just like any other physical quantity. Moreover, EAs regularly do this. There is an important distinction to make between the character of the experiences themselves, and the things that reliably lead to them. A dangerous kind of moral relativism sneaks in when we confuse these two things. There are many ways to move our conscious experiences, but there will be a fact of the matter about which things move them in what direction. Science has, in fact, a lot to say about morality.

The Big Lebowski Response

While most people are convinced by this consequentialist notion of changes in conscious experience, there is a recalcitrant meme that is often cited in reply. Just like the scene in ‘The Big Lebowski’ where the Dude says, “Yeah, well, you know…that’s just, like, your opinion, man”, we often hear that it’s all just relative. Who’s to say we should value experiences like love, euphoria, compassion, and all the rest? Alternatively, who’s to say that my versions of justice and fairness are the same as yours? And, finally, what gives you the right to enforce your version on me? Science is perceived as not just silent on these issues, but in principle incapable of addressing them.

To quote a critic, “The point is this: Effective Altruism, while very welcome, is not an “objective” look at the value of philanthropy; instead it is a method replete with philosophical assumptions. And that’s fine, so long as everyone realizes it”. The problem, it seems, is that nothing ‘objective’ can be based on a ‘philosophical assumption’. If it isn’t truly objective, it’s just your opinion, and therefore lacks any normative force that actual sciences would have. This is plainly false.

Everyday Empiricism

To address the first point, regarding the relativity of subjective experience, it’s easy to see how this form of response would be absurd if you transposed it into any other scientific domain. For example, take physical health and medical science. There is nothing in modern health science that can tell you, with absolute, self-justifying ‘scientific force’, why you ought to value being alive or free of disease. However, we just do value these things. At the base of medical science are the ‘philosophical assumptions’ that being alive is better than being dead, and consequently, that the goal of medicine is to mitigate and prevent things that cause premature death. Once we all accept this, we can investigate the objective, causal relationships between certain physical quantities and their consequences with regard to how they move us towards that goal. If the Dude were to come to you with a gangrenous leg and say, “Who are you to say you’re healthier than me? That’s just your opinion, I don’t value being free of disease and pain”, we’d dismiss him as either simply ignorant of the facts or of unsound mind.

We’ve all come to the conclusion that sensible adults value not dying prematurely of preventable ailments, and we don’t need to be medical professionals to know that such a value statement is a good thing. We ought to value being alive, and nothing within medicine ‘scientifically’ justifies that. Finding right and wrong answers about medicine only becomes possible once we all agree that this is what we mean by ‘health’, and that we value it. The objectivity and scientific validity of medicine is never brought into question because of this foundational dependence on a ‘philosophical assumption’, though. So, in taking the pursuit of intrinsically positive conscious states, and the avoidance of negative ones, as our broad and loose goal set in the moral domain, we’re not doing anything different to the other sciences.

The second point on relativism speaks to the fact that different people or cultures talk about ‘morality’ in different ways. Things like ‘justice’ mean different things in different places, and even in the same place but at different times. It seems like we’re all zeroing in on the same meta-principles, but the devil is very much in the detail. Again, we can look at physical health as a useful analogy.

Take Jasmuheen, an advocate of ‘breatharianism’. She claims that she survived for years on very little to no food or water, nourished instead by “pranic energy”. Ostensibly, she’s talking about the same things my local GP and dietitian talk about: ‘health’, ‘energy’, and ‘nourishment’. These are words they use too. What has science got to say about which version of health and nourishment I should value? Quite obviously, a lot.

When asked to demonstrate her claims for a TV experiment, Jasmuheen agreed to live in a hotel room, watched by a security guard to ensure she consumed no food or water, and was regularly monitored by a professional doctor. After 48 hours she was presenting symptoms of acute dehydration, stress, slurred speech, weight loss, and high blood pressure, to name a few. In other words, exactly what medical science predicts will happen if you stop consuming actual nourishment. After 4 days the experiment was abandoned on the advice of the doctor, as kidney failure and death were likely to follow, and the results were broadcast for everyone to see.

Whose version of health should we value? As the physical consequences demonstrate, consuming “pranic energy” isn’t as nourishing as terrestrial food and water. Notice that at no point are we obliged to humour Jasmuheen or breatharianism as offering a potential ‘alternative framework’ for physical health. It isn’t just about our opinions. The universe is not forgiving in this way; there have been at least three incidents of breatharian followers dying after trying to emulate Jasmuheen. The breatharian is talking about the same meta-principles, like health and nutrition; they’re just wrong about how to move towards them.

It isn’t dogmatic or imperialist to say that breatharianism is dangerous, and to point out the obvious; it is not conducive to health and well-being. We as a community, either directly or via the state, are perfectly able, if not obliged, to intervene with clear conscience, the same way we ‘dogmatically’ intervene and tax cigarettes, or vaccinate our children. Once we’re honest with ourselves about what our goals are, there will be evidence to suggest the best and worst ways of achieving them. Following the evidence wherever it may lead is anathema to dogmatism. If someone thinks their infant dying of preventable diseases is a good thing, we simply don’t have to take them seriously. Similarly, if they think exposing their baby to a dangerous disease is an alternative way to inoculate them, we don’t need to indulge their ignorance.

It’s important to notice that we all engage in this kind of empirical scepticism constantly; we’re all everyday empiricists. When I call a plumber to fix my pipes, it’s because I trust they have the relevant expertise to achieve the goal of ‘good plumbing’, i.e. water flowing from my taps and no leaks. Knowledge of the facts of plumbing is what separates them from me. I don’t pretend to know an ‘alternative framework’ for good plumbing, nor do I argue with the valuation that good plumbing entails a lack of leaks. When my faucet spews water I don’t tell guests, “Who are you to say your plumbing is better than mine? To me, good plumbing is about water coming from as many places at once as possible”.

There is no difference when it comes to the domain of morality and the promotion of well-being; not all positions are equal. Others may be using the terms ‘morality’ and ‘well-being’, but the question is not, “what do those things mean for them?”. If they don’t think morality has something to do with changing conscious states for the better, they’re like the weird plumber who values leaks, and we have to admit that openly. The more important question is, instead, “how well are those things working out for them?”. How satisfied are the weird plumber’s customers? On the other hand, if they do value conscious well-being, but think that systematically subjugating an entire gender is a possible route to that end, for example, they’re like the breatharian. They’re talking about the same meta-principles, but are simply confused about the facts. Again, we have to admit this openly.

For EAs, this is most relevant when the effective ways to do good are convoluted and counterintuitive. But just because these solutions are hard to find, or intuitively unpalatable, does not mean that there is no answer at all, or that we shouldn’t try to find one. For example, an ethical shopper might avoid goods produced in sweatshops so as not to support exploitative workplace practices. In Doing Good Better, William MacAskill explains that this is well intended, but is not the most effective way to help workers in developing nations, and can actually cause more harm than good. How? Sweatshop jobs are among the most desired in some countries: the alternative jobs involve harder labour for lower pay, and for some the choice is between working in a sweatshop and unemployment. Boycotting sweatshops can eliminate these jobs. Furthermore, sweatshop goods tend to cost less than those produced elsewhere, so one is usually better off buying the sweatshop shirt and donating the savings to a charity that helps the poor. This is just one of many examples of counterintuitive ways to maximise well-being.

The Absolute/Objective Conflation

Emphatically, this is not to say that there aren’t or won’t be many equivalent ways to be moral, or to promote well-being. There is no ‘one’ way to be healthy, or to have good plumbing either. Similarly, in light of new evidence and technology, both those definitions could change in the future; they are not absolute. For example, living to the ripe old age of 40 was considered healthy in the past, but with further advances in modern medicine and gerontology, living to 150 could be a modest goal for many people alive today. This just amounts to admitting that once we declare our goals honestly, there are also many ways not to achieve them, and we don’t have to be afraid of saying so. What’s healthier, eating a cucumber or a stick of celery? The answer to this question, if there is one, is probably trivial, and that doesn’t undermine the objectivity of dietetics and nutrition. The inability to decide which of the two vegetables to eat for breakfast doesn’t make the distinction between food and poison any less real or consequential. That is, tough questions we can’t answer yet don’t relegate the easy answers to merely being ‘low-hanging fruit’ in an otherwise incomplete or problematic theory. Depending on what they specifically eat, a meat eater may be just as healthy as a vegetarian (think of vegetarians who only eat potato chips), but that doesn’t mean we have to elevate breatharianism to the same plane. Having many ways to eat healthily is pluralism; claiming that every way of eating is healthy is relativism.

The same goes for morality. There could be many ways to restructure our societies to better promote well-being, but this doesn’t detract from the fact that there will also be many ways to do the reverse. Indeed, we already know that there are many ways to do the reverse; humanity’s long engagement in the slave trade is one example. Pluralism is not the same thing as relativism, nor is being objective the same as being absolute, or unchanging.

The Promise of EA

We don’t suffer any illusions of relativism in most domains of our lives because we value evidence. We update our confidence in particular beliefs to correspond with the weight of the evidence in favour of them. It is this incursion by the scientific method into the realm of morality that makes EA what it is, and allows it to speak from an objective viewpoint, despite its philosophical assumptions. We value subjective well-being as the basis of morality in the same way we value physical health or good plumbing, and the science of well-being can’t begin until we’re similarly honest about that fact. EA is honest about this, and the measurements have already begun.

The separation of science and measurement from the realm of values and morality is a language game we don’t often play. It will be to the detriment of the entire global population if we continue to play it with perhaps the most important question we can ever ask: how can we grow and flourish together, for the well-being of all conscious creatures and the planet that sustains them? There may be multiple right answers, but we have to unapologetically admit that there will be wrong ones too. EA is in a position to lead the way on an empirical project of well-being, it just needs to embrace it.

By Robert Farquharson and Michael Dello-Iacovo

Comments

This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you've already decided something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly, if you've already decided hedonistic utilitarianism is correct, you can use science to find the best strategy for maximizing hedonistic utility.

I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. In other words, I think there is an objective function that takes a particular intelligent agent and produces a system of ethics but it is not the constant function.

Assessing the quality of conscious experiences using neuroscience might be a good tool for helping moral judgement, but again it is only useful in light of assumptions about ethics that come from elsewhere. On the other hand neuroscience might be useful for computing the "ethics function" above.

The step from ethical subjectivism to the claim it's wrong to interfere with other cultures seems to me completely misguided, even backwards. If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it (at the same time it can be completely rational for you to resist). There is no universal value of "respecting other cultures" any more than any other value is universal. If my ethics happens to include the value of "respecting other cultures" then I need to find the optimal trade-off between allowing the bad thing to continue and violating "respect".

Thanks for your remarks.

The is-ought distinction wasn't discussed explicitly to help include those unfamiliar with Hume. However, the opening section of the essay attempts to establish morality as just another domain of the physical world. There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable. Science studies physical reality, and the ambit of morality is a subset of physical reality. Therefore, science studies morality too.

The essay is silent on 'hedonistic' utilitarianism (we do not endorse it, either), as again, a) we think these aren't useful terms with which to structure the debate with as wide an audience as possible, and b) because they are concerns outside the present scope. This essay focuses on establishing the moral domain as just a subset of the physical, and therefore, that there will be moral facts to be obtained scientifically - even if we don't know how to obtain them just yet. How to perfectly balance competing interests, for example, is for a later discussion. First, convincing people that you actually can do that with any semblance of objectivity is required. The baby needs to walk before it can run.

We discuss cross-cultural claims in the section on everyday empiricism.

There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts

This is the nub of the issue (and in my view the crucial flaw in Harris' thesis). You are measuring various physical (or, more broadly, 'natural') properties, but you require an entirely separate philosophical (and largely non-empirical) argument to establish that those properties are moral properties. Whether or not that argument works will be a largely non-empirical question.

The argument you, in fact, give seems to rely on a thought experiment where people imagine a low well-being world and introspectively access their thoughts about it. That's very much non-empirical, non-scientific and not uncontroversial.

Both these comments are zeroing in on the same issue which is at the core of the essay. The thesis above is deflationary about morality and ethics - the central point is that there is no separate realm of moral significance or quality, added on top of and divorced from material facts.

The chain is that 1) the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience. This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike.

2) We know the kinds of conscious experiences that are bad. Dying famished and hungry is not merely subjective. It is a subjective state, but one that is universally and always negative. This is not a moral assignment - it is an observable, material fact about the world and about psychological states.

3) The material conditions that lead to changes in conscious experiences are amenable to objective inquiry. The same external stimuli may move different people in different conscious directions, but we can study that relationship objectively. "Dying is bad" is not always a true claim in medical science - it depends on the material context. If you can't save people from the WPW, killing them could be a good thing. This is the principle that euthanasia leans on. Sometimes dying is better, in light of the facts about the further possibility for positive conscious experiences. That doesn't make medical science subjective.

4) The only "non-empirical" assumption you have to make is that what we mean by bad or wrong is movement of consciousness towards, or setting up systems that reliably contain people within, a negative state-space of consciousness.

5) This is how all other physical sciences operate.

We don't try to give additional argument to demonstrate that those properties are moral properties, we argue that moral properties are a subset of natural properties. In the same sense that 'health' is a subset of biological properties, or 'good plumbing' is a subset of various structural/engineering & hydrodynamic properties. Everything we value makes reference to material facts and their utility towards a goal set which must be assumed. But only in the case of morality does anyone ever demand a secondary and unreachable standard of objectivity.

Our thesis is therefore a realist, but deflationary (or 'naturalised') position on morality.

Hi Robert, I'm familiar with moral naturalism, it's a well known philosophical position.

But I still think you've simply not given the philosophical argument necessary to establish moral naturalism, you've merely asserted it.

1) the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience.

This assumption (the first line of your argument) contains the very conclusion you're arguing for. What even is "value"? This is the question you need to answer, so you can't just assume it at the start.

You say "This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike." But physicalism isn't the issue here: the issue is explicating what value is in naturalistic terms.

As an aside, there's an important difference between naturalism/physicalism and reductive physicalism. (http://plato.stanford.edu/entries/physicalism/#RedNonRedPhy) Naturalism is less controversial than reductive physicalism.

2) We know the kinds of conscious experiences that are bad. Dying famished and hungry is not merely subjective. It is a subjective state, but one that is universally and always negative. This is not a moral assignment - it is an observable, material fact about the world and about psychological states.

Again, this is a position statement not an argument or a step in an argument. The core question here, as above, is what does "bad" even mean? I'm not sure, but it reads like you are saying "bad" = (subjectively bad to a person; negatively valenced) in the first sentence, referring to a thing being objectively, universally and always morally "negative" in the second. And referring to some uncontroversial material property (don't know which one) in the third. The whole question that needs to be answered is what "bad"/negative value means. And an argument is needed to show what the connection is between subjectively bad experience, objective value and such and such material properties.

4) The only "non-empirical" assumption you have to make is that what we mean by bad or wrong is movement of consciousness towards, or setting up systems that reliably contain people within, a negative state-space of consciousness.

This is the very thing you need an argument to establish! You say "the only... assumption you have to make" is, and then describe the conclusion you need to argue for. What's to stop me assuming that "bad" refers to some totally different natural property?

5) We don't try to give additional argument to demonstrate that those properties are moral properties, we argue that moral properties are a subset of natural properties.

I'm not sure how to interpret this sentence, so I'll just address the latter claim. You say "we argue that moral properties are a subset of natural properties"- but I don't see an argument for this claim. Maybe you're just starting from the presumption that naturalism is plausible, so moral properties must be a subset of natural properties? But it doesn't follow that there are any moral properties or that moral properties are this rather than that or that they can be reductively defined.

Everything we value makes reference to material facts and their utility towards a goal set which must be assumed. But only in the case of morality does anyone ever demand a secondary and unreachable standard of objectivity.

Right, but, in fact, people seem to have a lot of different goals. It doesn't follow that there is a single, over-arching "moral" goal-set, rather than just a plethora of unrelated goals. It doesn't even follow that there is a single goal for any given domain. For example, many interests and considerations (differing from person to person) influence our plumbing preferences: it doesn't follow there are plumbing properties in any fundamental sense.

This is potentially pretty damning to your thesis. People may simply have lots of different goals, rather than there being a single, universally accepted moral goal. If so there'll simply be nothing specifically "moral" or there'll be moral/value relativism.

Hi David,

I really don't think I can reply without rewriting the essay again. I feel like I've addressed those concerns already (or at least attempted to do so) in the body of the essay, and you've found them unsatisfactory, so we'd just be talking past each other.

Your replies are much appreciated though.

OK, I'll address some of the points made separately in the body of the text in a new comment.

materialism/physicalism [...] is mostly uncontroversial now among [...] philosophers

That's not really true. For example, in the PhilPapers survey, only 56.5% accepted physicalism in philosophy of mind (though 16.4% chose 'Other'). There's no knock-down argument for physicalism.

the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience. This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike.

What're the arguments that scientists or philosophers use for it?

Thanks for replying!

"There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable."

The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science; the assignment of moral significance is "not science" in the sense that it depends on the entity doing the assignment.

The problem with your breatharianism example is that the claim "you can eat nothing and stay alive" is objectively wrong but the claim "dying is bad" is a moral judgement and therefore subjective. That is, the only sense in which "dying is bad" is a true claim is by interpreting it as "I prefer that people won't die."

but the claim "dying is bad" is a moral judgement and therefore subjective. That is, the only sense in which "dying is bad" is a true claim is by interpreting it as "I prefer that people won't die."

Then by extension you have to say that medical science has no normative force. If it's just subjective, then when medicine says you ought not to smoke if you want to avoid lung cancer, they're completely unjustified when they say 'ought not to'.

Yes, medical science has no normative force. The fact smoking leads to cancer is a claim about causal relationship between phenomena in the physical world. The fact cancer causes suffering and death is also such a relationship. The idea that suffering and death are evil is already a subjective preference (subjective not in the sense that it is undefined but in the sense that different people might have different preferences; almost all people prefer avoiding suffering and death but other preferences might have more variance).

"The step from ethical subjectivism to the claim it's wrong to interfere with other cultures seems to me completely misguided, even backwards." I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures.

It is our view that there is a general attitude that each person can have whatever moral code they like and be justified, and we believe that attitude is wrong. If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong. So why should there be some cut off point somewhere that we suddenly can't say someone else's moral code is wrong? To quote Sam Harris, we shouldn't be afraid to criticise bad ideas.

"I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. " I agree with the first part in that different people and cultures can possess different ethics, but I reject your notion that there is no objective measure by which one is better than the other. If a culture's ethical code was to brutally maim innocent humans, we don't say 'We disagree with that but it's happening in another culture so it's ok, who are we to say that our version of morality is better?' We would just say that they are wrong.

"If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it" when you say this it sounds like you are trending to our view anyway.

I’ve recently updated toward moral antirealism. You’re not using the terms moral antirealism, moral realism, or metaethics in the essay, so I’m not sure whether your argument is meant as one for moral realism or whether you just want to argue that not all ethical systems are equally powerful – some are more inconsistent, more complex, more unstable, etc. than others – and the latter view is one I share.

“I reject your notion that there is no objective measure by which one is better than the other. If a culture's ethical code was to brutally maim innocent humans, we don't say 'We disagree with that but it's happening in another culture so it's ok, who are we to say that our version of morality is better?' We would just say that they are wrong.”

I would say neither. My morality – probably one shared by many EAs and many, but fewer, people outside EA – is that I care enormously, incomparably more about the suffering of those innocent humans than I care about respecting cultures, traditions, ethical systems, etc., so – other options and opportunity costs aside – I would disagree and intervene decisively, but I wouldn’t feel any need or justification to claim that their morals are factually wrong. This is also my world to shape.

That’s not to say that I wouldn’t compromise for strategic purposes.

You're banking on the general moral consensus just being one that favours you, or coincides with your subjectivist take on morality. There can be no moral 'progress' if that is the case. We could be completely wrong when we say taking slaves is a bad thing, if the world was under ISIS control and the consensus shared by most people is that morality comes from a holy book.

Having a wide consensus for one’s view is certainly an advantage, but I don’t see how the rest follows from that. The direction that we want to call progress would just depend on what each of us sees as progress.

To use Brian’s article as an example, this would, to me, include interventions among wild animals for example with vaccines and birth control, but that’s probably antithetical to the idea of progress of many environmentalists and even Gene Roddenberry.

What do you mean by being “wrong” about the badness of slavery? Maybe that it would be unwise to address the problem of slavery under an ISIS-like regime because it would have zero tractability and keep us from implementing more tractable improvements since we would be executed?


"I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures."

I was refuting what appeared to me as a strawman of ethical subjectivism.

"If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong."

What is "wrong"? The only meaningful thing we can say is "we prefer people not the die therefore we will try to stop this person." We can find other people who share this value and cooperate with them in stopping the murderer. But if the murderer honestly doesn't mind killing people, nothing we say will convince them, even if they are completely rational.

By 'wrong' I don't mean the opposite of morally just, I mean the opposite of correct. That is to say, we could rightfully say they are incorrect.

I fundamentally disagree with your final point. I used to be a meat-eater, and did not care one bit about the welfare of animals. To use your wording, I honestly didn't mind killing animals. Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn't mind killing people that murder is wrong is ludicrous.

I completely don't understand what you mean by "killing people is incorrect." I understand that "2+2=5" is "incorrect" in the sense that there is a formally verifiable proof of "not 2+2=5" from the axioms of Peano arithmetic. I understand that general relativity is "correct" in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is "correct" in the sense that it is the simplest model that produces all previous observations; the distinction is not very important at the moment). I don't see any verification procedure for the morality of killing people, except checking whether killing people matches the preferences of a particular person or the majority in a particular group of people.

"I used to be a meat-eater, and did not care one bit about the welfare of animals... Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn't mind killing people that murder is wrong is ludicrous."

The fact you found your friend's arguments to be persuasive means there was already some foundation in your mind from which "eating meat is wrong" could be derived. The existence of such a foundation is not a logical or physical necessity. To give a radical example, imagine someone builds an artificial general intelligence programmed specifically to kill as many people as it can, unconditionally. Nothing you say to this AGI will convince it that what it's doing is wrong. In the case of humans, there are many shared values because we all have very similar DNA and most of us are part of the same memetic ecosystem, but it doesn't mean all of our values are precisely identical. It would probably be hard to find someone who has no objection to killing people deep down, although I wouldn't be surprised if extreme psychopaths like that exist. However, other more nuanced values may vary more significantly.

As we discuss in our post, imagine the worst possible world. Most humans are comfortable in saying that this would be very bad, and that any steps towards it would be bad; if you disagree and think that steps towards the WPW are good, then you're wrong. In the same vein, if you hold a 'version of ethics' that claims moving towards the WPW is good, you're wrong.

To address your second point, humans are not AGIs; our values are fluid.

I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable in saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill defined since there is no such thing as "bad" without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal for humans but other preferences are less so.

How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.

This is somewhat of an aside, but I know a person who can argue for veganism almost as well as any vegan, and knows it is wrong to be a carnist, yet chooses to eat meat. They are the first to admit that they are selfish and wrong, but they do so anyway.

I agree with Squark - it's only when we've already decided that, say, saving lives is important that we create health systems to do just that.

But, I agree with the point that EA is not doing anything different to society as a whole - particularly healthcare - in terms of its philosophical assumptions. It would be fairly inconsistent to selectively look for the philosophical assumptions that underlie EA and not healthcare systems.

More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can't suddenly decide not to satisfy my own preferences, yet there's no justification for putting my own preferences above those of others. It seems to me, then, that if I am satisfying my own preferences - which it is impossible not to do - I'm obligated to maximise the preference-satisfaction of others too.

We could ask "why act in a logically consistent fashion?" or "why act as logic tells you to act?", but such questions presuppose the existence of logic, so I don't think they're valid questions to ask.

"it's only when we've already decided that, say, saving lives is important that we create health systems to do just that." But no one pays any credence to the few who argue that we shouldn't value saving lives, we don't even shrug and say 'that's their opinion, who am I to say that's wrong?', we just say that they are wrong. Why should ethics be any different?

I think that people often derive their morality through social proof – when other people like me do it or think it, then it’s probably right. Hence it is a good strategy to appeal to their need for consistency that way – “If you think a health care system is a good thing, then don’t you think that this and that aspect of EA is just a natural extension of that, which you should endorse as well?”

I should try this line of argument around my parts, but last time I checked the premise was not universally endorsed in the US. If I remember the proportions correctly, then there was a sizable minority that had an agent-relative moral system and made a clear distinction between their own preferences, which were relevant to them, and other people’s preferences, which were irrelevant to them so long as they didn’t actively violate the other’s preference (according to some fuzzy, intuitive definition of “active”). Hence the argument might not work for those people.

I agree - it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.

Similarly, we could ask "why satisfy my own preferences?", but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.

You don't really have a choice but to satisfy your own preferences.

Suppose you decide to stop satisfying your preferences. Well, you've just satisfied your preference to stop satisfying your preferences.

So the answer to the question is that it's logically impossible not to. Sometimes your preferences will include helping others, and sometimes they wont. In either case, you're satisfying your preference when you act on it.

"why satisfy my own preferences?"

That's the linchpin. You don't have to. You can be utterly incapable of actually following through on what you've deemed is a logical behaviour, yet still comment on what is objectively right or wrong. (This goes back to your original comment too.)

There are millions of obese people failing to immediately start and follow through on diets and exercise regimes today. This is failing to satisfy their preferences - they have an interest in not dying early, which being obese reliably correlates with. It ostensibly looks like they don't value health and longevity on the basis of their outward behaviour. This doesn't make the objectivity of health science any less real. If you do want to avoid premature death and if you do value bodily nourishment, then their approach is wrong. You can absolutely fail to satisfy your own preferences.

Asking the further questions of "why satisfy my own preferences?", or "why act in a logically consistent fashion?", just drifts us into the realm of radical scepticism. This is an utterly unhelpful position to hold - you can go nowhere from there. "Why trust that my sense data are sometimes veridical?" ...you don't have to, but you'd be mad not to.

So, notions of good and bad, right and wrong, have everything to do with the changing character of experience in conscious creatures. Every moral judgement comes down to how much and in what direction an action changes the conscious experience of some agent.

This is simply false as a claim about the "notions" (concepts) of good and bad, right and wrong, etc. held by actual human beings. The moral concepts of individuals are unambiguously not simply about conscious experience. Nor are actual moral judgements simply about conscious experience. Almost everyone values many things other than conscious experience. If you intend to make a normative claim, rather than a descriptive one, then you need some kind of normative argument.

If you think that pain, misery, and suffering, are merely subjective tastes, and are unsure why you shouldn’t value those states instead of things like love, laughter, and satiety, you’re thoroughly confused. Conscious experience is, by its very nature, already and immediately coloured with a certain kind of character.

This looks like a non-scientific, non-physical claim. How would you cash this out in purely naturalistic, non-normative terms, and once you have done so, why should we care about it?

Bringing someone closer to the worst possible misery they can experience is movement in the wrong direction. Again, if you’re unsure about this, try and find a conscious merit in dying from forced starvation.

I can see a merit in moving myself closer to the worst possible misery in lots of circumstances. For example, I can reasonably prefer to starve myself to death in pursuit of other goals which have nothing to do with conscious experience (of myself or others).

Also there's an asymmetry here. I may not be able to see the merit (from a purely self-interested perspective) in me starving to death, but I can easily see the merit in other people starving to death (I might think it's just what they deserve - if I'm of the right mind I might even derive positive conscious experience from it). What about the fact that I don't like me starving to death gives me a reason to stop others starving to death? If anything, saying 'try and find a merit in [personally experiencing an unpleasant experience]' pushes me away from utilitarian morality: if I can suffer an unpleasant experience and thereby save lots of other people from unpleasant experience, where's the 'merit' in me doing so?

Finding right and wrong answers about medicine only becomes possible once we all agree that this is what we mean by ‘health’, and that we value it.

I'm not convinced that we all agree on what is meant by "health" and that we value it- that's an empirical question. There are lots of reasons to generally fix people's broken legs and the like without invoking the concept of "health." Practicing medicine is entirely possible without a commitment to maximising "health"- one makes a variety of interventions for a variety of reasons which people tend to roughly, but by no means overwhelmingly agree upon.

take physical health and medical science. There is nothing in modern health science that can tell you why you ought to value being alive or free of disease, with absolute self-justifying or ‘scientific force’. However, we just do value these things.

A lot of your argument seems to rest on the claim that people do in fact value certain things (conscious experiences)- so it seems really problematic for this view that on the whole people value lots of things more than conscious experiences.

When my faucet spews water I don’t tell guests, “Who are you to say your plumbing is better than mine? To me, good plumbing is about water coming from as many places at once as possible”.

This rhetorical example relies on the fact that plumbing's value and function is fairly univocal. Still, you might face intractable disagreement about whether X or Y plumbing solution is better based on tradeoffs to do with cost, aesthetics, reliability, noise, space, precisely how much leaking is tolerated, safety, water purity, simplicity, time-to-install, ease of repair, technical skill displayed in construction etc. etc. It seems for your purposes you are committed to insisting that there's a simple single answer to the question of which is better, lest you end up being relativists about plumbing. But why think this is the case?


Your argument seems to rely on describing some judgement made within a particular domain, and then concluding that there is an uncontroversial single value for that domain, which is objective, and reducible to physical properties. But this doesn't seem valid. Suppose I say "Michelangelo's David is better art than this drawing a 5 year old just scribbled 2 seconds ago." That seems passably uncontroversial as statements go, but it doesn't seem to follow that there is a physical property which is objective and which can be scientifically investigated and which art should maximise. I'm not sure what your reply would be? You could accept that this doesn't apply in the art case, and insist that morality is just different. OK but why are moral judgements different to aesthetics? That requires a further philosophical argument. Or you could insist that they're just the same: goodness of art = maximises positively valenced conscious states and science will tell us what the good art is. But this doesn't seem to be what judgements about art are based on (arguably this is less plausible than applied to morality), and moral judgements seem different than aesthetic judgements.

David,

Again, this is fantastic. Thank you.

"Nor are actual moral judgements simply about conscious experience. Almost everyone values many things other than conscious experience."

Without getting too far into this (I simply want to assume consequentialism, as most EAs would hold some variety of that view), I think you've misread us here. At root, the other things get imbued with value only by derivation from consciousness. The implicit claim here is that a universe void of consciousness is a universe similarly void of value - there would be nothing around to make value judgements.

Moral judgements aren't simply about consciousness, but they reduce ultimately to how they move the character of conscious experience, including in a broad sense. Valuing my car isn't simply about my conscious experience in the moment, but about how it makes my life easier; I can be more economically productive getting around faster etc. These other goals all affect the character of my conscious experience, however. Being prosperous vs impoverished, freedom of mobility etc., these are all felt experiences.

Even theistic moral claims, homosexuality is bad etc., are about making sure your conscious experience stays positive, or more positive. Those experiences just happen to be a) in the next life, or b) in God's mind, i.e. his approving of you and your piety is a conscious experience. This is all we mean by saying all moral judgements come down to changes in conscious experience. In terms of language, I could have made that clearer and maybe hedged it a bit more.

This looks like non-scientific, non-physical claim. How would you cash this out in purely naturalistic, non-normative terms and once you have done so, why should we care about it?

I think you're confused about consciousness, and maybe about what we're saying about consciousness. If you don't agree that there is a Nagelian "what it is like" inherently present in conscious experience, I don't know how to convince you. But that's all we mean by "coloured" already with a kind of character; consciousness has a feeling about it. Similarly, if your problem is with the claim that some "colours" are inherently bad, I don't know how to convince you. Hence the line in the essay: if you think the manner in which suffering forever is bad is somehow on par with judgements of taste, e.g. "I like vanilla over chocolate", I don't think you're playing the same game, or with enough seriousness.

Following from this, I'm glad you're familiar with the literature, so again: we're assuming some form of physicalist metaphysics. Personally, I favour materialism (e.g. the 'Scientific Materialism' of Mario Bunge), as it is not ruthlessly reductive. Bracketing that can of worms, consciousness is, under this image, just another material thing. Granted, we don't have a full science of it yet, but we know there are neural correlates of consciousness. So the "what it is like", the colour of consciousness, is nonetheless material, and amenable to scientific inquiry. The representational theory of mind, for example, posits that the character of consciousness is exhausted by the representational contents therein. Again, if representational contents are going to be reducible (in some sense) to neural structures or to emergent features of such structures, then the claim about conscious experience is entirely scientific. The experience of consciousness is a natural phenomenon.

I can see a merit in moving myself closer to the worst possible misery in lots of circumstances. For example, I can reasonably prefer to starve myself to death in pursuit of other goals which have nothing to do with conscious experience (of myself or others).

We can concede that there could be consequential merits to starving in certain circumscribed situations. A hunger strike, for example, might allow you to reach a political goal, or you might sacrifice your food to save a number of other people from dying. However, that does have a lot to do with the conscious experience of you or others, i.e. you want increased pay, freedom from oppression, or to save their lives so they can continue having conscious experiences. These all make reference to experiential changes in consciousness. I'd be interested to know a worthwhile goal, that has nothing to do with the conscious experience of anyone, which starving yourself to death allows you to reach.

We did stipulate forced starvation, however, alluding to the non-voluntary nature of suffering in the developing world. And the point was more about conscious merit, not consequential merit. Try to find something intrinsically okay about the experience of forced starvation, the experience itself, without reference to secondary goals or outcomes. This goes back to the initial point that experiences in consciousness aren't merely subjective, like judgements of taste.

It seems for your purposes you are committed to insisting that there's a simple single answer to the question of which is better, lest you end up being relativists about plumbing. But why think this is the case?

Absolutely not. There's an entire section about pluralism and objectivity vs relativism and absolutism where we say this explicitly. There are innumerable trade-offs to evaluate, but that doesn't prevent you from commenting objectively on those trade-offs, and that will involve reference to material facts. Should I eat an entire jar of Nutella right now? It might make me extremely happy, but then there are trade-offs concerning the effect of sugar intake on my health to take into account. There might not be an answer to this trade-off between the long- and short-term health risks of Nutella and the conscious delight, but that doesn't change the biochemical facts about the effects of sugar intake on human health, or my root assumption that I shouldn't eat poison. If I were on the verge of diabetes, and we could determine that one more sugar binge would throw me over the edge, then we'd need to update based on these material facts. The trade-off has become clearer now.

Sometimes there is no single simple answer, but that doesn't change the fact that at root there are simple assumptions about health and longevity, and the material facts will constrain how you can objectively move towards or away from them. Similarly, new facts will shed light on those currently difficult or trivial scenarios. Just as in the cucumber and celery example, it could turn out that in the future we discover that cucumbers inhibit some kind of protein synthesis and reliably increase cancer risk. The answer about which one to eat would immediately become nontrivial in that case; they're no longer just as good as each other. You now ought not to eat cucumber if you want to avoid cancer.

There is no doubt an infinite number of ways to have equally good plumbing arrangements; almost every house will be idiosyncratic in how it balances those considerations you listed. That doesn't stop us from saying there are objectively bad ones. Lead pipes are bad, for example. Pipes that leak and therefore don't get water to their respective taps are bad. That's all we have to admit. Balancing the rest could be completely trivial, until we discover evidence to suggest that it's not. We're pluralists about plumbing, not relativists.

Thanks for the reply, Robert.

Moral judgements aren't simply about consciousness, but they reduce ultimately to how they move the character of conscious experience, including in a broad sense... Even theistic moral claims, homosexuality is bad etc., are about making sure your conscious experience stays positive, or more positive.

This is the claim I deny. People value many things other than conscious experience and make moral judgements based on things other than conscious experience (not even indirectly about conscious experience). If you want to argue that actually, despite appearances, all valuations are indirectly about conscious experience, this needs further argument.

I don't think "homosexuality is wrong" can be plausibly analysed as derivatively about changes in conscious experience. That's just not what people's moral judgements are about. But here are some other examples:

- On the whole, people strongly disvalue experience machines or wireheading.
- In Haidtian/dumbfounding cases, people disvalue things even when it is made very clear there is no negative result for anyone's conscious experiences.
- People care non-derivatively about their projects being fulfilled, even if this results in no change to their or anyone else's conscious experience at all.
- People can, without contradiction, value an empty, consciousness-free universe that is pretty more than one that is ugly.
- People judge that wrongdoers should have (often intense) negative experience; this is not plausibly accounted for as derivatively about their own positive experience about the fact that wrongdoers suffer. That's just not what they are making judgements about.
- People will routinely endorse trading off infinite or near-infinite positive/negative experience for things which are totally unrelated to conscious experience; it is hard to make sense of this as simply ultimately about caring about conscious experiences.

RF: If you think that pain, misery, and suffering are merely subjective tastes, and are unsure why you shouldn't value those states instead of things like love, laughter, and satiety, you're thoroughly confused. Conscious experience is, by its very nature, already and immediately coloured with a certain kind of character.

DM: This looks like a non-scientific, non-physical claim. How would you cash this out in purely naturalistic, non-normative terms, and once you have done so, why should we care about it?

RF: I think you're confused about consciousness, and maybe about what we're saying about consciousness. If you don't agree that there is a Nagelian "what it is like" inherently present in conscious experience, I don't know how to convince you.

What's my confusion? Whether or not I should introspectively recognise that I am conscious and that it has an inherent qualitative (and normative, and valenced!) character, this is not a scientific argument: it's a philosophical argument based on an appeal to people introspecting and finding out certain normative truths about their qualia.

Bracketing that can of worms, consciousness is, under this image, just another material thing. Granted, we don't have a full science of it yet, but we know there are neural correlates of consciousness. So the "what it is like", the colour of consciousness, is nonetheless material, and amenable to scientific inquiry... the claim about conscious experience is entirely scientific.

Firstly, this seems to be making things too easy for yourself. You can't just say 'We all know we have intrinsically valenced phenomenal consciousness and these intrinsically valenced conscious experiences are all purely material... IOU one account of the relation between private conscious experiences and material science.'

But the main point here is that the claims about consciousness that your argument relies on are not "entirely scientific"; I'm not sure they're even at all scientific. It's not clear at all how you would translate "good"/"bad"/"value" into material, scientific terms. Note that this is a distinct point from saying that you lack an account of how representational contents reduce to neural structures; the point here is that the terms contained in your claim about consciousness are all entirely non-scientific.

I'd be interested to know a worthwhile goal, that has nothing to do with the conscious experience of anyone, which starving yourself to death allows you to reach.

See my first and second responses above. I don't think things like freedom from oppression are simply derivatively valued based on their implications for positively valenced conscious experience. One can have more positive conscious experience under conditions of oppression, injustice, lack of freedom, etc. and yet prefer to be free of oppression and the rest. Likewise, retribution or desert judgements are not about conscious experience (indeed, they're sometimes about solely worsening conscious experience). Similarly, judgements about fairness are not reducible to judgements about conscious experience. It is commonplace for the fair thing to diverge from the thing that promotes positive conscious experience. Scientific investigation of conscious experiences doesn't even begin to tell us why it's unjust to keep someone in a perpetually drugged state so that a gang of people can have their way with them.

There is no doubt an infinite number of ways to have equally good plumbing arrangements; almost every house will be idiosyncratic in how it balances those considerations you listed. That doesn't stop us from saying there are objectively bad ones. Lead pipes are bad, for example. Pipes that leak and therefore don't get water to their respective taps are bad. That's all we have to admit... We're pluralists about plumbing, not relativists.

How do you avoid relativism? Suppose Bill and Ben share a house, and Bill says that the reliable but hard to repair (and so on) plumbing option is best, while Ben says the less reliable but easy to repair (and so on) option is best. A plausible analysis of such cases is that which plumbing solution is "best" makes sense only relative to the values of Bill, Ben, or some other imagined valuer. What scientific investigation settles which is the best plumbing option, or whether they are both (by chance) exactly equally good plumbing solutions? Of course, one can be a pluralist non-relativist, but I don't see the motivation for the view in cases like this. It's all very well to say "When my faucet spews water I don't tell guests, 'Who are you to say your plumbing is better than mine?'" (after all, few people value maximising water leaks), but the same rhetorical force does not extend to things like the Bill/Ben case. Indeed, it strikes me as weird to think that there is a determinate and objective "best" plumbing solution (or multiple solutions identically tied for best), and successful plumbing certainly doesn't require it.