Comment author: MagnusVinding 04 June 2018 12:29:13PM 1 point

Thanks for writing this, Lukas. :-)

As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton’s naturalist position is the one that comes the closest. I can identify as an objectivist, a constructivist, and a subjectivist alike, indeed even a Randian objectivist. It all rests on what the nature of the ill-specified “subject” in question is. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one. According to open individualism, the adoption of Randianism (or, in Sidgwick’s terminology, “rational egoism”) implies that we should do what is best for all sentient beings. In other words, subjectivism without indefensibly demarcated subjects (or at least with subjects whose demarcation is not granted unjustifiable metaphysical significance) is equivalent to objectivism. Or so I would argue.

As for Moore’s open question argument (which I realize was not explored in much depth here), it seems to me, as has been pointed out by others, that there can be an ontological identity between that which different words refer to even if these words are not commonly reckoned strictly synonymous. For example: Is water the same as H2O? Is the brain the mind? These questions are hardly meaningless, even if we think the answer to both questions is yes. Beyond that, one can also defend the view that “the good” is a larger set of which any specific good thing we can point to is merely a subset, and hence the question can also make sense in this way (i.e. it becomes a matter of whether something is part of “the good”).

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness). [And I think one can fairly argue that to say such a state has “genuine normative force” is very much an understatement.]

“Normative for the experiencing subject or for all agents?” one may then ask. Yet on my account of personal identity, the open individualist account (cf. https://en.wikipedia.org/wiki/Open_individualism and https://www.smashwords.com/books/view/719903), there is no fundamental distinction, and thus my answer would simply be: yes, for the experiencing subject, and hence for all agents (this is where our intuitions scream, of course, unless we are willing to suspend our strong, evolutionarily adaptive sense of self as some entity that rides around in some small part of physical reality). One may then object that different agents occupy genuinely different coordinates in spacetime, yet the same can be said of what we usually consider the same agent. So there is really no fundamental difference here: If we say that it is genuinely normative for Tim at t1 (or simply Tim1) to ensure that Tim at t2 (or simply Tim2) suffers less, then why wouldn’t the same be true of Tim1 with respect to John1, 2, 3…?

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple, yet non-exhaustive principle like “reduce unnecessary suffering,” why would that not be good enough to demonstrate its "realism" (on your account) when a more specific one would? It is unclear to me why greater specificity should be important, especially since even such an unspecific principle still would have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they do accept it).

Comment author: Lukas_Gloor 05 June 2018 06:27:00PM *  1 point

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness).

Cool! I think the closest I'll come to discussing this view is in footnote 18. I plan to have a post on moral realism via introspection about the intrinsic goodness (or badness) of certain conscious states.

I agree with reductionism about personal identity and I also find this to be one of the most persuasive arguments in favor of altruistic life goals. I would not call myself an open individualist though because I'm not sure what the position is exactly saying. For instance, I don't understand how it differs from empty individualism. I'd understand if these are different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.
Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity. (It just turns out that egoism is not a well-defined concept either, and one has to make some judgment calls if one ever expects to encounter edge-cases for which our intuitions give no obvious answers about whether something is still "me.")

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple, yet non-exhaustive principle like “reduce unnecessary suffering,” why would that not be good enough to demonstrate its "realism" (on your account) when a more specific one would? It is unclear to me why greater specificity should be important, especially since even such an unspecific principle still would have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they do accept it).

Yeah, fair point. I mean, even Railton's own view has plenty of practical relevance in the sense that it highlights that certain societal arrangements lead to more overall well-being or life satisfaction than others. (That's also a point that Sam Harris makes.) But if that's all we mean by "moral realism" then it would be rather trivial. Maybe my criteria are a bit too strict, and I would indeed already regard it as extremely surprising if you get something like One Compelling Axiology that agrees on population ethics while leaving a few other things underdetermined.

Comment author: LanceSBush 30 May 2018 04:37:34PM 2 points

The descriptive task of determining what ordinary moral claims mean may be more relevant to questions about whether there are objective moral truths than is considered here. Are you familiar with Don Loeb's metaethical incoherentism? Or the empirical literature on metaethical variability? I recommend Loeb's article, "Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat." The title itself indicates what Loeb is up to.

Comment author: Lukas_Gloor 31 May 2018 09:17:49AM 1 point

Prompted by another message of yours, I realize there's at least one important link here that I failed to mention: If moral discourse is about a, b, and c, and philosophers then say they want to make it about q and argue for realism about q, we can object that whatever they may have shown us regarding realism about q, it's certainly not moral realism. And it looks like the Loeb paper also argues that if moral discourse is about mutually incompatible things, that looks quite bad for moral realism? Those are good points!

Comment author: kbog  (EA Profile) 31 May 2018 12:36:19AM *  0 points

For One Compelling Axiology, assuming that "ideal" is defined in a manner that does not beg the question, the theory implies that moral facts allow us to make empirical predictions about the world - for instance, a given philosopher, or group of philosophers, or ASI, or myself, will adopt such-and-such moral attitude with probability p. Moreover, moral facts seem to be defined purely in terms of their empirical ramifications.

This I find to be deeply troubling because it provides no grounds to say that there are any moral facts at all, just empirical ones. Suppose that there is a moral proposition X which states the one compelling axiology, okay. Now on what grounds do you say that X is a moral fact? Merely because it's always compelling? But such a move is a non sequitur.

Of course, you can say that you would be compelled to follow X were you to be an ideal reasoner, and therefore it's reasonable of you to follow it. But again, all we're saying here is that we would follow X were we to have whatever cognitive properties we associate with the word "ideal", and that is an empirical prediction. So it doesn't establish the presence of moral facts; there are just empirical facts about what people aspire to do under various counterfactuals and predicates.

Comment author: Lukas_Gloor 31 May 2018 08:53:53AM *  1 point

Do you think your argument also works against Railton's moral naturalism, or does my One Compelling Axiology (OCA) proposal introduce something that breaks the idea? The way I meant it, OCA is just a more extreme version of Railton's view.

I think I can see what you're pointing to though. I wrote:

Note that this proposal makes no claims about the linguistic level: I’m not saying that ordinary moral discourse lets us define morality as convergence in people’s moral views after philosophical reflection under ideal conditions. (This would be a circular definition.) Instead, I am focusing on the aspect that such convergence would be practically relevant: [...]

So yes, this would be a bad proposal for what moral discourse is about. But it's meant like this: Railton claims that morality is about doing things that are "good for others from an impartial perspective." I like this and wanted to work with that, so I adopt this assumption, and further add that I only want to call a view moral realism if "doing what is good for others from an impartial perspective" is well-specified. Then I give some account of what it would mean for it to be well-specified.

In my proposal, moral facts are not defined as that which people arrive at after reflection. Moral facts are still defined as the same thing Railton means. I'm just adding that maybe there are no moral facts in the way Railton means if we introduce the additional requirement that (strong) underdetermination is not allowed.

Comment author: konrad 24 May 2018 01:15:34PM 1 point

Thanks for writing this up in a fairly accessible manner for laypeople like me. I am looking forward to the next posts. So far, I have only one reflection on the following bit of your thinking. It is a side point but it probably would help me to better model your thinking.

And all I’m thinking is, “Why are we so focused on interpreting religious claims? Isn’t the major question here whether there are things such as God, or life after death!?” The question that is of utmost relevance to our lives is whether religion’s metaphysical claims, interpreted in a straightforward and realist fashion, are true or not. An analysis of other claims can come later.

Do you think analyses of the other claims are never of more value than analyses of the metaphysical claims?

Because my initial reaction to your claim was something like "why would we focus on whether there is a god or life after death - it seems hardly possible to make substantial advances there in a meaningful way, and these texts were meant to point at something a lot more trivial. They are dressed up as profound, with metaphysical explanations, only to make people engage with and respect them in times when no other tools were available to do so on a global level."

I.e. no matter the answer to the metaphysical questions, it could be useful to interpret religious claims because they could be pointing at something that people thought would help to structure society, whether the metaphysical claims hold or not.

Thus, I wonder whether the Bible example is a little weak. You would have to clarify that you assume that people sometimes actually believe they are having a meaningful discussion around "what's Real Good?", assuming moral realism through god(?), as opposed to just engaging in intellectual masturbation, consciously or not.

If I do not take those people (who suppose moral realism is proven through the Bible) seriously, I can operate based on the assumption that the authors of such writings supposed any form of moral non-naturalism, subjectivism, intersubjectivism or objectivism, as described by you. Any of these could have led to the idea of creating better mechanisms to enforce the normative Good or the social contract, or to allow everyone to maximally realise their own desires, by creating an authority ("god") that makes it possible to move society into a better equilibrium under any of these theories.

In that case, taking the claims about the (metaphysical) nature of that authority as carrying any informational value/as providing valuable ground for discussion seems to be a waste of time, or even giving them undeserved attention and credit, distracting from more important questions. Your described reaction, though, takes the ideas seriously, and I wonder why you think there is any ground to even consider them as such?

I think this concern is somewhat relevant to the broader discussion, too, because you seem to imply that we can't (or even shouldn't?) make any advances on non-metaphysical claims until we have figured out the metaphysical ones. Though, what you mean is probably more along the lines of "be ready to change everything once we have figured out moral philosophy", not implying that we shouldn't do anything else in the meantime. Is that correct? If so, this point might get lost if not pronounced more prominently.

Comment author: Lukas_Gloor 25 May 2018 04:18:40PM *  1 point

Probably intuitions about this issue depend on which type of moral or religious discourse one is used to. As someone who spent a year at a Christian private school in Texas where creationism was taught in Biology class and God and Jesus were a very tangible part of at least some people's lives, I definitely got a strong sense that the metaphysical questions are extremely important.

By contrast, if the only type of religious claims I'd ever come into contact with had been moderate (picture the average level of religiosity of a person in, say, Zurich), then one may even consider it a bit of a strawman to assume that religious claims are to be taken literally.

I think this concern is somewhat relevant to the broader discussion, too, because you seem to imply that we can't (or even shouldn't?) make any advances on non-metaphysical claims until we have figured out the metaphysical ones.

Just to be clear, all I'm saying is that I think it's going to be less useful to discuss "what are moral claims usually about." What we should do instead is what Chalmers describes (see the quote in footnote 4). Discussing what moral claims are usually about is not the same as making up one's mind about normative ethics. I think it's very useful to discuss normative ethics, and I'd even say that discussing whether anti-realism or realism is true might be slightly less important than making up one's mind about normative ethics. Sure, it informs to some extent how to reason about morality, but as has been pointed out, you can make some progress on moral questions also from a lens of agnosticism about realism vs. anti-realism.

To go back to the religion analogy, what I'm recommending is to first figure out whether you believe in a God or an afterlife that would relevantly influence your priorities now, and not worry much about whether religious claims are "usually" or "best" taken literally or metaphorically.

Comment author: jayquigley 22 May 2018 05:23:09PM 3 points

What do you think are the implications of moral anti-realism for choosing altruistic activities?

Why should we care whether or not moral realism is true?

(I would understand if you were to say this line of questions is more relevant to a later post in your series.)

Comment author: Lukas_Gloor 23 May 2018 01:13:58PM *  3 points

Why should we care whether or not moral realism is true?

I plan to address this more in a future post, but the short answer is that for some ways in which moral realism has been defined, it really doesn't matter (much). But there are some versions of moral realism that would "change the game" for those people who currently reject them. And vice versa: if one currently endorses a view that corresponds to the two versions of "strong moral realism" described in the last section of my post, one's priorities could change noticeably if one changes one's mind towards anti-realism.

What do you think are the implications of moral anti-realism for choosing altruistic activities?

It's hard to summarize this succinctly because for most of the things that are straightforwardly important under moral realism (such as moral uncertainty or deferring judgment to future people who are more knowledgeable about morality), you can also make good arguments in favor of them going from anti-realist premises. Some quick thoughts:

– The main difference is that things become more "messy" with anti-realism.

– I think anti-realists should, all else equal, be more reluctant to engage in "bullet biting" where you abandon some of your moral intuitions in favor of making your moral view "simpler" or "more elegant." The simplicity/elegance appeal is that if you have a view with many parameters that are fine-tuned for your personal intuitions, it seems extremely unlikely that other people would come up with the same parameters if they only thought about morality more. Moral realists may think that the correct answer to morality is one that everyone who is knowledgeable enough would endorse, whereas anti-realists may consider this a potentially impossible demand and therefore place more weight on finding something that feels very intuitively compelling on the individual level. Having said that, I think there are a number of arguments why even an anti-realist might want to adopt moral views that are "simple and elegant." For instance, people may care about doing something meaningful that is "greater than their own petty little intuitions" – I think this is an intuition that we can try to cash out somehow even if moral realism turns out to be false (it's just that it can be cashed out in different ways).

– "Moral uncertainty" works differently under anti-realism, because you have to say what you are uncertain about (it cannot be the one true morality because anti-realism says there is no such thing). One can be uncertain about what one would value after moral reflection under ideal conditions. This kind of "valuing moral reflection" seems like a very useful anti-realist alternative to moral uncertainty. The difference is that "valuing reflection" may be underdefined, so anti-realists have to think about how to distinguish having underdefined values from being uncertain about their values. This part can get tricky.

– There was recently a discussion about "goal drift" in the EA forum. I think it's a bigger problem with anti-realism, all else equal (unless one's anti-realist moral view is egoism-related). But again, there are considerations that go in both directions. :)

Comment author: jayquigley 22 May 2018 05:35:16PM 2 points

So you know who's asking, I happen to consider myself a realist, but closest to the intersubjectivism you've delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, in terms of the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure. There's not much more objective or "facty" about rationality than the fact that basically all vertebrates are disposed to be averse to those things, and it's rather puzzling for someone not to be. People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the plant is red than that ordinary human language users on earth are disposed to see and label it as red.

I don't know whether or not you'd label that as objectivism about color or about rationality/harm. But I'd classify it as a weak form of realism and objectivism because people can be incorrect, and those who are not reliably disposed to identify cases correctly would be considered blind to color or to harm.

These things I'm saying are influenced by Joshua Gert, who holds very similar views. You may enjoy his work, including his Normative Bedrock (2012) or Brute Rationality (2004). He is in turn influenced by his late father Bernard Gert, whose normative ethical theory Josh's metaethics work complements.

Comment author: Lukas_Gloor 22 May 2018 10:49:57PM *  1 point

The idea is that morality is the set of rules that impartial, rational people would advocate as a public system.

Yes, this sounds like constructivism. I think this is definitely a useful framework for thinking about some moral/morality-related questions. I don't think all of moral discourse is best construed as being about this type of hypothetical rule-making, but like I say in the post, I don't think interpreting moral discourse should be the primary focus.

Rationality is understood, roughly speaking, in terms of the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure.

Hm, this sounds like you're talking about a substantive concept of rationality, as opposed to a merely "procedural" or "instrumental" concept of rationality (such as is common on LessWrong and with anti-realist philosophers like Bernard Williams). Substantive concepts of rationality always fall under moral non-naturalism, I think.

My post is a little confusing with respect to the distinction here, because you can be a constructivist in two different ways: Primarily as an intersubjectivist metaethical position, and "secondarily" as a form of non-naturalism. (See my comments on Thomas Sittler's chart.)

People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the plant is red than that ordinary human language users on earth are disposed to see and label it as red.

Yeah, it should be noted that "strong" versions of moral realism are not committed to silly views such as morality existing in some kind of supernatural realm. I often find it difficult to explain moral non-naturalism in a way that makes it sound as non-weird as when actual moral non-naturalists write about it, so I have to be careful to not strawman these positions. But what you describe may still qualify as "strong" because you're talking about rationality as a substantive concept. (Classifying something as a "harm" is one thing if done in a descriptive sense, but probably you're talking about classifying things as a harm in a sense that has moral connotations – and that gets into more controversial territory.)

The book title Normative Bedrock also sounds relevant because my next post will talk about "bedrock concepts" (Chalmers) at length, and specifically about "irreducible normativity" as a bedrock concept, which I think makes up the core of moral non-naturalism.

Comment author: ThomasSittler 22 May 2018 08:39:04PM 8 points

Here's a draft chart of meta-ethics I made recently. When I'm less busy I'll hopefully be publishing it with an accompanying blog post.

Comment author: Lukas_Gloor 22 May 2018 10:10:31PM *  3 points

Cool! I think this is helpful both in itself as well as here as a complement to my post. I also thought about making a chart but was too lazy in the end. If I may, I'll add some comments about how this chart relates to the distinctions I made/kept/found:

"Judgment-dependent cognitivism" corresponds to what I labelled subjectivism and intersubjectivism, and "judgment-independent cognitivism" corresponds to "objectivism." (Terminology adopted from the Sayre-McCord essay; but see the Scanlon essay for the other terminology.)

I'm guessing "Kantian rationalism" refers to views such as Scanlon's view. I'm didn't go into detail in my post with explaining the difference between constructivism as an intersubjectivist position and constructivism as a version of non-naturalism. I tried to say something about that in footnote 7 but I fear it'll only become more clear in my next post. Tl;dr is that non-naturalists think that we can have externalist reasons for doing something, reasons we cannot "shake off" by lacking internal buy-in or internal motivation. By contrast, someone who merely endorses constructivism as an intersubjectivist (or "judgment-dependent") view, such as Korsgaard for instance, would reject these externalist reasons.

I agree with the way you draw the lines between the realist and the anti-realist camp. The only thing I don't like about this (and this is a criticism not of your chart, but of the way philosophers have drawn these categories in the first place) is that it makes it seem as though we have to choose exactly one view. But by removing the entire discussion from the "linguistic level" (taking a stance on how we interpret moral discourse), we can acknowledge e.g. that subjectivism or intersubjectivism represent useful frameworks for thinking about morality-related questions, whether moral discourse is completely subjectivist or intersubjectivist in nature or not. And even if moral discourse were all subjectivist (which seems clearly wrong to me, but let's say it's a hypothetical), for all we'd know that could still allow for the possibility that an objectivist moral reality exists in a meaningful and possibly action-relevant sense. I like Luke's framing of "pluralistic moral reductionism" because it makes clear that there is more than one option.

Comment author: Lukas_Gloor 21 May 2018 11:03:54AM *  2 points

Talent gap - Middle (~50 people)

If the AI safety/alignment community is altogether around 50 people, that's a large relative gap. Depending on how you count, it might be bigger than 50 people, but the talent gap seems large in relative terms in either case. :)

Comment author: Gregory_Lewis 24 April 2018 01:31:05AM 9 points

Very interesting. As you say, this data is naturally rough, but it also roughly agrees with my own available anecdata (my impression is somewhat more optimistic, although attenuated by likely biases). A note of caution:

The framing in the post generally implies value drift is essentially value decay (e.g. it is called a 'risk', and value drift is compared to unwanted weight gain, poor diet, etc.). If so, then value drift/decay should be something to guard against, and maybe precommitment strategies/'lashing oneself to the mast' seem a good idea, like how we might block social media, keep sweets out of the house, and so on.

I'd be slightly surprised if the account of someone who 'drifted' would often fit well with the sort of thing you'd expect someone to say if (e.g.) they failed to give up smoking or lose weight. Taking the strongest example, I'd guess someone who dropped from 50% to 10ish% after marrying and starting a family would say something like, "I still think these EA things are important, but now I have other things I consider more morally important still (i.e. my spouse and my kids). So I need to allocate more of my efforts to these, thus I can provide proportionately less to EA matters".

It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good.

I am sure there are cases where 'life gets in the way' in a manner it is reasonable to regret. But I would be chary if the only story we can tell for why someone would be 'less EA' is essentially one of greater or lesser degrees of moral failure, disappointed if suspicion attaches to EAs starting a family or enjoying (conventional) professional success, and would caution against pre-commitment strategies which involve closing off or greatly hobbling aspects of one's future which would be seen as desirable by common-sense morality.

Comment author: Lukas_Gloor 24 April 2018 12:04:10PM *  3 points

It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good. I am sure there are cases where 'life gets in the way' in a manner it is reasonable to regret. But I would be chary [...]

You discuss a case where there is regret from the perspective of both t1 and t2, and a case where there is regret from neither perspective. These are both plausible accounts. But there's also a third option that I think happens a lot in practice: Regret at t1 about the projected future in question, and less/no regret at t2. So the t2-self may talk about "having become more wise" or "having learned something about myself," while the t1-self would not be on board with this description and consider the future in question to be an unfortunate turn of events. (Or the t2-self could even acknowledge that some decisions in the sequence were not rational, but that from their current perspective, they really like the way things are.)

The distinction between moral insight and failure of goal preservation is fuzzy. Taking precautions against goal drift is a form of fanaticism, and commonsense heuristics speak against that. OTOH, not taking precautions seems like not taking the things you currently care about seriously (at least insofar as there are things you care about that go beyond aspects related to your personal development).

Unfortunately, I don't think there is a safe default. Not taking precautions is tantamount to deciding to be okay with potential value drift. And we cannot just say we are uncertain about our values, because that could result in mistaking uncertainty for underdetermination. There are meaningful ways of valuing further reflection about one's own values, but those types of "indirect" values, where one values further reflection, can also suffer from (more subtle) forms of goal drift.
