
In this post, I spell out how I think we ought to approach decision making under uncertainty and argue that the most plausible conclusion is that we ought to act as if theism is true. This seems relevant to the EA community because, if it is the case, it might affect our cause prioritisation decisions.


Normative realism is the view that there are reasons for choosing to carry out at least some actions. If normative realism is false, then normative anti-realism is true. On this view, there are no reasons for or against taking any action. If normative realism is false then all actions are equally choice-worthy, for each has neither reasons for it nor reasons against it.

Suppose Tina is working out whether she has more reason to go on holiday or to donate the money to an effective charity.

Tina knows that if normative anti-realism is true then there is no fact of the matter about which she has more reason to do, for there are no reasons either way. It seems to make sense for Tina to ignore the part of her probability space taken up by worlds in which normative realism is false and instead just focus on the part of her probability space taken up by worlds in which normative realism is true. After all, in worlds with normative anti-realism, there isn't any reason to act either way. So it would be surprising if the possibility of being in one of these worlds were relevant to her decision.

It also seems appropriate for Tina to ignore the part of her probability space taken up by worlds in which she would not have epistemic access to any potential normative facts. Suppose that World 26 is a world in which normative realism holds but in which agents have no access to the reasons for action that exist. Considering World 26 is going to provide no guidance to Tina on whether to go on holiday or donate the money. As such, it seems right for Tina to discount such worlds from her decision procedure.

If the above is true then Tina should only consider worlds in which normative realism is true and in which there is a plausible mechanism by which she could know the normative truths.

It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)

To see this, imagine that moral reasons were all centred around maximising the number of paperclips in the universe. It's not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons, is more complicated; see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.

It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.

There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.

The first is that there is some sort of built-in teleology to the universe which results in certain ends being brought about. John Leslie's axiarchism is one example of this, where what exists, exists because it is good. This might plausibly bring about humans with correct moral beliefs, as knowing correct moral beliefs might itself be intrinsically good. However, many, myself included, will find this sort of metaphysics quite unlikely. Separately, the possibility of this theory is unlikely to count against my argument, as it is also likely to be a metaphysics in which God exists: God's existence itself is typically considered to be a good and so would also be brought about.

The other apparent option is that evolution was guided by some sort of designer. The most likely form of this directing power stems from the existence of God or Gods. If an omniscient God exists, then God would know all moral facts and, had he so desired it, could easily engineer things so that humans had reliable moral beliefs.

Another design option is that we were brought about by a simulator; simulators would also have the power to engineer the moral beliefs of humans. However, it's not clear how these simulators would have reliable access to the relevant moral facts themselves in order to correctly program them into us. The question we are asking of how we could trust our moral views on unguided evolution could equally be asked of our simulators, and of their simulators in turn if the chain of simulation continues. As a result, it's not clear that considering worlds in which we are simulated is going to be decision-relevant by the second of our two criteria, unless our simulators had their moral beliefs reliably programmed by God.

Given this, the only worlds in which humans end up with reliable moral beliefs seem to be worlds in which God exists. As such, according to our criteria above, when deciding how we ought to act we need only consider possible worlds in which God exists. Therefore, when Tina is choosing between the two options, she ought to ask herself which option she would have most reason to choose if she existed in a theistic world.

To complete her analysis of what action to take, she should consider, for each of the possible theisms: (i) how likely it is, (ii) how likely it is to co-exist with normative realism, (iii) how likely it is that the God(s) of that theism would give her reliable access to moral facts, and (iv) how choice-worthy the two actions are on that theistic possibility.
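For concreteness, here is a minimal sketch of how those four factors might be combined numerically. The theism labels, probabilities and choice-worthiness scores are hypothetical placeholders, and the simple product-and-sum weighting is one assumption about how to aggregate (i)-(iv), not something the argument itself commits Tina to.

```python
# A minimal sketch, using made-up numbers, of weighting Tina's two options
# across theistic possibilities. For each hypothetical theism we record:
#   p         - credence that this theism is true                  (i)
#   p_realism - probability it co-exists with normative realism    (ii)
#   p_access  - probability its God(s) give reliable moral access  (iii)
#   value     - choice-worthiness of each action on that theism    (iv)
theisms = {
    "theism_A": {"p": 0.5, "p_realism": 0.9, "p_access": 0.8,
                 "value": {"holiday": 1.0, "donate": 5.0}},
    "theism_B": {"p": 0.5, "p_realism": 0.7, "p_access": 0.6,
                 "value": {"holiday": 2.0, "donate": 3.0}},
}

def expected_choice_worthiness(action):
    """Weight each theism's verdict by how likely it is to be decision-relevant."""
    return sum(
        t["p"] * t["p_realism"] * t["p_access"] * t["value"][action]
        for t in theisms.values()
    )

for action in ("holiday", "donate"):
    print(action, expected_choice_worthiness(action))
```

On these made-up numbers donating comes out ahead, but the point is only to show the shape of the calculation, not its inputs.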


Appendix on prudential reasons

Other than moral reasons, the other category of reasons commonly discussed are prudential reasons. These are reasons of self-interest. For example, one may have strong moral reasons to jump on a grenade to save the lives of one’s comrades but some think it’s likely that one has a prudential reason not to sacrifice one’s life in this way.

If prudential reasons exist then it seems more plausible that humans would know about them compared to moral reasons: prudential reasons pertain to what it is in my interests to do, and I have at least some access to myself. Still, it’s not guaranteed that we have access to prudential reasons. If prudential reasons exist, a baby presumably has a prudential reason to be inoculated even if it has no access to this fact at the time of the inoculation.

It seems unlikely to me that prudential reasons exist in many worlds in which normative realism holds. However, even if they do exist, we would need to consider how to weigh prudential and moral reasons, especially when moral reasons pull in one way and prudential reasons pull in the other.

It’s tempting to say that it will just depend on the comparative strengths of the moral and prudential reasons in any given case. However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do. As such, I prefer a view where moral reasons have a ‘lexical’ priority over prudential reasons, which is to say that when choosing between two actions, we should do whichever action has most moral reason for it and only consider the prudential reasons if both actions are equally morally choice-worthy.
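To make the lexical rule concrete, here is a minimal sketch with made-up scores for the moral and prudential reasons attached to two actions. The numbers are purely illustrative; the point is only that prudential reasons act as a tie-breaker rather than being traded off against moral reasons.

```python
# A minimal sketch of lexical priority: compare actions on moral reasons first,
# and consult prudential reasons only to break an exact moral tie.
# All scores are hypothetical placeholders, not anything defended in the post.
actions = {
    "jump_on_grenade": {"moral": 10.0, "prudential": -100.0},
    "take_cover":      {"moral": 1.0,  "prudential": 5.0},
}

def lexically_preferred(a, b):
    """Return whichever of the two actions the lexical rule favours."""
    if actions[a]["moral"] != actions[b]["moral"]:
        return a if actions[a]["moral"] > actions[b]["moral"] else b
    # Moral tie: only now do prudential reasons matter.
    return a if actions[a]["prudential"] >= actions[b]["prudential"] else b

print(lexically_preferred("jump_on_grenade", "take_cover"))  # jump_on_grenade
```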

Still, my previous analysis would need to be tempered by any uncertainty surrounding the possible existence of prudential reasons, for unguided evolution might plausibly give a human access to them. If there is also uncertainty about whether moral reasons always dominate prudential reasons, then the possibility of prudential reasons in non-God worlds will need to be factored into one's decision procedure.


Comments

This seems to be taking an implicit view that our goal in taking actions must be to serve some higher cause. We can also take actions simply because we want to, because it serves our particular goals, and not some goal that is an empirical fact of the universe that is independent of any particular human. (I assume you would classify this position under normative anti-realism, though it fits the definition of normative realism as you've stated it so far.)

Why must the concept of "goodness" or "morality" be separated from individual humans?

Relevant xkcd

My view is broadly that if reasons for action exist which create this sort of binding 'oughtness' in favour of you carrying out some particular thing, then there must be some story about why this binding oughtness applies to some things and not others.

It's not clear to me that mere human desires/goals are going to generate this special property that you now ought to do something. We don't think that the fact that 'my table has four legs' in itself generates reasons for anyone to do anything, so why should the fact that I have a particular desire generate reasons either?

This is just to say that I don't think that prudential reasons emerge from my mere desires and I don't think moral reasons do either. There needs to be some further account of how these reasons appear. Many people don't think there is a plausible one and so settle for normative anti-realism.

What I'm most convinced of is that mere beliefs don't generate moral reasons for action in the same way that my table doesn't either.

With that terminology, I think your argument is that we should ignore worlds without a binding oughtness. But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons, but I don't really understand that term -- I thought it was synonymous with 'instrumental' reasons, but taking actions for your own desires and goals is certainly not 'instrumental'.

So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.

You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that's true.

Like, I could also argue that actually, you should follow the binding meta-oughtness rule, which tells you how to derive ought statements from is statements, and that should always trump any particular oughtness rule, so you should ignore all of those and follow the most likely meta-oughtness rule. But this seems pretty fallacious. What's the difference?

I think your argument is that we should ignore worlds without a binding oughtness.

Agreed, I'm just using 'binding oughtness' here as a (hopefully) more intuitive way of fleshing out what I mean by 'normative reason for action'.

But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons

So I agree that if there are no normative reasons/'binding oughtness' then you would still have your mere desires. However these just wouldn't constitute normative reason for action and that's just what you need for an action to be choice-worthy. If your desires do constitute normative reason for action then that's just a world in which there are prudential normative reasons. The distinction between normative/prudential is one developed in the relevant literature, see this abstract for a paper by Roger Crisp to get a sense for it. The way prudential reason is used in the relevant literature it is not the same as an instrumental reason.

So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.

The issue is that we're trying to work out how to act under uncertainty about what sort of world we're in. So my argument is that you ought only to 'listen' to worlds which have normative realism/'binding oughtness' and ones where you have epistemic access to those normative reasons. As I don't think that mere desires create reasons for action I think we can ignore them unless they are actually prudential reasons.

You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that's true.

I attempt to give an argument for this claim in the penultimate para of my appendix. Note that I'm interpreting 'desires and goals' as resulting in what I would call prudential reasons for action. I think this is fair given the way you operationalize the concept.

However these just wouldn't constitute normative reason for action and that's just what you need for an action to be choice-worthy.
[...]
As I don't think that mere desires create reasons for action I think we can ignore them unless they are actually prudential reasons.

I don't know how to argue against this, you seem to be taking it as axiomatic. The one thing I can say is that it seems clearly obvious to me that your desires and goals can make some actions better to choose than others. It only becomes non-obvious if you expect there to be some external-to-you force telling you how to choose actions, but I see no reason to assume that. It really is fine if your actions aren't guided by some overarching rule granted authority by virtue of being morality.

But I suspect this isn't going to convince you. Can we simply assume that prudential reasons exist and figure out the implications?

The distinction between normative/prudential is one developed in the relevant literature, see this abstract for a paper by Roger Crisp to get a sense for it.

Thanks, I think I've got it now. (Also it seems to be in your appendix, not sure how I missed that before.)

The issue is that we're trying to work out how to act under uncertainty about what sort of world we're in.

I know, and I think in the very next paragraph I try to capture your view, and I'm fairly confident I got it right based on your comment.

However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do.

This seems tautological when you define morality as "binding oughtness" and compare against regular oughtness (which presumably applies to prudential reasons). But why stop there? Why not go to metamorality, or "binding meta-oughtness" that trumps "binding oughtness"? For example, "when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons".

It is again tautologically true that a person who does what there is most metamoral reason to do could not have failed to do what there was most all things considered reason for them to do. It doesn't sound as compelling, but I claim that is because we don't have metamorality as an intuitive concept, whereas we do have morality as an intuitive concept.

Thanks for the really thoughtful engagement.

I don't know how to argue against this, you seem to be taking it as axiomatic.

I agree, my view stems from a bedrock of intuition, that just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream' create them either. It doesn't seem obvious to me that the desire fact is much more likely to create normative reasons than the table fact. If we don't think the table fact would then we shouldn't think the desire fact would either.

This seems tautological when you define morality as "binding oughtness" and compare against regular oughtness (which presumably applies to prudential reasons).

Apologies for a lack of clarity: my use of 'binding oughtness' was meant to apply to both prudential and moral reasons for action. Another way of describing the property that normative reasons seem to have is that they create this external rational tug on us to do a particular thing.

So I think both prudential and moral reasons create this sort of rational tug on us, and my further claim is that if both prudential and moral reasons exist and conflict in a given case then the moral reasons will override/outweigh the prudential reasons for the reasons given in your quotation.

Why not go to metamorality, or "binding meta-oughtness" that trumps "binding oughtness"? For example, "when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons".

I worry that I'm not understanding the full force of your objection here. I have a very low credence that your proposed meta-normative rule would be true? What arguments are there for it?

There seems to be something that makes you think that moral reasons should trump prudential reasons. The overall thing I'm trying to do is narrow down on what that is. In most of my comments, I've thought I've identified it, and so I argued against it, but it seems I'm constantly wrong about that. So let me try and explicitly figure it out:

How much would you agree with each of these statements:

  • If there is a conflict between moral reasons and prudential reasons, you ought to do what the moral reasons say.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions one ought to take, then you ought to do what that process prescribes, regardless of what you desire.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions to take to maximize utility, then you ought to do what that process prescribes, regardless of what you desire.
  • If there is an external-to-you entity satisfying property X that prescribes actions you should take, then you ought to do what it says, regardless of what you desire. (For what value of X would you agree with this statement?)
I have a very low credence that your proposed meta-normative rule would be true?

I also have a very low credence of that meta-normative rule. I meant to contrast it to the meta-normative rule "binding oughtness trumps regular oughtness", which I interpreted as "moral reasons trump prudential reasons", but it seems I misunderstood what you meant there, since you mean "binding oughtness" to apply both to moral and prudential reasons, so ignore that argument.

I agree, my view stems from a bedrock of intuition, that just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream' create them either.

This makes me mildly worried that you aren't able to imagine the worldview where prudential reasons exist. Though I have to admit I'm confused why under this view there are any normative reasons for action -- surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.

(Btw, random note, I suspect that Ben Pace above and I have very similar views, so you can probably take your understanding of his view and apply it to mine.)

There seems to be something that makes you think that moral reasons should trump prudential reasons.

The reason I have is in my original post. Namely I have a strong intuition that it would be very odd to say that someone who had done what there was most moral reason to do had failed to do what there was most 'all things considered' reason for them to do.

If my intuition here is right then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition, sorry if I made it seem like I did!

On your list of bullets:

1. 95%

2. 99%

3. 99% (Supposing for simplicity's sake that I had a credence of one in utilitarianism - which I don't)

4. I don't think I understand the set up of this question - it doesn't seem to make a coherent sentence to replace X with a number in the way you have written it.

This makes me mildly worried that you aren't able to imagine the worldview where prudential reasons exist.

I think I do have an intuitive understanding of what a prudential reason for action would be. Derek Parfit discusses the case of 'future Tuesday indifference' in On What Matters, where prior to Tuesday a person is happy to sign up for any amount of pain on Tuesdays for the tiniest benefit beforehand, even though it is really horrible when they get to Tuesdays. My view is that *if* prudential reasons exist, then avoiding future Tuesday indifference would be the most plausible sort of candidate for a prudential reason we might have.

Though I have to admit I'm confused why under this view there are any normative reasons for action -- surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.

So I think my view is similar to Parfit's on this. If normative truths exist then they are 'irreducibly normative': they do not dissolve down to any descriptive statement. If someone has a reason to do some descriptively specified action X, then this just means there is an irreducibly normative fact that makes this the case.

4. I don't think I understand the set up of this question - it doesn't seem to make a coherent sentence to replace X with a number in the way you have written it.

I did mean for you to replace X with a phrase, not a number.

If my intuition here is right then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition, sorry if I made it seem like I did!

Your intuition involves the complex phrase "moral reason" for which I could imagine multiple different interpretations. I'm trying to figure out which interpretation is correct.

Here are some different properties that "moral reason" could have:

1. It is independent of human desires and goals.

2. It trumps all other reasons for action.

3. It is an empirical fact about either the universe or math that can be derived by observation of the universe and pure reasoning.

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.

A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.

Here are some models of how you might be thinking about moral reasons:

a) Moral reasons are defined as the reasons that satisfy property 1. If I think about those reasons, it seems to me that they also satisfy property 2.

b) Moral reasons are defined as the reasons that satisfy property 2. If I think about those reasons, it seems to me that they also satisfy property 1.

c) Moral reasons are defined as the reasons that satisfy both property 1 and property 2.

My response to a) and b) are of the form "That inference seems wrong to me and I want to delve further."

My response to c) is "Define prudential reasons as the reasons that satisfy property 2 and not-property 1, then prudential reasons and moral reasons both trump all other reasons for action, which seems silly/strange."

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.

I do think they are correlated, because according to my intuitions both are true of moral reasons. However I wouldn't want to argue that (2) is true because (1) is true. I'm not sure why (2) is true of moral reasons. I just have a strong intuition that it is and haven't come across any defeaters for that intuition.

A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.

This seems false to me. It's typically thought that an omniscient being (by definition) could know these non-natural irreducibly normative facts. All we'd need is some mechanism that connects humans with them. One mechanism, as I discuss in my post, is that God puts them in the brains of humans. We might wonder how God could know the non-natural facts; one explanation might be that God is the truthmaker for them, and if he is, then it seems plausible he would know them.

On your three options (a) seems closest to what I believe. Note my preferred definitions would be:

'What I have most prudential reason to do is what benefits me most (benefits in an objective rather than subjective sense).'

'What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe).'

To be clear it's very plausible to me that what 'benefits you most' is not necessarily what you desire most as seen by Parfit's discussion of future Tuesday indifference mentioned above. That's why I use the objective caveat.

Okay, cool, I think I at least understand your position now. Not sure how to make progress though. I guess I'll just try to clarify how I respond to imagining that I held the position you do.

From my perspective, the phrase "moral reason" has both the connotation that it is external to humans and that it trumps all other reasons, and that's why the intuition is so strong. But if it is decomposed into those two properties, it no longer seems (to me) that they must go together. So from my perspective, when I imagine how I would justify the position you take, it seems to be a consequence of how we use language.

What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe)

My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

I've found the conversation productive, thanks for taking the time to discuss.

My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

Impartial reasons would be reasons that would 'count' even if we were some sort of floating consciousness observing the universe without any specific personal interests.

I probably don't have any more intuitive explanations of impartial reasons than that, so sorry if it doesn't convey my meaning!

My math-intuition says "that's still not well-defined, such reasons may not exist".

To which you might say "Well, there's some probability they exist, and if they do exist, they trump everything else, so we should act as though they exist."

My intuition says "But the rule of letting things that could exist be the dominant consideration seems really bad! I could invent all sorts of categories of things that could exist, that would trump everything I've considered so far. They'd all have some small probability of existing, and I could direct my actions any which way in this manner!" (This is what I was getting at with the "meta-oughtness" rule I was talking about earlier.)

To which you might say "But moral reasons aren't some hypothesis I pulled out of the sky, they are commonly discussed and have been around in human discourse for millennia. I agree that we shouldn't just invent new categories and put stock into them, but moral reasons hardly seem like a new category."

And my response would be "I think moral reasons of the type you are talking about mostly came from the human tendency to anthropomorphize, combined with the fact that we needed some way to get humans to coordinate. Humans weren't likely to just listen to rules that some other human made up, so the rules had to come from some external source. And in order to get good coordination, the rules needed to be followed, and so they had to have the property that they trumped any prudential reasons. This led us to develop the concept of rules that come from some external source and trump everything else, giving us our concept of moral reasons today. Given that our concept of "moral reasons" probably arose from this sort of process, I don't think that "moral reasons" is a particularly likely thing to actually exist, and it seems wrong to base your actions primarily on moral reason. Also, as a corollary, even if there do exist reasons that trump all other reasons, I'm more likely to reject the intuition that it must come from some external source independent of humans, since I think that intuition was created by this non-truth-seeking process I just described."

It's been many years (about 6?) since I've read an argument like this, so, y'know, you win on nostalgia. I also notice that my 12-year old self would've been really excited to be in a position to write a response to this, and given that I've never actually responded to this argument outside of my own head (and otherwise am never likely to in the future), I'm going to do some acausal trade with my 12-year old self here: below are my thoughts on the post.

Also, sorry it's so long, I didn't have the time to make it short.

I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here's what seems to me to be a key crux of this arg (I've bolded the key sentences):

It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
...[I]magine that moral reasons were all centred around maximising the number of paperclips in the universe. It's not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons, is more complicated; see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.

Object-level response: this is confused about how values come into existence.

The things I care about aren't written into the fabric of the universe. There is no clause in the laws of physics to distinguish what's good and bad. I am a human being with desires and goals, and those are things I *actually care about*.

For any 'moral' law handed to me on high, I can always ask why I should care about it. But when I actually care, there's no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking "Yeah, but why should I care about this?" These sorts of things I'm happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.

(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright's "The Moral Animal" was really great, and Joshua Greene's "Moral Tribes" is a slightly more abstract version that also contains some key insights about how morality actually works.)

My model of the person who believes the OP wants to say

"Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they're actually good?"

To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there's something else I should care about instead, the world just makes sense now.

To point to an example of the process turning out the other way: there's been a variety of updates I've made where I no longer trust or endorse basic emotions and intuitions, since a variety of factors have all pointed in the same direction:

  • Learning about scope insensitivity and framing effects
  • Learning about how the rate of economic growth has changed so suddenly since the industrial revolution (i.e. very recently in evolutionary terms)
  • Learning about the various Dutch book theorems and axioms of rational behaviour that imply a rational agent is equivalent to an expected-utility maximiser.

These have radically changed which of my impulses I trust and endorse and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups of different scales and failing at their goal, so I learn to ignore those and teach myself to do normative reasoning (e.g. taking into account orders of magnitude intuitively), because it's what I reflectively care about.

I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn't in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong evidential backing of the above, isn't how this works.

Meta-level response: I don't trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I'm actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.

Having not personally read any of his books, I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. But the most recent wave of this philosophy of religion stuff, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public-debater William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.

Here's some relevant quotes of Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):

…the way we know Christianity to be true is by the self-authenticating witness of God’s Holy Spirit. Now what do I mean by that? I mean that the experience of the Holy Spirit is… unmistakable… for him who has it; …that arguments and evidence incompatible with that truth are overwhelmed by the experience of the Holy Spirit…

…it is the self-authenticating witness of the Holy Spirit that gives us the fundamental knowledge of Christianity’s truth. Therefore, the only role left for argument and evidence to play is a subsidiary role… The magisterial use of reason occurs when reason stands over and above the gospel… and judges it on the basis of argument and evidence. The ministerial use of reason occurs when reason submits to and serves the gospel. In light of the Spirit’s witness, only the ministerial use of reason is legitimate. Philosophy is rightly the handmaid of theology. Reason is a tool to help us better understand and defend our faith…

[The inner witness of the Spirit] trumps all other evidence.

My impression is that it's fair to characterise modern apologetics as searching for arguments to provide in defense of their beliefs, and not as the cause of them, nor as an accurate model of the world. Recall the principle of the bottom line:

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.  If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing.  But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for.  In this case, the real algorithm is "Never repair anything expensive."  If this is a good algorithm, fine; if this is a bad algorithm, oh well.  The arguments you write afterward, above the bottom line, will not change anything either way.

My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality (and man, argument space is so big, just choosing which hypothesis to privilege is most of the work, so it's not even worth exploring the particular mistakes made once you've reached this conclusion).

This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument that was severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and understanding their views because these models have predicted lots of other really important stuff. With philosophy of religion, it is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor is it based in understanding some phenomena of the world where it's actually made progress, but is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don't think it's worth spending time engaging with intellectually.

If you find yourself confused by a theologian's argument, I don't mean to say you should ignore that and pretend that you're not confused. That's a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting, or useful, it will just end up being a silly error. I also don't expect the field of theology / philosophy of religion / apologetics to accept your result, I think there will be further confusions and I think this is fine and correct and you should move on with other more important problems.

---

To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here, and did not mean to signal that I would respond to further comments in this thread any less than usual :)

Thanks Ben! I'll try and comment on your object level response in this comment and your meta level response in another.

Alas I'm not sure I properly track the full extent of your argument, but I'll try and focus on the parts that are trackable to me. So apologies if I'm failing to understand the force of your argument because I am missing a crucial part.

I see the crux of our disagreement summed up here:

My model of the person who believes the OP wants to say
"Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they're actually good?"
To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion.

I don't see how 'understanding that evolution created these cares and desires in me resolves the problem.'

Desires on their own are at most relevant for *prudential* reason for action, i.e. I want chocolate so I have a [prudential] reason to get chocolate. I attempt to deal (admittedly briefly) with prudential reasons in the appendix. Note that I don't think that these sort of prudential reasons (if they exist) amount to moral reasons.

Unless a mere desire finds itself in a world where some broader moral theory is at play (e.g. preference utilitarianism), which would itself need to enjoy an appropriate meta-ethical grounding/truthmaker (perhaps Parfit's Non-Metaphysical Non-Naturalist Normative Cognitivism), the mere desire won't create moral reasons for action. However, if you do offer some moral theory, then this just runs into the argument of my post: how would the human have access to the relevant moral theory?

In short, if you're just saying: 'actually what we talk about as moral reasons for action just boil down to prudential reasons for action as they are just desires I have' then you'll need to decide whether it's plausible to think that a mere desire actually can create an objectively binding prudential reason for action.

If instead you're saying 'moral reasons are just what I plainly and simply comprehend, and they are primitive so can have no further explanation' then I have the simple question about why you think they are primitive when it seems we can ask the seemingly legitimate question which you preempt of 'but why is X actually good?'

However, I imagine that neither of my two summaries of your argument really are what you are driving for, so apologies if that's the case.

*nods* I think what I wrote there wasn't very clear.

To restate my general point: I'm suggesting that your general frame contains a weird inversion. You're supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others' behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.

In the first one, you would be surprised to find out we've randomly been selected to have the right morality by evolution. In the second, it's almost definitional that evolution has produced us to have the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.

Does the former seem like an accurate description of the way you're proposing to think about morality?

Yep, what you suggest I think isn't far from the truth. Though note I'm open to the possibility of normative realism being false; it could be that we are all fooled and that there are no true moral facts.

I just think this question of 'what grounds this moral experience' is the right one to ask. On the view you've articulated, I just think your mere feelings about behaviours don't amount to normative reasons for action, unless you can explain how these normative properties enter the picture.

Note that normative reasons are weird: they are not like anything descriptive, and they have this strange property of what I sometimes call 'binding oughtness', in that they rationally compel the agent to do particular things. It's not obvious to me why your mere desires will throw up this special and weird property of binding oughtness.

This is my response to your meta-level response.

I don't trust the intellectual tradition of this argumentative style.

It's not obvious that anyone's asking you to trust anything? Surely those offering arguments are just asking you to assess an argument on its merits, rather than by the family of thinkers the argument emerges from?

But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.

I'm reasonably involved in the apologetics community. I think there is a good deal of rationalization going on, probably more so than in other communities, though all communities do this to some extent. However I don't think we need to worry about the intentions of those offering the arguments. We can just assess the offered arguments one by one and see whether they are successful?

William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his belief

I don't think the argument you quote is quite as silly as it sounds, a lot depends on your view within epistemology of the internalism/externalism debate. Craig subscribes to reformed epistemology, where one can be warranted in believing something without having arguments for the belief.

This doesn't seem to me to be as silly as it first sounds. Imagine we simulated beings and then just dropped true beliefs into their heads about complicated maths theorems that they'd have no normal way of knowing. It seems to me that the simulated beings would be warranted to believe these facts (as they emerged from a reliable belief forming process) even if they couldn't give arguments for why those maths theorems are the case.

This is what Craig and other reformed epistemologists are saying that God does when the Holy Spirit creates belief in God in people even if they can't offer arguments for it being the case. Given that Craig believes this, he doesn't think that we need arguments if we have the testimony of the Holy Spirit and that's why he's happy to talk about reason being non-magisterial.

My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality

I have sympathy for your concern, this seems to be a world in which motivated reasoning might naturally be more present than in chemistry. However, I don't know how much work you've done in philosophy of religion or philosophy more generally, but my assessment is that philosophy of religion is as well argued and thoughtful as many of the other branches of philosophy. As a result I don't have this fear that motivated reasoning wipes the field out. As I defended before, we can look at each argument on its own merit.

This conclusion haunts me sometimes, although I've come to it from a different direction. I find it nontrivial to find fault with, given moral uncertainty. I haven't come across the "worlds where we know moral realism" argument before. Upvoted. Here are two possible objections as replies to this comment:

Suppose for the sake of argument that I have p < .1% that a being existed who had access to moral facts and could influence the world. Given this, the likelihood that one is confused on some basic question about morality would be higher.

Good point - this sort of worry seems sensible, for example if you have a zero credence in God then the argument just obviously won't go through.

I guess from my assessment of the philosophy of religion literature it doesn't seem plausible to have a credence so low for theism that background uncertainties about being confused on some basic question of morality would be likely to make the argument all things considered unsuccessful.

Regardless, I think that the argument should still result in the possibility of theism having a larger influence on your decisions than the mere part of your probability space it takes up.

Reason could give us access to moral facts. There are plenty of informed writers who would claim to have solved the Is-Ought Problem. If one does not solve the Is-Ought problem, I’m not clear why this is better than moral subjectivism, though I didn’t read the link on subjectivism.

edit: spelling

I've never heard a plausible account of someone solving the is-ought problem, I'd love to check it out if people here have one. To me it seems structurally to not be the sort of problem that can be overcome.

I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal, it can't be the sort of thing that if someone successfully brainwashes enough people then they can get morality to change. Again, I'd be interested if people here defend a sophisticated view of subjectivism that doesn't have unpalatable results.

To link this to JP's other point, you might be right that subjectivism is implausible, but it's hard to tell how low a credence to give it.

If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).

I'm pretty uncertain about my credence in each of those views though.

This is a really nice way of formulating the critique of the argument, thanks Max. It makes me update considerably away from the belief stated in the title of my post.

To capture my updated view, it'd be something like this: for those who have what I'd consider a 'rational' probability for theism (i.e. between 1% and 99% given my last couple of years of doing philosophy of religion) and a 'rational' probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5% - less confident here) then the result of my argument is that a substantial proportion of an agent's decision space should be governed by what reasons the agent would face if theism were true.

Upvote for starting with praise, and splitting out separate threads.

The meat of this post seems to be a version of Plantinga's EAAN.

So I think that's broadly right but it's a much narrower argument than Plantinga's.

Plantinga's argument defends that we can't trust our beliefs *in general* if unguided evolution is the case. The argument I defend here is making a narrower claim that it's unlikely that we can trust our normative beliefs if unguided evolution is the case.

What does it mean to "act as if theism is true"? Are there any ways you'd expect an average member of the EA community to behave differently if they were to suddenly agree with you? I've never understood how to respond to arguments of the "something we could call 'God' must exist" variety, because I've never understood what those arguments want from me.

Some religions argue that gods are to be obeyed, or prayed to. I understand how those beliefs interact with action. But a god that issues no commands and desires no worship offers me no reason to consider it ever again, even if I do accept its existence.

Conditional on theism being true in the sense of this post, it seems especially likely that one of the particular religions that exist currently is most likely to be (approximately) true. If nothing else, you could figure out which religion is true, and then act based on what that religion asks for.

This quest to determine which religion is true sounds a lot like the genesis story of Mormonism. Joseph Smith, its founder, was deeply concerned about which religion had the truth. He prayed to know which religion to join. God the Father and Jesus Christ supposedly appeared to him and told him that none were true, but that he was chosen to restore the true religion.

I worry a lot about the danger of accepting that there are unassailable moral truths established by a supreme being. I’m a former Mormon, and the catalyst for my departure from the religion was my profound disagreement with its anti-LGBT teachings. I was taught to follow these teachings even though they seemed wrong to me because they came from God, and things that come from God are always good, whether we like them or not.

If we are to live like theists and accept that there are objective moral truths that exist because a supreme being said so, aren't we in danger of deceiving ourselves and causing undue harm?

Not if the best thing to do is actually what the supreme being said, and not what you think is right, which is (a natural consequence of) the argument in this post.

(Tbc, I do not agree with the argument in the post.)

" It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. "

Not sure if I'm misunderstanding something, but couldn't unguided evolution give us general all-purpose reasoning, and then that could be used to give Tina epistemic access to at least enough rationale to guide her actions?

Agreed that unguided evolution might give us generally reliable cognitive faculties. However, there is no obvious story we can give for how our cognitive faculties would have access to moral facts (if they exist). Moral facts don't interact with the world in a way that gives humans a means to ascertain them. They're not like visual data, where reflected/emitted photons can be picked up by our eyes. So it's not clear how information about them would enter into our cognitive system?

I'd be interested if you have thoughts on a mechanism whereby information about moral facts could enter our cognitive systems?

There are no obvious structural connections between knowing correct moral facts and evolutionary benefit.

...

There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.

I haven't read Lukas Gloor's post, so I'm not sure whether this counts as "subjectivism" and therefore is implausible to you, but:

Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit. There might be objective facts about exactly which moral systems provide this benefit, and believing in a useful moral system could help you to enact that moral system.

For example, it could be the case that what is "good" is what benefits your genes without benefiting you personally. People could thus correctly believe that there are some actions that are good, in the same way they believe that some actions are "helpful". I think, and have been told, that there are mathematical reasons to think this particular instantiation is not the case, but I haven't fully understood them yet.

Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit.

I wholeheartedly agree with this. However, there is no structural reason to think that most possible sets of moral facts would have evolutionary benefit. You outline one option where there would be a connection; however, it would be surprisingly lucky on our part if this were the story behind morality.

We would also need to acknowledge the possibility that evolution has just tricked us into thinking that common sense morality is correct when really moral facts are all about maximising the number of paperclips and we're all horribly failing to do what is moral.

It's only if there is some sort of guiding control over evolution that we could have reason to trust that we were in the 'jammy' case and not the 'evolution tricking us' case?
