Originally posted on my blog.

Over the past year or so I’ve become steadily more aware of, and annoyed by, a phenomenon I’m going to call, for lack of a better term, ‘intuition jousting’ (‘IJ’). My experience, and obviously I can only speak for my own, is that IJ is quite a serious phenomenon in the Effective Altruism (‘EA’) community. It also exists amongst academic philosophers, although to a much more modest extent. I’ll explain what IJ is, why it’s bad, why I think it’s particularly prevalent in EA, and what people should be doing instead.

Intuition jousting is the act of challenging whether someone seriously holds the intuition they claim to have. The implication is nearly always that the target of the joust has the ‘wrong’ intuitions. This is typically the last stage in an argument: you’ve already discussed the pros and cons of a particular topic and have realised you disagree because you’ve just got different starting points. Once you’ve exhausted all the logical arguments, there is one additional rhetorical move to make: claiming someone’s fundamental (moral) instincts are just flawed. I call it ‘jousting’ because all it involves is testing how firmly attached someone is to their view: you’re trying to ‘unhorse’ them. Intuition jousting is a test of psychological tenacity, not philosophical understanding.

(It’s possible there’s already a term for this phenomenon that I’ve not come across. I should note it’s similar to giving someone whose argument you find absurd an ‘incredulous stare’: you don’t provide a reason against their position, you just look them in the eye like they’re mad. The incredulous stare is one potential move in an intuition joust.)

To give a common example, lots of philosophers and effective altruists disagree about the value of future people. To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’). To others, it’s just obvious there is nothing morally good about creating new people and we should focus on present-day suffering. Both views have weird implications, which I won’t go into here (see Greaves 2015 for a summary), but the conversation often reaches its finale with one person saying “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.” Typically at that stage the person will fold his arms (it’s nearly always a ‘he’) and look around the room for support, expecting he’s now won the argument.

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it’s rude to do and unpleasant to receive. I hope it’s clear that IJing isn’t arguing; it’s just disagreeing about who has what intuitions and how good they are. Given that intuitions are the things you have without reasoning or evidence, IJ has to be pointless. If you reach the stage where someone says “yeah, I can’t give you any further reasons, I just do find the view plausible” and you then decide to tell them those beliefs are stupid, all you’re doing is trying to shame or pressure them into admitting defeat so that you can win the argument. Obviously, where intuition jousting occurs and people feel their personal views will be attacked if they share them, they will be much less inclined to cooperate or work together. There’s also a very real danger of creating accidental group-think and intellectual segregation. This may already be happening: suppose members of some group IJ those they disagree with. Individually, people decide not to participate in that group, with the result that those left in it all have similar views and, additionally, think those views are more commonly held than they really are.

To be clear, I don’t object at all to arguing about things and getting down to what people’s base intuitions are. Particularly if they haven’t thought about them before, this is really useful. People should understand what those intuitions commit them to and whether they are consistent so they can decide if they like the consequences or want to revise their views. My objection is that, once you’ve worked your way down to someone’s base intuitions, you shouldn’t mock them just because their intuitions are different from yours. It’s the jousting aspect I think is wrong.

I’ve noticed IJing happens much more among effective altruists than academic philosophers. I think there are two reasons for this. The first is that the stakes are higher for effective altruists. If you’re going to base your entire career on whether view X is right or wrong, getting X right really matters in a way it doesn’t if two philosophers disagree over whether Plato really meant A, A*, or A**. The second is that academic philosophers (i.e. people who have done philosophy at university for more than a couple of years) just accept that people will have different intuitions about topics; it’s normal and there’s nothing you can do about it. If I meet a Kantian and we get chatting about ethics, I might believe I’m right and he’s wrong (again, it’s mostly ‘hes’ in philosophy), but there’s no sense fighting over it. I know we’ve just started from different places. Whilst there are lots of philosophical types among effective altruists, by no means all EAs are used to philosophical discourse. So when one EA who has strong views runs into another EA who doesn’t share them, it’s more likely that one or both of them will assume there must be a fact of the matter to be found, and one obvious and useful way to settle it is to intuition joust it out until one person admits defeat.

I admit I’ve done my fair share of IJing in my time. I’ll hold my lance up high and confess to that. Doing it is fun and I find it a hard habit to drop. That said, I’ve increasingly realised it’s worth trying to repress my instincts because IJing is counter-productive. Certainly I think the effective altruist community should stop. (I’m less concerned about philosophers because 1. they do it less and 2. lots of philosophy is low-stakes anyway).

What should people do instead? The first step, which I think people should basically always take, is to stop before you start. If you realise you’re lining yourself up for the charge, recognise that it will be pointless. Instead, say “Huh, I guess we just disagree about this, how weird”. This is the ‘stop jousting’ option. The second step, which is optional but advised, is to try to gain understanding by working out why the other person holds those views: “Oh wow. I think about it this way. Why do you think about it that way?” This is more the ‘dismount and talk’ option.

As Toby Ord has argued, it’s possible for people to engage in moral trade. The idea is that two people can disagree about what’s valuable, but it can still be good for both parties to cooperate and help each other reach their respective moral goals. I really wish I saw more scenarios in the EA community where, should an X-risk advocate end up speaking to an animal welfare advocate, rather than each dismissing the other as wrong or stupid (either out loud or in their head), or jousting over who supports the right cause, they tried to help each other better achieve their objectives. From what I’ve seen, philosophers tend to be much better at taking it in turns to develop each other’s views, even if they don’t remotely share them.

And if we really do feel the need to joust, can’t we at least attack the intuitions of those heartless bastards over at Charity Navigator or the Make-a-Wish Foundation instead?*

*This is a joke, I’m sure they are lovely people doing valuable work.

 

Comments

To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’).

I realize this is just an example, but I want to mention as a side-note that I find it weird what a common framing this is. AFAIK almost everyone working on existential risk thinks it's a serious concern in our lifetimes, not specifically a "far future" issue or one that turns on whether it's good to create new people.

As an example of what I have in mind, I don't understand why the GCR-focused EA Fund is framed as a "long-term future" fund (unless I'm misunderstanding the kinds of GCR interventions it's planning to focus on), or why philosophical stances like the person-affecting view and presentism are foregrounded. The natural things I'd expect to be foregrounded are factual questions about the probability and magnitude over the coming decades of the specific GCRs EAs are most worried about.

Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk reduced by 1% × 8 billion lives) = $125 per life saved. Compares with $3,000-7,000+ for AMF.
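For anyone who wants to check the arithmetic, here is a minimal sketch of that back-of-envelope estimate. All figures are the illustrative assumptions stated above, not established values:

```python
# A minimal sketch of the back-of-envelope estimate above.
# All inputs are illustrative assumptions, not established figures.
p_risk = 0.10          # assumed 1-in-10 chance of AI killing everyone within 50 years
risk_reduction = 0.01  # assumed 1% relative reduction in that risk from extra spending
spend = 1e9            # assumed $1 billion extra spent on safety research
population = 8e9       # roughly the number of people alive today

expected_lives_saved = p_risk * risk_reduction * population  # 8 million lives in expectation
cost_per_life = spend / expected_lives_saved

print(cost_per_life)   # 125.0 dollars per life saved, vs. ~$3,000-7,000+ for AMF
```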

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular cage-free campaigns, and campaigns for slower growth-genetics and lower crowding among chickens raised for meat are all about changing the conditions into which future chickens will be born, and don't involve moving any particular chickens from the old to new systems.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and expansion of population by 3-12 orders of magnitude over the next 10 years (with AI life of various kinds outnumbering both human and non-human animals today, with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.

Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Mea culpa that I switched from "impact on beings alive today" to "benefits over the next 50 years" without noticing.

Agree with the above, but wanted to ask: what do you mean by a 'strong presentist' view? I've not heard/seen the term and am unsure what it is contrasted with.

Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?

"Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?"

In my comment, yes.

Why does this confusion persist among long-time EA thought leaders after many years of hashing out the relevant very simple principles? "Beings currently alive" is a judgment about which changes are good in principle, "benefits the next 50 years" is an entirely different pragmatic scope limitation, and people keep bringing up the first in defense of things that can only really be justified by the second.

I understand how someone could be initially confused about this - I was too, initially. But, it seems like the right thing to do once corrected is to actually update your model of the world so you don't generate the error again. Presentism without negative utilitarianism suggests that we should focus on some combination of curing aging, real wealth creation sufficient to extend this benefit to as many currently alive people as we can, and preventing deaths before we manage to extend this benefit, including due to GCRs likely to happen during the lives of currently living beings.

As it is, we're not making intellectual progress, since the same errors keep popping up, and we're not generating actions based on the principles we're talking about, since people keep bringing up principles that don't actually recommend the relevant actions. What are we doing, then, when we talk about moral principles?

To add on to this, I think the view you're referring to is presentism combined with the deprivationism view on death: presentism = only presently alive people matter + deprivationism = the badness of death is the amount of happiness the person would have had.

You could be, say, a presentist (or hold another person-affecting view) combined with, say, Epicureanism about death. That would hold that only presently alive people matter and there's no badness in death, and hence no value in extending lives.

If that were your view you'd focus on the suffering of presently existing humans instead. Probably mental illness or chronic pain. Maybe social isolation if you had a really neat intervention.

But yeah, you're right that person-affecting views don't capture the intuitive badness of animal suffering. You could still be a presentist and v*gan on environmental grounds.

And I agree that presentism + deprivationism suggests trying to cure aging is very important and, depending on details, could have higher EV than suffering relief. I'm unclear that real wealth creation would do very much due to hedonic adaptation and social comparison challenges.

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

I don't have much to add to what Rob W and Carl said, but I'll note that Bostrom defined "existential risk" like this back in 2008:

A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically.

Presumably we should replace "intelligent" here with "sentient" or similar. The reason I'm quoting this is that on the above definition, it sounds like any potential future event or process that would cost us a large portion of the future's value counts as an xrisk (and therefore as a GCR). 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' sounds like a global catastrophic risk to me, on that definition. (From a perspective that does care about long-term issues, at least.)

I'll note that I think there's at least some disagreement at FHI / Open Phil / etc. about how best to define terms like "GCR", and I don't know if there's currently a consensus or what that consensus is. Also worth noting that the "risk" part is more clearly relevant than the "global catastrophe" part -- malaria and factory farming are arguably global catastrophes in Bostrom's sense, but they aren't "risks" in the relevant sense, because they're already occurring.

"counts as an xrisk (and therefore as a GCR)"

My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people

(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).

So, as I was using the term, something being an x-risk does not entail it being a GCR. I'd count 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' as an x-risk but not a GCR.

Interesting (/worrying!) how we're understanding widely-used terms so differently.

Agree that that's the most common operationalization of a GCR. It's a bit inelegant for GCR not to include all x-risks though, especially given that it is used interchangeably within EA.

It would be odd if the onset of a permanently miserable dictatorship didn't count as a global catastrophe because no lives were lost.

Could you or Will provide an example of a source that explicitly uses "GCR" and "xrisk" in such a way that there are non-GCR xrisks? You say this is the most common operationalization, but I'm only finding examples that treat xrisk as a subset of GCR, as the Bostrom quote above does.

You're right, it looks like most written texts, especially more formal ones give definitions where x-risks are equal or a strict subset. We should probably just try to roll that out to informal discussions and operationalisations too.

"Definition: Global Catastrophic Risk – risk of events or processes that would lead to the deaths of approximately a tenth of the world’s population, or have a comparable impact." GCR Report

"A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale." - Wiki

"Global catastrophic risk (GCR) is the risk of events large enough to significantly harm or even destroy human civilization at the global scale." GCRI

"These represent global catastrophic risks - events that might kill a tenth of the world’s population." - HuffPo

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition they claim to have (either that they don't really hold that intuition or that they are wrong to do so). Eg: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers. Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments. Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say 'Your intuition that A is ludicrous', they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Thanks for this Michelle. I don't think I've quite worked out how to present what I mean, which is probably why it isn't clear.

To try again, what I'm alluding to are argumentative scenarios where X and Y are disagreeing, and it's apparent to both of them that X knows what view he/she holds, knows what its weird implications are, and still accepts the view as being, on balance, right.

Intuition jousting is where Y then says things like "but that's nuts!" Note Y isn't providing an argument now. It's a purely rhetorical move that uses social pressure ("I don't want people to think I'm nuts") to try to win the argument. I don't think conversations are very interesting or useful at this stage. Note also that X is able to turn this around on Y and say "but your view has different weird implications of its own, and that's more nuts!" It's like a joust because the two people are just testing who's able to hold on to their view under pressure from the other.

I suppose Y could counter-counter attack X and say "yeah, but more people who have thought about this deeply agree with me". It's not clear what logical (rather than rhetorical) force this adds. It seems like 'deeply' would, in any case, be doing most of the work in that scenario.

I'm somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement ("intuition exchanging") rather than intuition jousting anyway. From my anecdata, there are both types in the EA community and it's only the jousting variety I object to.


Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persisting moral disagreement. People eventually start just trading which intuitions they think are the most plausible - "I'm happy to accept the repugnant conclusion, not the sadistic one" etc. But intuitions are ten a penny so this doesn't really take us very far - smart people have summoned intuitions against the analytical truth that betterness is transitive.

What we really need is an account of which moral intuitions ought to be held on to and which ones we should get rid of. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. e.g...

  1. One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility has diminishing marginal utility. People have intuitions about diminishing utility with respect to that same utility, which makes no sense - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf.

  2. Debunk an anti-aggregative view by appealing to people's failure to grasp large numbers.

  3. Debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than apprehension of independent normative truth.

You might want to look at Huemer's stuff on intuitionism. - https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD

That's helpful, thanks.

Incorporating your suggestion then, when people start to intuition joust, perhaps a better idea than the two I mentioned would be to try to debunk each other's intuitions.

Do people think this debunking approach can go all the way? If it doesn't, it looks like a more refined version of the problem still recurs.

Particularly interesting stuff about prioritarianism.


It's a difficult question when we can stop debunking and what counts as successful debunking. But this is just to say that moral epistemology is difficult. I have my own views on what can and can't be debunked. e.g. I don't see how you could debunk the intuition that searing pain is bad. But this is a massive issue.

if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful

For a related example, see Carl's comment on why presentism doesn't have the implications some people claim it does.

Regarding “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”

I agree that being haughty is typically bad. But the argument "X implies Y, and you claim to believe X. Do you also accept the natural conclusion, Y?" when Y is ridiculous is a legitimate argument to make. At that point, the other person can either accept the implication, change his mind on X, or argue that X does not imply Y. It seems like the thing you have most of a problem with is the tone, though. Is that correct?

I've noticed this before, and I think it's a wrong truth-seeking device on a technical level.

Basically, I'm really leery of reductio ad absurdums with statements that are inherently probabilistic in general, but especially when it comes to ethics.

A straightforward reductio ad absurdum goes:

  1. Say we believe in P
  2. P implies Q
  3. Q is clearly wrong
  4. Therefore, not P.

However, in philosophical ethics it's more like

  1. Say we believe in P
  2. A seems reasonable
  3. B seems reasonable
  4. C seems kind of reasonable.
  5. D seems almost reasonable if you squint a little, at least it's more reasonable than P
  6. E has a >50% chance of being right.
  7. P and A and B and C and D and E implies Q
  8. Q is an absurd/unintuitive conclusion.
  9. Therefore, not P

The issue here is that most of the heavy lifting is done by appeals to conjunctions, and conflating >50% probabilities with absolute truths.
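To make the conjunction point concrete, here is a small illustrative sketch. The credences are made up for the sake of the example; the point is only that premises which each seem more likely than not can have a quite improbable conjunction:

```python
# Illustrative only: made-up credences for auxiliary premises A through E.
premise_credences = [0.8, 0.8, 0.7, 0.6, 0.55]

# If the premises were independent, the probability that all of them hold
# is their product, which falls well below any individual credence.
joint = 1.0
for p in premise_credences:
    joint *= p

print(round(joint, 3))  # ~0.148: the conjunction is improbable, so deriving an
                        # absurd Q from "P and A..E" puts little pressure on P.
```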

A method I've found useful for generating lots of ideas is to assume that reductio ad absurdum is not valid. This might be useful here too, for slightly different reasons.

I'm not sure we're in disagreement. I think that's what I said in the following paragraph:

"To be clear, I don’t object at all to arguing about things and getting down to what people’s base intuitions are. Particularly if they haven’t thought about them before, this is really useful. People should understand what those intuitions commit them to and whether they are consistent so they can decide if they like the consequences or want to revise their views. My objection is that, once you've worked you way down to someone's base intuitions, you shouldn't mock them just because their intuitions are different from yours. It’s the jousting aspect I think is wrong"

This is a problem, both for the reasons you give:

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.

and through this mechanism, which you correctly point out:

The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.

The above two considerations combine extremely poorly with the following:

I’ve noticed IJing happens much more among effective altruists than academic philosophers.

Another consequence of this tendency, when it emerges, is that communicating a felt sense of something is much harder, and less rewarding, to do when there's some level of social expectation that arguments from intuition will be attacked. Note that the felt senses of experts often do contain information that's not otherwise available when said experts work in fields with short feedback loops. (This is more broadly true: norms of rudeness, verbal domination, using microaggressions, and nitpicking impede communication more generally, and your more specific concept of IJ does occur disproportionately often in EA).

Note also that development of a social expectation whereby people believe on a gut level that they'll receive about as much criticism, verbal aggression, and so on regardless of how correct or useful their statements are may be especially harmful (See especially the second paragraph of p.2).

"It also exists amongst academic philosophers"

As far as I can tell virtually all academic philosophy bottoms out at some kind of intuition jousting. In each philosophical sub-field, no matter what axioms you accept, common sense is going to suffer some damage, and people differ on where they'd least mind to take the hit. And there doesn't seem to be another means to choose among the most foundational premises on which people's models are built.

I predict nothing will stop people from intuition jousting except a more objective, or dialectically persuasive, way to answer philosophical questions.

See my answer to Michelle below where I try to clarify what I mean.