Comment author: RyanCarey 18 April 2017 02:01:16AM *  3 points [-]

In general, if you're considering taking some arguably seedy action that carries collective risks, and you see that everyone else has been avoiding that action, you should guess that you've underestimated the magnitude of those risks. This is called the Unilateralist's Curse.

In this case, the reputational risks that you've incurred seem to make this a pretty unhelpful post.

The standard way to ward off the unilateralist's curse is to consult others who bear the risk but who hold different views and assumptions in order to help you to make a less biased assessment.

For this post and in general, people should consult others before writing potentially risky posts.

Comment author: MichaelPlant 18 April 2017 09:02:52PM 0 points [-]

I'm not sure I understand your point, but I think you're being a bit harsh. I would have thought floating this on the EA forum as a potential suggestion (rather than a fait accompli) is exactly consulting others to see if it's a good idea. If the EA forum weren't (as far as I can tell) just filled with EAs, I'd agree.

Also, I think it's unhelpful in turn to tell other people they're effectively stupid for floating ideas, as that 1. discourages people from sharing their views, which restricts debate to only the bold, and 2. makes people feel unwelcome.

Comment author: MichaelPlant 13 April 2017 11:26:44PM 1 point [-]

This is great. I'm really surprised the opportunity framing came out worst. I think EA's takeoff was due, in part, to the opportunity framing over (what I assumed was) Singer's off-putting obligation framing.

I guess the internet and whatever is the explanation.

I'd like to see more on this, with a much bigger study.

Comment author: Robert_Wiblin 13 April 2017 10:36:50PM 3 points [-]

I agree it's possible to do these things without being misleading (e.g. give awards to those who deserve them, and put forward good speakers).

I suspect society adapts to ensure 'no free positive signals' (something like a social equivalent of conservation of energy). Imagine that you did put forward a lousy speaker (not that you were advocating doing this). If it's easy to put on events like this in such a way that nobody involved suffers a reputation hit (e.g. nobody attends and the organisation putting on the event couldn't care less that you put forward a bad speaker), then I bet the line 'gave a talk at a law school' won't actually be that useful on a CV. Or it will quickly become devalued by people who read CVs as they cotton on to what's going on.

While at any point in time there are some misleading signals you can grab that haven't yet been devalued, it's probably more efficient (and more enduring) to gain real skills and translate them into credible signals.

But your post is most charitably read as saying 'give good speakers opportunities to perform' and 'reward people who have done virtuous things'.

Comment author: MichaelPlant 13 April 2017 11:20:47PM 2 points [-]

I'm optimistic about this and think it's potentially a good idea.

EA is potentially unusual (unique?) in having a network of smart people distributed across good universities with the goodwill to help each other. I think EA is sufficiently new and lacking lots of professionals - compared to, say, law - that there's probably low-hanging fruit to research and talk about. I mean, Oxford has the Prioritisation Project and that's largely undergrads. I don't mean that to demean them; quite the opposite, I think they're doing valuable work and it indicates how much there is to be done which can also be done credibly.

FWIW, I think 'awards from EA clubs' will look strange to non-EA employers who won't understand it, and not obviously meaningful/credible to EA employers. But I'm prepared to be proved wrong and would like to see the idea fleshed out more.

I also think having EAs do research and give talks to each other is valuable even if it doesn't go on anyone's CV.

Comment author: Carl_Shulman 31 March 2017 07:23:54PM *  13 points [-]

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular cage-free campaigns, and campaigns for slower growth-genetics and lower crowding among chickens raised for meat are all about changing the conditions into which future chickens will be born, and don't involve moving any particular chickens from the old to new systems.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and expansion of population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both human and non-human animals today, with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.

Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Comment author: MichaelPlant 03 April 2017 10:22:03AM 2 points [-]

Agree with the above, but wanted to ask: what do you mean by a 'strong presentist' view? I've not heard/seen the term and am unsure what it is contrasted with.

Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?

Comment author: BenHoffman 02 April 2017 07:39:19PM *  1 point [-]

Why does this confusion persist among long-time EA thought leaders after many years of hashing out the relevant very simple principles? "Beings currently alive" is a judgment about which changes are good in principle, "benefits the next 50 years" is an entirely different pragmatic scope limitation, and people keep bringing up the first in defense of things that can only really be justified by the second.

I understand how someone could initially be confused about this - I was too. But it seems like the right thing to do once corrected is to actually update your model of the world so you don't generate the error again. Presentism without negative utilitarianism suggests that we should focus on some combination of curing aging, real wealth creation sufficient to extend this benefit to as many currently alive people as we can, and preventing deaths before we manage to extend this benefit, including due to GCRs likely to happen during the lives of currently living beings.

As it is, we're not making intellectual progress, since the same errors keep popping up, and we're not generating actions based on the principles we're talking about, since people keep bringing up principles that don't actually recommend the relevant actions. What are we doing, then, when we talk about moral principles?

Comment author: MichaelPlant 03 April 2017 10:18:03AM 1 point [-]

To add on to this, I think the view you're referring to is presentism combined with the deprivationist view of death: presentism (only presently alive people matter) + deprivationism (the badness of death is the amount of happiness the person would have had).

You could instead be, say, a presentist (or hold another person-affecting view) combined with, say, Epicureanism about death. That view would hold that only presently alive people matter and that there's no badness in death, and hence no value in extending lives.

If that were your view you'd focus on the suffering of presently alive humans instead. Probably mental illness or chronic pain. Maybe social isolation if you had a really neat intervention.

But yeah, you're right that person-affecting views don't capture the intuitive badness of animal suffering. You could still be a presentist and v*gan on environmental grounds.

And I agree that presentism + deprivationism suggests trying to cure aging is very important and, depending on the details, could have higher EV than suffering relief. I'm unclear whether real wealth creation would do very much, due to hedonic adaptation and social comparison challenges.

Comment author: Halstead 31 March 2017 12:23:47PM 1 point [-]

Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persisting moral disagreement. People eventually start just trading which intuitions they think are the most plausible - "I'm happy to accept the repugnant conclusion, not the sadistic one" etc. But intuitions are ten a penny so this doesn't really take us very far - smart people have summoned intuitions against the analytical truth that betterness is transitive.

What we really need is an account of which moral intuitions ought to be held on to and which ones we should get rid of. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. e.g...

  1. One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility has diminishing marginal utility. People have intuitions about diminishing utility with respect to that same utility, which makes no sense - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf.

  2. Debunk an anti-aggregative view by appealing to people's failure to grasp large numbers.

  3. Debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than apprehension of independent normative truth.

You might want to look at Huemer's stuff on intuitionism. - https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD

Comment author: MichaelPlant 31 March 2017 01:20:10PM 0 points [-]

That's helpful, thanks.

Incorporating your suggestion then, when people start to intuition joust, perhaps a better idea than the two I mentioned would be to try to debunk each other's intuitions.

Do people think this debunking approach can go all the way? If it doesn't, it looks like a more refined version of the problem still recurs.

Particularly interesting stuff about prioritarianism.

Comment author: Daniel_Eth 30 March 2017 06:16:38PM 5 points [-]

Regarding “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”

I agree that being haughty is typically bad. But "X implies Y, and you claim to believe X. Do you also accept the natural conclusion, Y?" is a legitimate argument to make, even when Y is ridiculous. At that point, the other person can either accept the implication, change his mind on X, or argue that X does not imply Y. It seems like the thing you have most of a problem with is the tone, though. Is that correct?

Comment author: MichaelPlant 31 March 2017 11:14:48AM 0 points [-]

I'm not sure we're in disagreement. I think that's what I said in the following paragraph:

"To be clear, I don’t object at all to arguing about things and getting down to what people’s base intuitions are. Particularly if they haven’t thought about them before, this is really useful. People should understand what those intuitions commit them to and whether they are consistent so they can decide if they like the consequences or want to revise their views. My objection is that, once you've worked you way down to someone's base intuitions, you shouldn't mock them just because their intuitions are different from yours. It’s the jousting aspect I think is wrong"

Comment author: Robert_Wiblin 30 March 2017 11:53:00PM *  2 points [-]

"It also exists amongst academic philosophers"

As far as I can tell, virtually all academic philosophy bottoms out at some kind of intuition jousting. In each philosophical sub-field, no matter what axioms you accept, common sense is going to suffer some damage, and people differ on where they'd least mind taking the hit. And there doesn't seem to be another means to choose among the most foundational premises on which people's models are built.

I predict nothing will stop people from intuition jousting except a more objective, or dialectically persuasive, way to answer philosophical questions.

Comment author: MichaelPlant 31 March 2017 11:13:29AM 0 points [-]

See my answer to Michelle below where I try to clarify what I mean.

Comment author: Michelle_Hutchinson 31 March 2017 10:27:37AM *  9 points [-]

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition that they claim they have (either that they don't really hold that intuition or that they are wrong to do so). Eg: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers.

Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments.

Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say 'Your intuition that A is ludicrous', they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Comment author: MichaelPlant 31 March 2017 11:11:59AM 3 points [-]

Thanks for this Michelle. I don't think I've quite worked out how to present what I mean, which is probably why it isn't clear.

To try again, what I'm alluding to are argumentative scenarios where X and Y are disagreeing, and it's apparent to both of them that X knows what view he/she holds, what its weird implications are, and X still accepts the view as being, on balance, right.

Intuition jousting is where Y then says things like "but that's nuts!" Note Y isn't providing an argument now. It's a purely rhetorical move that uses social pressure ("I don't want people to think I'm nuts") to try to win the argument. I don't think conversations are very interesting or useful at this stage. Note also that X is able to turn this around on Y and say "but your view has different weird implications of its own, and that's more nuts!" It's like a joust because the two people are just testing who's able to hold on to their view under pressure from the other.

I suppose Y could counter-counter-attack X and say "yeah, but more people who have thought about this deeply agree with me". It's not clear what logical (rather than rhetorical) force this adds. It seems like 'deeply' would, in any case, be doing most of the work in that scenario.

I'm somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement ("intuition exchanging") rather than intuition jousting anyway. From my anecdata, there are both types in the EA community and it's only the jousting variety I object to.


Intuition Jousting: What It Is And Why It Should Stop

Originally posted on my blog. Over the past year or so I’ve become steadily more aware of and annoyed by a phenomenon I’m going to call, for lack of a better term, ‘intuition jousting’ (‘IJ’). My experience, and obviously I can only speak for my own, is that IJ...
