In response to Introducing Enthea
Comment author: MichaelPlant 09 August 2017 01:20:06PM 6 points [-]

Hello Milan. I've been working on drug policy reform for the last couple of months and have just put up the 1st of a series of posts on the topic on this forum. I'd be delighted to get your input on this, although the potential recreational benefits of drugs are not really what we're leading with.


High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary

In the last 4 months, I’ve come to believe drug policy reform, changing the laws on currently illegal psychoactive substances, may offer a substantial, if not the most substantial, opportunity to increase the happiness of humans alive today. I consider this result very surprising. I’ve been researching how best to... Read More
Comment author: MichaelPlant 06 August 2017 07:01:44PM 2 points [-]

This isn't a criticism of the analysis but a question about the audience for which it was intended. Do many EAs give to medical research? Whilst there are lots of non-EAs who donate to, say, cancer research, I can't remember having a conversation with an EA where they'd said this is what they donate to.

FWIW, I agree that cancer seems to be substantially overfunded compared to alternatives.

In response to EAGx Relaunch
Comment author: MichaelPlant 23 July 2017 01:01:06PM 3 points [-]

This is less a question about EAGxs themselves than about the reasoning behind the change. I'm curious about this line of thinking:

EA believes that, at least at the moment, our efforts to improve the world are bottlenecked by our ability to help promising people become fully engaged, rather than attracting new interest.

Could you say 1) why CEA has come to believe this and 2) what this means you'll be trying to do differently (besides these specific changes to EAGxes)?

This isn't a critical question; I'd just like to know more.

Comment author: Michelle_Hutchinson 19 July 2017 01:47:04PM *  2 points [-]

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as ‘EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse’. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at some time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life’s work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren’t held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on the actions you’re allowed to take to increase welfare. Most people think there are some limits on what we’re allowed to do to some people in order to benefit others. Most philosophers believe there are some deontological principles / agent-centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Comment author: MichaelPlant 20 July 2017 10:07:51PM *  0 points [-]

Thanks Michelle.

I agree there's a difficulty in finding a theoretical justification for how inclusive you are. I think this overcooks the problem somewhat, as an easier practical principle would be "be so inclusive that no one feels their initially preferred theory isn't represented". You could swap "no one" for "few people", with "few" to be further defined. There doesn't seem to be much point in saying "this is what a white supremacist would think" as there aren't that many floating around EA, for whatever reason.

On your suggestions for being inclusive, I'm not sure the first two are so necessary, simply because it's not clear what types of EA actions prioritarians and deontologists will disagree about in practice. For which charities will utilitarians and prioritarians diverge, for instance?

On the third, I think we already do that, don't we? We already have lots of human-focused causes people can pick if they aren't concerned about non-human animals.

On the last, the only view I can think of which puts no value on the future would be one with a very high pure time discount rate. I'm inclined towards person-affecting views and I think climate change (and X-risk) would be bad and are worth worrying about: they could impact the lives of those alive today. As I said to Ben Todd earlier, I just don't think they swamp the analysis.

Comment author: MichaelPlant 20 July 2017 09:54:22PM 7 points [-]

This was great and I really enjoyed reading it. It's a pleasure to see one EA disagreeing with another with such eloquence, kindness and depth.

What I would say is that, even as someone doing a PhD in Philosophy, I found a bunch of this hard to follow (I don't really do any work on consciousness), particularly objection 7 and the section where you introduced QRI's own approach. I'll entirely understand if you think making this more accessible is more trouble than it's worth; I just thought I'd let you know.

Comment author: Alex_Barry 18 July 2017 05:04:02PM 0 points [-]

Hey Michael, sorry I am slightly late with my comment.

To start, I broadly agree that we should not be misleading about EA in conversation; however, my impression is that this is not a large problem (although we might have very different samples).

I am unsure where I stand on moral inclusivity/exclusivity, although as I discuss later I think this is not actually a particularly major problem, as most people do not have a set moral theory.

I am wondering what your ideal inclusive effective altruism outreach looks like?

I am finding it hard to build up a cohesive picture from your post and comments, and I think some of your different points don't quite gel together in my head (or at least not in an inconvenient possible world).

You give an example of this: beginning a conversation with global poverty and then transitioning to explaining the diversity of EA views by:

Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn't accidentally wipe itself out, etc.

For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk in which he said something like (I can't remember exactly): "some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it's probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves." At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

However, trying to make this match the style of an event a student group could actually run, it seems like the closest match (other than a straightforward intro-to-EA event) would be a talk on effective global poverty charity, followed by an addendum at the end on EA being broader. (I think this due to a variety of practical concerns, such as there being far more good speakers and big names in global poverty, and it providing many concrete examples of how to apply EA concepts, etc.)

I am, however, skeptical that an addendum at the end of a talk would create nearly as strong an impression as the subject matter of the talk itself, and people would still leave with a much stronger impression of EA as being about global poverty than e.g. x-risk.

You might say a more diverse approach would be to have talks etc. roughly in proportion to what EAs actually believe is important: to make things simple, if a third of EAs thought global poverty was most important, a third x-risk and a third animal suffering, then a third of the talks should be on global poverty, a third on x-risk, etc. Each of these could then end with this explanation of EA being broader.

However, if people's current perception that global poverty events are the best way to get new people into EA is in fact right (at least in the short term), whether through better attendance or better conversion ratios, this approach could still lead to the majority of new EAs' first introduction to EA being through a global poverty talk.

Due to the previous problem of the addendum not really changing people's impressions enough, we could still end up with the situation you say we should want to avoid, where:

People should not feel surprised about what EAs value when they get more involved in the movement.

I am approaching this all more from the student group perspective, and so don't have strong views on the website stuff, although I will note that my impression was that 80k does a good job of being inclusive, and that GWWC's issue is more a lack of updates than anything else.

One thing you don't particularly seem to be considering is that almost all people don't actually have strongly formed moral views that conform to one of the common families (utilitarianism, virtue ethics, etc.), so I doubt (but could be wrong, as there would probably be a lot of survivorship bias in this) that a high percentage of newcomers to EA feel excluded by the implicit assumptions that might often be made, e.g. that future people matter.

Comment author: MichaelPlant 18 July 2017 10:09:01PM 1 point [-]

Hello Alex,

Thanks for the comments. FWIW, when I was thinking about inclusivity I had in mind 1) the websites of EA orgs and 2) introductory pitches at (student) events, rather than the talks involved in running a student group. I have no views on student groups being inclusive in their full roster of talks, not least because I doubt the groups would cohere enough to push a particular moral theory.

I agree that lots of people don't have strong moral views, and I think EA should be a place where they figure out what they think, rather than a place where various orgs push them substantially in one direction or another. As I stressed, I think even the perception of a 'right' answer is bad for truth-seeking. Ben Todd doesn't seem to have responded to my comments on this, so I'm not really sure what he thinks.

And, again FWIW, survivorship bias is a concern. Anecdotally, I know a bunch of people who decided EA's weirdness, particularly with reference to the far future, was what made them decide not to come back.

Comment author: Michelle_Hutchinson 12 July 2017 11:13:17AM 7 points [-]

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

Comment author: MichaelPlant 12 July 2017 04:10:45PM 0 points [-]

I hadn't thought of TLYCS as an/the anti-poverty org. I guess I didn't think about it because they're not so present in my part of the EA blogosphere. Maybe it's less of a problem if there are at least charities/orgs to represent different world views (although this would require quite a lot of duplication of work, so it's less than ideal).

Comment author: Ben_Todd 10 July 2017 10:28:18PM 0 points [-]

Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don't read the comment threads.

Comment author: MichaelPlant 10 July 2017 11:40:51PM *  1 point [-]

I'll add a note saying you provide a decision tool, but I don't think you do what I suggest (obviously, you don't have to do what I suggest and can think I'm wrong!).

I don't think it's correct to call 80k morally inclusive, because you substantially pick a preferred outcome/theory and then provide the decision tool as a sort of afterthought. By my lights, being morally inclusive is incompatible with picking a preferred theory. You might think moral exclusivity is, all things considered, the right move, but we should at least be clear that's the choice you've made. In the OP I suggested there were advantages to inclusivity over exclusivity, and I'd be interested to hear if/why you disagree.

I'm also not sure if you disagree with me that the scale of suffering for the living from an X-risk disaster is probably quite small, and that the happiness lost to long-term conditions (mental health, chronic pain, ordinary human unhappiness) is of a much larger scale than you've allowed. I'm very happy to discuss this with you in person to hear what, if anything, would cause you to change your views on this. It would be a bit of a surprise if every moral view agreed X-risks were the most important thing, and it's also a bit odd if you've left some of the biggest problems (by scale) off the list. I accept I haven't made substantial arguments for all of these in writing, but I'm not sure what evidence you'd consider relevant.

I've also offered to help rejig the decision tool (perhaps after discussing it with you) and that offer still stands. On a personal level, I'd like the decision tool to tell me what I think the most important problems are and to better reflect the philosophical decision process! You may decide this isn't worth your time.

Finally, I think my point about moral uncertainty still stands. If you think it is really important, it should probably feature somewhere. I can't see a mention of it here: https://80000hours.org/career-guide/world-problems/

Comment author: DavidNash 10 July 2017 09:04:47PM 0 points [-]

Might it be that 80k recommend X-risk because it's neglected (even within EA), and that if more than 50% of EAs had X-risk as their highest priority it would no longer be as neglected?

Comment author: MichaelPlant 10 July 2017 09:43:16PM 0 points [-]

Sure. But in that case GWWC should take the same sort of line, presumably. I'm unsure how/why the two orgs should reach different conclusions.
