Comment author: kbog  (EA Profile) 10 September 2018 12:08:35AM *  7 points [-]

Discord lets you separate servers into different channels for people to talk about different things. There is already an EA Discord, and of course new and near-term EAs are welcome there. I think it would be bad if we split things like this, because the more near-term EAs isolate themselves, the more "alienated" people will feel elsewhere, so it will be a destructive feedback loop. You're creating the problem that you are trying to solve.

Also, it would reinforce the neglect of mid-term causes, which have always gotten too little attention in EA.

I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate.

Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view? (perish the thought!) And you want to help with the growth of the movement. But hopefully you can find a better way to do this than creating an actual echo chamber. It's clearly a poor choice as far as epistemology is concerned.

You're also creating the problem you're trying to solve in a different way. Whereas most "near-term EAs" enjoy the broad EA community perfectly well, you're reinforcing an assumption that they can't get along, that they should expect EA to "alienate" them, as they hear about your server. As soon as people are pointed towards a designated safe space, they're going to assume that everything on the outside is unfriendly to them, and that will bias their perceptions going forward.

You are likely to have a lighter version of the problem that Hatreon did with Patreon, Voat with Reddit, etc - whenever a group of people has a problem with the "mainstream" option and someone tries to create an alternative space, the first people who jump ship to the alternative will be the highly motivated people on the extreme end of the spectrum, who are the most closed-minded and intolerant of the mainstream, and they are going to set the norms for the community henceforth. Don't get me wrong: it's good to expand EA with new community spaces and to be more appealing to new people, and it is always nice to see people put effort into new ideas for EA. But this plan is very flawed, and I strongly recommend that you revise it.

Comment author: MichaelPlant 12 September 2018 08:33:47AM *  5 points [-]

I don't find your objections here persuasive.

Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view?

If you want to talk about how best to X, but you run into people who aren't interested in X, it seems fine to talk to other pro-Xers. It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up? Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?

You're also creating the problem you're trying to solve in a different way. Whereas most "near-term EAs" enjoy the broad EA community perfectly well, you're reinforcing an assumption that they can't get along, that they should expect EA to "alienate" them, as they hear about your server

To be frank, I think this problem already exists. I've literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say "oh, you're the Michael Plant with the weird views" which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.

Comment author: MichaelPlant 09 September 2018 09:23:12AM *  6 points [-]

On prizes: 1) when would you plan to start them from (i.e. which posts are eligible for this)? 2) have you thought much about extrinsic motivation crowding out intrinsic motivation? My worry is that offering financial rewards changes how people will think about posting, e.g. "well, I'm probably not going to win anything, so I won't bother posting" or "there was some really good content this month, so I'm going to hold onto mine".

Comment author: MichaelPlant 22 August 2018 10:37:13PM *  2 points [-]

This may be the best moment of my life :) (no. 2 was the time I was leading the EA forum karma list...)

Out of interest, could you say how many reactions these got? I'd be curious to see what the distribution of reactions is.

Comment author: Kerry_Vaughan 18 August 2018 12:25:39AM *  11 points [-]

Thanks Sam! This is really helpful. I'd be interested in talking on Skype about this sometime soon (just emailed you about it). Some thoughts below:

Is longtermism a cause?

One idea I've been thinking about is whether it makes sense to treat longtermism/the long-term future as a cause.

Longtermism is the view that most of the value of our actions lies in what happens in the future. You can hold that view and also hold the view that we are so uncertain about what will happen in the future that doing things with clear positive short-term effects is the best thing to do. Peter Hurford explains this view nicely here.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement. Yet, I also think there are substantial and reasonable disagreements about what that means practically speaking. I'd be in favor of us working to ensure that people entering the community understand the details of that disagreement.

My guess is that while CEA is very positive on longtermism, we aren't anywhere near as positive on the cause/intervention combinations that longtermism typically suggests. For example, personally speaking, if it turned out that recruiting ML PhDs to do technical AI safety didn't have a huge impact, I would be surprised but not very surprised.

Threading the needle

My feeling as I've been thinking about representativeness is that getting this right requires threading a very difficult needle because we need to optimize against a large number of constraints and considerations. Some of the constraints include:

  • Cause areas shouldn't be tribes -- I think cause area allegiance is operating as a kind of tribal signal in the movement currently. You're either in the global poverty tribe, the X-risk tribe, or the animal welfare tribe, and people tend to defend the views of the tribe they happen to be associated with. I think this needs to stop if we want to build a community that can actually figure out how to do the most good and then do it. Focusing on cause areas as the unit of analysis for representativeness entrenches the tribal concern, but it's hard to get away from because it's an easy-to-understand unit of analysis.
  • We shouldn't entrench existing cause areas -- we should be aiming for an EA that has the ability to shift its consensus on the most pressing problems as we learn more. Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder.
  • Cause-impartiality can include having a view -- cause impartiality means that you do an impartial calculation of impact to determine what to work on. Such a calculation should lead to developing views on what causes are most important. Intellectual progress probably includes decreasing our uncertainty and having stronger views.
  • The view of CEA staff should inform, but not determine our work -- I don't think it's realistic or plausible for CEA to take actions as if we have no view on the relative importance of different problems, but it's also the case that our views shouldn't substantially determine what happens.
  • CEA should sometimes exercise leadership in the community -- I don't think that social movements automatically become excellent. Excellence typically has to be achieved on purpose by dedicated, skilled actors. I think CEA will often do work that represents the community, but will sometimes want to lead the community on important issues. The allocation of resources across causes could be one such area for leadership although I'm not certain.

There are also some other considerations around methods of improving representativeness. For example, consulting established EA orgs on representativeness concerns has the effect of entrenching the current systems of power in a way that may be bad, but that gives you a sense of the consideration space.

CEA and cause-impartiality

Suggestion: CEA should actively champion cause impartiality

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial. Cause-impartiality means that you do an impartial calculation of the impact of the cause and act on the basis of that. This is certainly what we think we've done when coming to views on specific causes although there's obviously room for reasonable disagreement.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important. I think CEA should be aspiring to have detailed, nuanced views that take into account our wide uncertainty, not no views on the question.

Making people feel listened to

I broadly agree with your points here. Regularly talking to and listening to more people in the community is something that I'm personally committed to doing.

Your section on representatives feels like you are trying to pin down a way of finding an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Just to clarify, I also don't think trying to find a number that defines representativeness is the right approach, but I also don't want this to be a purely philosophical conversation. I want it to drive action.

Comment author: MichaelPlant 20 August 2018 09:59:51PM *  6 points [-]

Longtermism is the view that most of the value of our actions lies in what happens in the future.

You mean 'in the far future', correct? Unless you believe in backwards causality, and excluding the value that occurs at the same moment you act, all the value of our actions is in the future. I presume by 'far future' you would mean actions affecting future people, as contrasted with presently existing people.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement

Cards on the table, I am not a long-termist; I am sympathetic to person-affecting views in population ethics. Given the power CEA has in shaping the community, I think it's the case that any view CEA advocated would eventually become the consensus view: anyone who didn't find it appealing would eventually leave EA.

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial.

I don't think this can be true. If you're a longtermist, you can't also hold person-affecting views in population ethics (at least, narrow, symmetric person-affecting views), so taking the longtermist position requires ruling such views out of consideration. You might think you should rule out, as obviously false, such views in population ethics, but you should concede you are doing that. To be more accurate, you could perhaps call it something like "possibilism cause impartiality - selecting causes based on impartial estimates of impact assuming we account for the welfare of everyone who might possibly exist", but then it would seem almost trivially true that longtermism ought to follow (this might not be the right name, but I couldn't think of a better restatement off-hand).

Comment author: MichaelPlant 11 August 2018 06:47:01PM 1 point [-]

I don't think McMahan would find what you call a 'solution' very appealing: McMahan doesn't think that morality is demanding in the way e.g. Singer does. Further, what you suggest ought to be the default position - morality is really demanding - is something only a small percentage of philosophers (although many EAs) believe is correct.

Comment author: james_aung 05 August 2018 08:53:41PM 6 points [-]

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P

Comment author: MichaelPlant 06 August 2018 04:02:58PM 1 point [-]

I liked this solely for the pun. Solid work, James.

Comment author: RandomEA 06 August 2018 11:44:30AM 2 points [-]

I actually began to wonder this myself after posting. Specifically, it seems like an Epicurean could think s-risks are the most important cause. Hopefully Michael Plant will be able to answer your question. (Maybe EA Forum 2.0 should include a tagging feature.)

Comment author: MichaelPlant 06 August 2018 04:01:42PM 1 point [-]

I'm not sure I see which direction you're coming from. If you're a symmetric person-affector (i.e. you reject the procreative asymmetry, the view that we're neutral about creating happy lives but against creating unhappy lives), then you don't think there's value in creating future life, good or bad. So neither x-risks nor s-risks are a concern.

Maybe you're thinking 'don't those with person-affecting views care about those who are going to exist anyway?' The answer is yes if you're a necessitarian (no if you're a presentist). But given that what we do changes who comes into existence, necessitarianism (the view that you value the wellbeing of those who will exist anyway) collapses, in practice, into presentism (the view that you value the wellbeing of those who exist right now).

Vollmer, the view that cares about the quality of the long-term future, but not about whether it happens at all, seems to be averagism.

Comment author: MichaelPlant 05 August 2018 01:54:07PM 3 points [-]

Not a comment on the content, but on the style of writing: I found it very hard to read a document with so many endnotes - they were about half the scroll length - and gave up: it was too tricky to keep flicking down to the important content and then back up again.

Comment author: MichaelPlant 03 August 2018 09:45:45PM 14 points [-]

Thanks for writing this up. Attempting to read between the lines, I am also increasingly frustrated by the feeling that near-term projects are being squeezed out of EA. I've been asking myself when (I think it's a 'when' rather than 'if') EA will become so far-future heavy there's no point me participating. I give it 2 years.

There are perhaps a couple of bigger conversations to be had here. Are the different causes friends or enemies? Often it feels like the latter, and this is deeply disappointing. We do compete over scarce resources (e.g. money), but we should be able to cooperate at a broader societal level (post forthcoming). Further, if/when would it make sense for those of us who feel irked by the exclusionary, far-futurist tilt that seems to be occurring to split off and start doing our own thing?

Comment author: MichaelPlant 03 August 2018 09:57:30PM *  4 points [-]

Should probably mention I have raised similar concerns before in this post: 'the marketing gap and a plea for moral inclusivity'

