
John_Maxwell_IV comments on An argument for broad and inclusive "mindset-focused EA" - Effective Altruism Forum



Comment author: John_Maxwell_IV 17 July 2017 04:23:26AM * 6 points

Does anyone know which version of your analogy early science actually looked like? I don't know very much about the history of science, but it seems worth noting that science is strongly associated with academia, which is famous for being exclusionary & elitist. ("The scientific community" is almost synonymous with "the academic science community".)

Did science ever call itself a "movement" the way EA calls itself a movement? My impression is that the skeptic movement (the thing that spreads scientific ideas and attitudes through society at large) came well after science proved its worth. If broad scientific attitudes were a prerequisite for science, that would predict that the popular atheism movement should have come several centuries sooner than it did.

If one's goal is to promote scientific progress, it seems better to focus on a few top people who make important discoveries. There's plausibly something similar going on with EA.

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups? If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

The current situation WRT growth of the EA movement seems like it could be the worst of both worlds. The EA movement does marketing, but we also have discussions internally about how exclusive to be. So people hear about EA because of the marketing, but they also hear that some people in the EA movement think the movement should perhaps be exclusive enough to keep them out. We'd plausibly be better off if we adopted a compromise position of doing less marketing and also having fewer discussions about how exclusive to be.

Growth is a hard-to-reverse decision. Companies like Google are very selective about who they hire because firing people is bad for morale. The analogy here is that instead of "firing" people from EA, we're better off not doing outreach to those people in the first place.

[Highly speculative]: One nice thing about companies and universities is that they have a clear, well-understood inclusion/exclusion mechanism. In the absence of such a mechanism, you can get concentric circles of inclusion/exclusion and associated internal politics. People don't resent Harvard for rejecting them, at least not for more than a month or two. But getting a subtle cold shoulder from people in the EA community will produce a lasting negative impression. Covert exclusiveness feels worse than overt exclusiveness, and having an official party line that "the EA movement must be welcoming to everyone" will just cause people to be exclusive in a more covert way.

Comment author: Kaj_Sotala 17 July 2017 10:00:47AM 2 points

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups?

I must have expressed myself badly somehow - I specifically meant that "low-level EA" would be composed of multiple groups. What gave you the opposite impression?

For example, the current situation is that organizations like the Centre for Effective Altruism and the Open Philanthropy Project are high-level organizations: they are devoted to finding the best ways of doing good in general. At the same time, organizations like the Centre for the Study of Existential Risk, Animal Charity Evaluators, and the Center for Applied Rationality are low-level organizations, as each is devoted to a specific cause area (x-risk, animal welfare, and rationality, respectively). We already have several high- and low-level EA groups, and spreading the ideas would ideally cause even more of both to be formed.

If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

This seems completely compatible with what I said? On my own behalf, I'm definitely interested in trying to achieve a position of higher influence to better spread these ideas.