
Anonymous EA comments

After seeing some of the debate last month about effective altruism's information-sharing / honesty / criticism norms (see Sarah Constantin's follow-up and replies from Holly Elmore (1, 2), Rob Wiblin (1, 2), Jacy Reese, Christopher Byrd), I decided to experiment with an approach to getting less filtered feedback. I asked folks over social media to anonymously answer this question:

If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]

I got a lot of high-quality responses, and some people suggested that I cross-post them to the EA Forum for further discussion. I've posted paraphrased versions of many of the responses below. Some cautions:

1. I have no way to verify the identities of most of the respondents, so I can't vouch for the reliability of their impressions or anecdotes. Anonymity removes some incentives that keep people from saying what's on their mind, but it also removes some incentives to be honest, compassionate, thorough, precise, etc. I also have no way of knowing whether a bunch of these submissions come from a single person.

2. This was first shared on my Facebook wall, so the responses are skewed toward GCR-oriented people and other sorts of people I'm more likely to know. (I'm a MIRI employee.)

3. Anonymity makes it less costly to publicly criticize friends and acquaintances, which seems potentially valuable; but it also makes it easier to make claims without backing them up, and easier to widely spread one-sided accounts before the other party has time to respond. If someone writes a blog post titled 'Rob Bensinger gives babies ugly haircuts', that can end up widely shared on social media (or sorted high in Google's page rankings) and hurt my reputation with others, even if I quickly reply in the comments 'Hey, no I don't.' If I'm too busy with a project to quickly respond, it's even more likely that a lot of people will see the post but never see my response.

For that reason, I'm wary of giving a megaphone to anonymous unverified claims. Below, I've tried to reduce the risk slightly by running comments by others and giving them time to respond (especially where the comment named particular individuals/organizations/projects). I've also edited a number of replies into the same comment as the anonymous submission they address, so that downvoting and direct links can't hide those replies.

4. If people run experiments like this in the future, I encourage them to solicit 'What are we doing right?' feedback along with 'What would you change?' feedback. Knowing your weak spots is important, but if we fall into the trap of treating self-criticism alone as virtuous/clear-sighted/productive, we'll end up poorly calibrated about how well we're actually doing, and we're also likely to miss opportunities to capitalize on and further develop our strengths.

Comments (89)

Comment author: RobBensinger 07 February 2017 09:50:00PM 15 points [-]

Anonymous #6:

If I could wave a magic wand and change the EA community, I'd have everyone constantly posting little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets. So: How to reduce Christianity? How to get people to heaven? Best way to speed up nanomedicine? Best way to reduce ageism? Best way to slow down economic progress?

Comment author: BenHoffman 11 February 2017 02:54:16AM *  3 points [-]

Relevant resources:

Fact Posts: How and Why

The Open Philanthropy Project's Shallow Investigations provide nice template examples.

The Neglected Virtue of Scholarship

Scholarship: How to Do It Efficiently

I'm fairly new to the EA Forum; maybe someone who's been here longer knows of other resources on this site.

Comment author: Richard_Batty 11 February 2017 11:25:09AM 6 points [-]

Even simpler than fact posts and shallow investigations would be skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours - 1 hour for the conversation, 1 hour for writing up.

Comment author: RobBensinger 07 February 2017 10:29:07PM 14 points [-]

Anonymous #22:

I think that mentorship and guidance are lacking and undervalued in the EA community. This seems odd to me. Everyone seems to agree that coordination problems are hard, that we’re not going to solve tough problems without recruiting additional talent, and that outreach in the "right" places would be good. Functionally, however, most individuals in the community, most organizations, and most heads of organizations seem to act as though they can make a difference through brute force alone.

I also don’t get the impression that most EA organizations and heads of EA organizations are keen on meeting or working with new and interested people. People affiliated with EA write many articles about increasing personal productivity; I have yet to read a single article about increasing group effectiveness.

80,000 Hours may be the sole exception to this rule, though I haven’t formally gone through their coaching program, so I don’t know what their pipeline is like. CFAR also seems to be addressing some of these issues, though their workshops are still prohibitively expensive for lots of people, especially newcomers. EA outreach is great, but once people have heard about EA, I don’t think it’s clear what they should do or how they should proceed.

The final reason why I find this odd is because in most professional settings, mentorship is explicitly valued. Even high-status people who have plenty of stuff on their plate will set aside some time for service.

My model for why this is happening has two parts. First, I think there is some selection effect going on; most people in EA are self-starters who came on board and paved their own path. (That's great and all, but do people think that most major organizations and movements got things done solely by a handful of self-starters trying to cooperate?)

Second, I think it might be the case that most people are good at doing cost-benefit analyses on how much impact their pet project will have on the world, but aren’t thinking about the multiplier effect they could have by helping other people be effective. (This is often because they are undervaluing the effectiveness of other, relatively not-high-status people.)

Comment author: Daniel_Eth 08 February 2017 07:40:52AM 6 points [-]

Another possibility is that most people in EA are still pretty young, so they might not feel like they're really in a position to mentor anyone.

Comment author: RobBensinger 07 February 2017 10:48:22PM 10 points [-]

Anonymous #27:

Many practitioners strike me as being dogmatic and closed-minded. They maintain a short internal whitelist of things that are considered 'EA' -- e.g., working at an EA-branded organization, or working directly on AI safety. If an activity isn't on the whitelist, the dogmatic (and sometimes wrong) conclusion is that it must not be highly effective. I think that EA-associated organizations and AI safety are great, but they're not the only approaches that could make a monumental difference. If you find yourself instinctively disagreeing, then you might be in the group I'm talking about. :)

People's natural response should instead be something like: 'Hmm, at first blush this doesn't seem effective to me, and I have a strong prior that most things aren't effective, but maybe there's something here I don't understand yet. Let's see if I can figure out what it is.'

Level of personal involvement in effective altruism: medium-high. But I wouldn't be proud to identify myself as EA.

Comment author: BenHoffman 11 February 2017 02:50:15AM *  1 point [-]

I wish to register my emphatic partial agreement with much of this one, though I do still identify as EA, and have also talked with many people who are quite curious and interested in getting value from learning about new perspectives.

Comment author: RobBensinger 07 February 2017 09:43:55PM *  8 points [-]

Anonymous #1:

My system-1 concerns about EA: the community exhibits a certain amount of conformism, and a general unwillingness to explore new topics.

I think there's some good reasoning behind this: the Pareto rule tells us that obvious things tend to be much more effective than convoluted strategies. However, this also leaves us more vulnerable to unknown unknowns.

The reason I think this is an issue is the general lack of really new proposals in EA discussion posts. I also think that there is a mysterious niche for an EA org dedicated to exploring new ideas, and I have no idea why the niche isn't filled yet. The organization that seemed to me the most promising for dealing with unknown unknowns (CFAR, who are in a unique position to develop new thinking techniques to deal with this) has recently committed to AI risk in a way that compromises the talent they could have directed to innovative EA.

Comment author: RomeoStevens 08 February 2017 09:55:32PM 7 points [-]

a general unwillingness to explore new topics.

This feels really obvious from where I'm sitting, but it's met with incredulity by most EAs I speak with. Applause lights for new ideas, paired with a total lack of engagement when anyone actually talks about new ideas, seem more dangerous than I think we give them credit for.

Comment author: tomstocker 11 February 2017 12:54:28AM 5 points [-]

See the recent pain control brief by Lee Sharkey as an example, or Auren Forrester's stuff on suicide.

Comment author: DonyChristie 21 February 2017 11:38:57PM 0 points [-]

I have been observing the same thing. What could we do to spark new ideas? Perhaps a recurring thread dedicated to it on this forum or Facebook, or perhaps a new Facebook group? A Giving Game for unexplored topics? How can we encourage creativity?

Comment author: RomeoStevens 22 February 2017 03:04:15AM 1 point [-]

Creativity is a learnable skill and also can be encouraged through conversational/group activity norms. http://malcolmocean.com/2016/05/honing-mode-vs-jamming-mode/ https://vimeo.com/89936101

Comment author: RomeoStevens 08 February 2017 10:12:46PM 7 points [-]

Meta: this seems like it was a really valuable exercise based on the quality of the feedback. Thank you for conceiving it, running it, and giving thought to the potential side effects and systematic biases that could affect such a thing. It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Comment author: Fluttershy 09 February 2017 04:14:08AM 1 point [-]

It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?

Comment author: RobBensinger 09 February 2017 04:56:35AM 4 points [-]

I think this would be too open to abuse; see the concerns I raised in the OP.

An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.

Comment author: RomeoStevens 22 February 2017 03:15:04AM 0 points [-]

What about an anonymous forum that was both private and had a strict policy of no object-level names, personal or organizational, so that ideas could be discussed more freely?

Obviously there'd be grey areas around alluding to object-level people and organizations, but I think we can simply elect a king who is reasonable and agree not to squabble about where the line is drawn.

Comment author: RobBensinger 07 February 2017 11:04:24PM 7 points [-]

Anonymous #39:

Level of involvement: I'm not an EA, but I'm EA-adjacent and EA-sympathetic.

EA seems to have picked all the low-hanging fruit and doesn't know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It's hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it's hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.

The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there's not much new to say to change people's minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren't any.)

Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin's concerns), and value drift.

And others (there's overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.

What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there's not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it's stuck as it is, at best -- which is discouraging, but if that's the reality, EAs should accept it.

Comment author: lukeprog 09 February 2017 10:33:08PM *  4 points [-]

I think EA may have picked the lowest-hanging fruit, but there's lots of low-ish hanging fruit left unpicked. For example: who, exactly, should be seen as the beneficiaries aka allkind aka moral patients? EAs disagree about this quite a lot, but there hasn't been that much detailed + broadly informed argument about it inside EA. (This example comes to mind because I'm currently writing a report on it for OpenPhil.)

There are also a great many areas that might be fairly promising, but which haven't been looked into in much breadth+detail yet (AFAIK). The best of these might count as low-ish hanging fruit. E.g.: is there anything to be done about authoritarianism around the world? Might certain kinds of meta-science work (e.g. COS) make future life science and social science work more robust+informative than it is now, providing highly leveraged returns to welfare?

Comment author: Denkenberger 11 February 2017 01:17:08AM 2 points [-]

There is also non-AI global catastrophic risk, like engineered pandemics, and low hanging fruit for dealing with agricultural catastrophes like nuclear winter.

Comment author: Michael_PJ 07 February 2017 11:58:51PM 0 points [-]

I agree that we're in danger of having picked all the low-hanging fruit. But I think there's room to fix this.

Comment author: tomstocker 11 February 2017 12:51:55AM 0 points [-]

What's wrong with low hanging fruit? Not entertaining enough?

Comment author: RobBensinger 07 February 2017 10:03:17PM 7 points [-]

Anonymous #13:

(I used to work at an EA-associated organization.)

People involved in effective altruism should expect to have to think outside the box. The EA movement may be too focused on supporting and endorsing causes that are well-established, unambiguous (i.e., have minimal Knightian uncertainty), reputable, and high in virtue-signalling value.

The default assumption for people in EA should be that at the very top end of effectiveness, we will probably not find causes that have those properties: the places where you personally can make the biggest difference will be relatively neglected, which makes it likely that the cause is difficult to model, lacks reputability and an appearance of virtuousness, lacks a clear track record, and isn't widely endorsed.

Comment author: RobBensinger 07 February 2017 10:41:40PM 6 points [-]

Anonymous #15:

I wouldn't mind seeing more statistical analysis in a Bayesian framework in effective altruism -- with explicit likelihoods and prior distributions, rather than 'my intuitions about this p-value constitute Bayesian evidence for....' If people really like p-values, they can simulate and get posterior predictive ones.
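
As a rough illustration of the kind of analysis being suggested, here is a minimal sketch with purely hypothetical numbers (not drawn from any real charity data): an explicit Beta prior and binomial likelihood for an intervention's success rate, plus a simulated posterior predictive p-value for anyone who still wants a p-value-like quantity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: 14 "successes" out of 40 trials for some intervention.
    successes, trials = 14, 40

    # Explicit prior: Beta(2, 2) over the intervention's success rate.
    prior_a, prior_b = 2.0, 2.0

    # Binomial likelihood + Beta prior -> Beta posterior (conjugate update).
    post_a = prior_a + successes
    post_b = prior_b + (trials - successes)
    print(f"Posterior mean success rate: {post_a / (post_a + post_b):.3f}")

    # Posterior predictive check: simulate replicated datasets from the posterior
    # and compare a test statistic (here, the success count) with the observed one.
    theta_draws = rng.beta(post_a, post_b, size=10_000)
    replicated = rng.binomial(trials, theta_draws)
    ppp = np.mean(replicated >= successes)
    print(f"Posterior predictive p-value, P(replicated >= observed): {ppp:.3f}")

The conjugate Beta-binomial pairing keeps the update analytic; for less tidy likelihoods, the same posterior predictive comparison works with samples from any posterior approximation.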

Comment author: RobBensinger 07 February 2017 10:02:25PM 6 points [-]

Anonymous #12:

I feel that people involved in effective altruism are not very critical of the ways that confirmation bias and hero-of-the-story biases slip into their arguments. It strikes me as... convenient... that one of the biggest problems facing humanity is computers and that a movement popular among Silicon Valley professionals says people can solve it by getting comfortable professional jobs in Silicon Valley and donating some of the money to AI risk groups.

This is obviously not the whole story, as the arguments for taking AI risk seriously are not at all transparently wrong -- though I think EA folks are often overconfident regarding the assumptions they make about the future of AI. Still, it seems worth looking into why this community's agenda ended up meshing so neatly with its members' hobbies. In my more uncharitable moments, I can't help but feel that if the trendy jobs were in potato farming, some in EA would be imploring me to deal with the growing threat of tubers.

(I'm EA-adjacent. I seem to know a lot of you, and I'm sympathetic, but I've never been completely sold. Also, I notice that anonymous commentator #3 said something similar.)

Comment author: RobBensinger 09 February 2017 11:21:44PM *  2 points [-]

Three points worth mentioning in response:

  1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)

  2. 'It's self-serving to think that earning to give is useful' seems like a separate thing from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable', so it isn't surprising to find those beliefs going together.)

  3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world', then AI risk is really the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require skills and background that are unusual for CS/AI. (Ryan Carey wrote in the past: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)

Comment author: tomstocker 11 February 2017 01:33:25AM 0 points [-]

Except that, on point 3, the policies advocated and strategies being tried don't look like attempts to reduce x-risk; they look like attempts to enable AI to work rather than backfire.

Comment author: RobBensinger 07 February 2017 09:46:26PM 6 points [-]

Anonymous #4:

I think that EA as it exists today doesn't provide much value. It focuses mostly on things that are obvious today ('malaria is bad'), providing people a slightly better way to do what they already think is a good idea, rather than making bets on high-impact large-scale interventions. It also places too much emphasis on alleviating suffering, to the exclusion of Kantian, contractarian, etc. conceptions of ethical obligation.

(By this I primarily have in mind that too many EAs are working on changing the subjective experience of chickens and crickets in a particular direction, on the assumption that qualia/subjectivity is a relatively natural kind, that it exhibits commensurate valences across different species, and that these valences track moral importance very closely. It strikes me as more plausible that morality as we know it is, loosely speaking, a human thing -- a phenomenon that's grounded in our brain's motivational systems and directed at achieving cooperate-cooperate equilibria between intelligent agents simulating one another. Since crickets aren't sophisticated enough to form good mental models of humans (or even of other crickets), they just aren't the kinds of physical systems that are likely to be objects of much moral concern, if any. I obviously don't expect all EAs to agree with me on any of these points, but I think far too many EAs rigidly adhere to the same unquestioned views on moral theory, which would be bad enough even if those views were likely to be true.)

The only EA movement-building organization that strikes me as useful for long-run considerations is 80,000 Hours. GiveWell deliberately avoids the kinds of interventions and organizations that are likely to be useful, and Good Ventures doesn't strike me as willing to explore hard enough to do anything interesting. More generally, I feel like a lot of skilled people are now wasting their time on EA (e.g., Oliver Habryka), many of whom would otherwise be working on issues more directly related to AGI.

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

Comment author: RomeoStevens 08 February 2017 10:03:25PM *  7 points [-]

I have spoken with two people in the community who felt they didn't have anyone to turn to who would not throw rationalist-type techniques at them when they were experiencing mental health problems. The fix-it attitude is fairly toxic for many common situations.

If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and when a person is bleeding spoons in front of you, you need a different strategy.

Comment author: Jess_Whittlestone 10 February 2017 10:11:01AM 5 points [-]

If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and when a person is bleeding spoons in front of you, you need a different strategy.

I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique."

(I think CFAR are great and a lot of their techniques are really useful. But I've also spent a bunch of time feeling bad about the fact that I don't seem able to learn and implement these techniques in the way many other people seem to, and it's taken me a long time to realise that trying to 'figure out' how to fix my problems in a very analytical way is very often not what I need.)

Comment author: Fluttershy 09 February 2017 09:02:15AM 4 points [-]

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

I'd be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I've thought a lot about how we can support each other. I'm not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.

Speculation ahoy:

1) I wonder if, say, Bay area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us are different from the things that work for most people in important ways.

2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way in which status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).

3) I wonder how much good work could be done on anyone's mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I've never tried something like this before, but I'd eventually like to.

Well, writing that comment was a journey. I doubt I'll stand by all of what I've written here tomorrow morning, but I do think that I'm correct on some points, and that I'm pointing in a few valuable directions.

Comment author: RobBensinger 07 February 2017 10:49:49PM 14 points [-]

Anonymous #28:

I have really positive feelings towards the effective altruism community on the whole. I think EA is one of the most important ideas out there right now.

However, I think that there is a lot of hostility in the movement towards those of us who started off as 'ineffective altruists,' as opposed to coming from the more typical Silicon Valley perspective. I have a high IQ, but I struggled through college and had to drop out of a STEM program as a result of serious mental health disturbances. After college, I wanted to make a difference, so I've spent my time since then working in crisis homeless shelters. I've broken up fistfights, intervened in heroin overdoses, received 2am death threats from paranoid meth addicts, mopped up the blood from miscarriages. I know that the work I've done isn't as effective as what the Against Malaria Foundation does, but I've still worked really hard to help people, and I've found that my peers in the movement have been very dismissive of it.

I'm really looking to build skills in an area where I can do more effective direct work. I keep hearing that the movement is talent-constrained, but it isn't clearly explained anywhere what the talent constraints are, specifically. I went to EA Global hoping for career advice -- an expensive choice for someone in social work! -- but even talking one-on-one with Ben Todd, I didn't get any actionable advice. There's a lot of advice out there for people who are interested in earning to give, and for anyone who already has great career prospects, but for fuck-ups like me, there doesn't seem to be any advice on skills to develop, how to go back to school, or anything of that kind.

When I've tried so hard to get any actionable advice whatsoever about what I should do, and nobody has any, and yet there's nothing but contempt for people in social work or doing local volunteer work to make a difference -- that's a movement that isn't accessible to me, and isn't accessible to a lot of people, and it makes me want to ragequit. If you don't respect the backbreaking work I've done for years while attempting to help people, that's fine, but please have some kind of halfway viable advice for what I should be doing instead if you're going to dismiss what I'm currently doing as ineffective.

Comment author: Telofy  (EA Profile) 08 February 2017 08:15:16PM 11 points [-]

I want to hug this person so much!

Comment author: BenHoffman 11 February 2017 02:49:08AM 3 points [-]

I want to encourage this person to:

  • Write about what you've learned doing direct work that might be relevant to EAs.
  • Reach out to me if I can be helpful with this in any way.
  • Keep doing the good work you know how to do, if you don't see any better options.
  • Stay alert for high-leverage opportunities to do more, including opportunities you can see and other EAs can't, where additional funding or people or expertise that EAs might have would be helpful.

so much!

Comment author: BenMillwood  (EA Profile) 20 February 2017 04:42:47PM 0 points [-]

"Keep doing the good work you know how to do, if you don't see any better options" still sounds implicitly dismissive to me. It sounds like you believe there are better options, and only a lack of knowledge or vision is keeping this person from identifying them.

Breaking up fistfights and intervening in heroin overdoses to me sound like things that have small-to-moderate chances of preventing catastrophic, permanent harm to the people involved. I don't know how often opportunities like that come up, but is it so hard to imagine they outstrip a GWWC pledger on an average or even substantially above-average salary?

Comment author: RobBensinger 07 February 2017 10:56:13PM 5 points [-]

Anonymous #32:

Level of involvement/familiarity: I work at an EA or EA-associated organization. Please post my five points separately so that people can discuss them without tangling the discussion threads.

Comment author: RobBensinger 07 February 2017 10:59:33PM 8 points [-]

Anonymous #32(d):

There seems to be a sense in effective altruism that the existence of one organization working on a given problem means that the problem is now properly addressed. The thought appears to be: '(Organization) exists, so the space of evaluating (organization function) is filled and the problem is therefore taken care of.'

Organizations are just a few people working on a problem together, with some slightly better infrastructure, stable funding, and time. The problems we're working on are too big for a handful of people to fix, and the fact that a handful of people are working in a given space doesn't suggest that others shouldn't work on it too. I'd like to see more recognition of the conceptual distinction between the existence of an organization with a certain mission, and what exactly is and is not being done to accomplish that mission. We could use more volunteers/partners to EA organizations, or even separate organizations addressing the same issue(s) using a different epistemology.

To encourage this, I'd love to see more support for individuals doing great projects who are better suited to the flexibility of doing work independently of any organization, or who otherwise don't fit a hole in an organization.

Comment author: RobBensinger 07 February 2017 11:00:02PM 7 points [-]

Anonymous #32(e):

I'm generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

There are also really strong founder effects in regional EA groups. That is, locals of one area generally seem to converge on one or two causes or approaches being best. Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what one is already doing and optimize within one's comfort zone in it, as opposed to actually changing minds.

To fix this, I'd want to lower the barriers to changing one's mind by, e.g., translating the arguments for one cause to the culture of a group often associated with another cause, and encouraging thought leaders and community leaders to be more open about the ways in which they are uncertain about their views so that others are comfortable following suit.

Comment author: IanDavidMoss 08 February 2017 02:37:23PM 2 points [-]

This is a great point. In addition to considering "how can we make it easier to get people to change their minds," I think we should also be asking, "is there good that can still be accomplished even when people are not willing to change their minds?" Sometimes social engineering is most effective when it works around people's biases and weaknesses rather than trying to attack them head on.

Comment author: rohinmshah  (EA Profile) 08 February 2017 06:22:29PM 1 point [-]

I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.

TL;DR I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)

In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.

Comment author: Daniel_Eth 08 February 2017 07:37:58AM 1 point [-]

Perhaps one implication of this is that it's better to target movement-growing efforts at students (particularly undergrads), since they're less likely to have already made up their minds?

Comment author: RobBensinger 07 February 2017 10:58:42PM 7 points [-]

Anonymous #32(c):

Note that this point is a little incoherent.

In the absence of proper feedback loops, we will feel like we are succeeding while we are in fact stagnating and/or missing the mark. I'm wary of using this as a fully general critique, but some of the proxies we use for success seem to only loosely track what we actually care about. (See Goodhart's Law.)

For instance, community growth is used as a proxy for success where it might, in fact, be an indicator of concept and community dilution. Engagement on OMfCT, while 'engaging the EA community,' seems to supplant real, critical engagement. (I'm really uncertain of this claim.) With the exception of a few people, often those from the early days of EA, there's little generation of new content, and more meta-fixation on organizations and community critiques.

Tracking quality and novel content is really hard, but it seems far more likely to move EA into the public sphere, academia, etc. than boosting pretty numbers on a graph. We're going to miss a lot of levers for influence if we keep resting on our intellectual laurels.

I'd like to see more essay contests and social rewards for writing, rather than the only response to such writing being blunt critiques of the content itself. I'd also like to see the development of more sophisticated metrics to gauge community development, rather than treating more quantifiable, scalable metrics as our only rigorous option.

Comment author: RobBensinger 07 February 2017 10:58:05PM 7 points [-]

Anonymous #32(b):

The high-value people from the early days of effective altruism are disengaging, and the high-value people who might join are not engaging. There are people who were once quite crucial to the development of EA 'fundamentals' who have since parted ways, and have done so because they are disenchanted with the direction in which they see us heading.

More concretely, I've heard many reports to the effect: 'EA doesn't seem to be the place where the most novel/talented/influential people are gravitating, because there aren't community quality controls.' While inclusivity is really important in most circumstances, it has a downside risk here that we seem to be experiencing. I believe we are likely to lose the interest and enthusiasm of those who are most valuable to our pursuits, because they don't feel like they are around peers, and/or because they don't feel that they are likely to be socially rewarded for their extreme dedication or thoughtfulness.

I think that the community's dip in quality comes in part from the fact that you can get most of the community benefits without being a community benefactor -- e.g. invitations to parties and likes on Facebook. At the same time, one incurs social costs for being more tireless and selfless (e.g., skipping parties to work), for being more willing to express controversial views (e.g., views that conflict with clan norms), or for being more willing to do important but low-status jobs (e.g., office manager, assistant). There's a lot that we'd need to do in order to change this, but as a first step we should be more attentive to the fact that this is happening.

Comment author: Richard_Batty 08 February 2017 01:52:10AM *  7 points [-]

What communities are the most novel/talented/influential people gravitating towards? How are they better?

Comment author: IanDavidMoss 08 February 2017 02:31:43PM 4 points [-]

I upvoted this mostly because it was new information to me, but I have the same questions as Richard.

Comment author: RobBensinger 07 February 2017 10:57:14PM 2 points [-]

Anonymous #32(a):

There's a lot of mistrust between the different 'clans' in the EA community, and a lot of dismissal of the thinking of other clans. As someone who is relatively in touch with all of them, I gauge the mistrust to be overhyped and the dismissal to be uncalibrated.

If we want to hedge against groupthink, we need to try to reconcile our views with those of others who share our goals. At present, we seem to instead be making enemies of those few who are most able and willing to be our allies.

Yes, this is hard. There are lots of inferential gaps, years of contentious history, empirical unknowns, cultural differences between groups following different thought leaders... But this is important.

If I could magically change the EA community tomorrow, I would present people with concepts, both old and new, blinded of clan jargon and authorship, so that individuals can evaluate them for their merits. See http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/

Comment author: RobBensinger 07 February 2017 10:54:26PM 5 points [-]

Anonymous #31:

I work for an effective altruism organization. I'd say that over half of my friends are at least adjacent to the space and talk about EA-ish topics regularly.

The thing I'd most like to change is the general friendliness of first-time encounters with EA. I think EA Global is good about this, but house parties tend to have a very competitive, emotionally exhausting 'everyone is sizing you up' vibe, unless you're already friends with some people from another context.

Next-most-important (and related), probably, is that I would want everyone to proactively express how much confidence they have in their statements in some fashion, through word choice, body language, and tone of voice, rather than providing a numerical description only when explicitly asked. This can prevent false-consensus effects and stop people from assuming that a person must be totally right because they sound so confident.

More selfishly, another thing I wish for is more social events that consist of 10-20 people doing something in the daytime with minimal drugs, rather than 50-100 people at a wild party. I just enjoy small daytime gatherings so much more, and I would like to get closer to the community, but I rarely have the energy for parties.

Comment author: Daniel_Eth 08 February 2017 07:43:12AM 6 points [-]

Where are all these crazy EA parties that I keep reading about? The only EA parties I've heard of were at EA Global.

Comment author: RomeoStevens 08 February 2017 09:59:46PM 3 points [-]

My guess is that we greatly underestimate the value of a higher baseline level of cross-pollination of ideas.

Comment author: RobBensinger 07 February 2017 10:31:28PM 5 points [-]

Anonymous #25:

I'm very involved in the EA community, but at this point, it seems unlikely that I'll ever work at an EA organization, because I can't take the pay cut. I want to start a family and raise kids one day, and to me, this is incompatible with a $50k/year, 12h/day job (at least in the Bay Area).

I'm not sure if earning to give is the best solution to this, but sometimes it seems like the only option available.

Comment author: RobBensinger 07 February 2017 10:13:36PM *  5 points [-]

Anonymous #9:

  • Practice humility towards charities working on systemic change or in related fields like development. They have been doing it for decades. Many would consider saving a few lives from malaria as non-utilitarian compared with changing policies that affect millions.

  • Be mindful of the risk of recruiting narcissists to represent the movement, as this makes a lot of people's first impression of effective altruism a condescending one. ('I am the most effective altruist!') The Bay Area's status culture is a turn-off for people in other Anglophone countries -- see Tall Poppy Syndrome.

  • I don't know the details now, but the level of investment in EA Global strikes me as disproportionate. The money could be invested in a more sustainable way, such as by using it to build up local groups.

  • The EA project that provides funding was originally intended to help decentralize the movement. It morphed into a fund for start-up projects. If there have been updates or accountability reports, I have missed them. In any case, there remains an issue that groups outside EA Hubs are generally volunteer-led and constrained by funding.

  • It is an issue for me that there isn't a way to cancel the GWWC pledge.

  • The organizations need more diversity to raise collective intelligence and avoid some important biases. Look at the election: 94% of black women voted for Clinton, while 70% of white men voted for Trump.

Julia Wise of CEA replied:

"groups outside EA Hubs are generally volunteer-led and constrained by funding."

CEA provides funding to local groups, and we've actually had trouble getting groups to take as much money as we think they should! We encourage any local group to apply for funding here: https://cea-core.typeform.com/to/sJA6kf

"there isn't a way to cancel the GWWC pledge."

The Pledge has never been intended as something that there is no way out of. The FAQ states:

"The Pledge is not a contract and is not legally binding. It is, however, a public declaration of lasting commitment to the cause. It is a promise, or oath, to be made seriously and with every expectation of keeping it. All those who want to become a member of Giving What We Can must make the Pledge and report their income and donations each year.

If someone decides that they can no longer keep the Pledge (for instance due to serious unforeseen circumstances), then they can simply contact us and cease to be a member. They can of course rejoin later if they renew their commitment. Obviously taking the Pledge is something to be considered seriously, but we understand if a member can no longer keep it."

We realize this information wasn't particularly easy to find, so we're in the process of making this kind of thing clearer on our website. We'll also put up a post soon clarifying this and some other common misunderstandings about the Pledge.

Thanks!

Julia

The post in question went up yesterday: Clarifying the GWWC Pledge.

Comment author: RobBensinger 07 February 2017 10:53:19PM 4 points [-]

Anonymous #29:

I worry that Sarah Constantin's article will make an existing problem worse. The effective altruism community is made up of people who understand that politics isn't their comparative advantage. But sour grapes transforms this into 'and also, politics is The Dark Arts and if you do it you're Voldemort.'

GiveWell needs to make charity recommendations based on what's true, not based on what it can sell. But effective altruism as a whole is a political project. If it's to become more than a hobby, it needs to use political power to change the way existing institutions (charities, aid organizations, governments, large companies) allocate their resources. And doing that means politics.

Keeping the political/persuasive branch of effective altruism from influencing or corrupting the truth-seeking branch is important. And expunging transparent lying (which is bad politics and bad persuasion) from the persuasive branch is important. But it's also important that we be willing and able to manipulate the political institutions that hold all the existing power.

Comment author: RobBensinger 07 February 2017 10:45:36PM 4 points [-]

Anonymous #17:

I would cause the effective altruism community to exhibit less risk aversion and less groupthink.

Comment author: RobBensinger 07 February 2017 09:59:46PM *  9 points [-]

Anonymous #11:

I think that a lot of people in effective altruism who focus on animal welfare as a cause area have demonstrated a pattern of doing extraordinarily uncooperative and epistemically terrible things. Examples: calling people bad names for eating meat, in the hope of changing their behavior via social pressure and stigma (rather than argument); comparing meat-eating to conventional murder, in the hope of taking advantage of the noncentral fallacy; 'direct action everywhere,' which often translates in practice into being rude and threatening to people who disagree about various factual questions; ACE basing conclusions on bad leafletting statistics and 'intuition'; threatening to cause public relations mayhem for the event organizers and damage the community's future work if EA Global didn't go vegetarian.

I wouldn't be OK with trying to 'kick animal welfare people out of the movement,' because a) what would that even mean, and b) we're supposed to be a garden of Niceness and Civilization. But it would be great if the EA community actively called out this bullshit when it happened, and demanded that people focusing on this cause met the same high epistemic standards that are demanded of poverty charities (or even the ridiculously high epistemic standards demanded of AI risk people; but that might be too much to ask).

Buck Shlegeris replied:

I think this comment is somewhat uncharitable. Here are a few minor quibbles.

One clarification:

"'direct action everywhere,' which often translates in practice into being rude and threatening to people who disagree about various factual questions"

Note that "Direct Action Everywhere" is the name of an animal rights org, many of whose members are associated with animal EA. (FWIW, I think most animal-focused EAs don't agree with DxE's methods.)

And now some reasonably minor disagreements and explanations, more in the spirit of clarification than making arguments. Note that I don't necessarily agree entirely with all the arguments I'm about to provide.

"calling people bad names for eating meat, in the hope of changing their behavior via social pressure and stigma (rather than argument)"

One unfortunate part of the relationship between animal-focused EAs and other EAs is that many animal-focused EAs don't really feel that co-operatively inclined towards the EA movement as a whole. They feel that many EAs are overconfident and dismiss animal suffering for really dumb reasons. They view EA more as a source of money and talent than as a community to really engage with and learn from.

Imagine if EAs decided to join some political party for some reason and advocate for stuff. The EAs there would probably not be that interested in learning about the philosophies of the party, they're more there to use it. I think that's a relatively reasonable metaphor for how many animal-focused EAs feel about EA. (I think this is pretty bad behavior on the part of the animal rights people.)

If you ask this kind of animal-focused EA why they do things that aren't rational argument, they'll say it's because EAs aren't actually any good at listening to rational arguments, they just think they are, and so it's pointless trying to reason with them.

"comparing meat-eating to conventional murder, in the hope of taking advantage of the noncentral fallacy"

I don't think people who do this are trying to take advantage of the noncentral fallacy, I think they are honestly explaining how bad they think meat-eating is.

"ACE basing conclusions on bad leafletting statistics and 'intuition'"

This is not an accurate summary of anything that happened.

"threatening to cause public relations mayhem for the event organizers and damage the community's future work if EA Global didn't go vegetarian."

Only a few people proposed this, and the majority of animal EA was strongly opposed to this.

"But it would be great if the EA community actively called out this bullshit when it happened, and demanded that people focusing on this cause met the same high epistemic standards that are demanded of poverty charities (or even the ridiculously high epistemic standards demanded of AI risk people; but that might be too much to ask)."

Agreed.

Comment author: Daniel_Eth 08 February 2017 07:35:29AM 8 points [-]

This. As a meat-eating EA who personally does think animal suffering is a big deal, I've found the attitude from some animal rights EAs to be quite annoying. I personally believe that the diet I eat is A) healthier than if I was vegan and B) allows me to be more focussed and productive than if I was vegan, allowing me to do more good overall. I'm more than happy to debate that with anyone who disagrees (and most EAs who are vegan are civil and respect this view), but I have encountered some EAs who refuse to believe that there's any possibility of either A) or B) being true, which feels quite dismissive.

Contrast that attitude to what happened recently at a Los Angeles EA meetup where we went for dinner. Before ordering, I asked around if anyone was vegan since if there was anyone who was, I didn't want to eat meat in front of them and offend them. The person next to me said he was vegan, but that if I wanted meat I should order it since "we're all adults and we want the community to be as inclusive as it can." I decided to get a vegan dish anyway, but having him say that made me feel more welcome.

Comment author: IanDavidMoss 08 February 2017 02:23:48PM 6 points [-]

For what it's worth and as an additional data point, I'm a meat eater and I didn't feel like this was a big problem at EA Global in 2016. For a gathering in which animal advocacy/veganism is so prevalent, I would have thought it really weird if the conference served meat anyway. The vegetarian food provided was delicious, and the one time I went out to dinner with a group and ordered meat, nobody got up in my face about it.

Comment author: Daniel_Eth 09 February 2017 02:59:56AM 2 points [-]

Yes, that was my general impression of EA global. I feel like most of the people who do get upset about meat eaters in EA are only nominally in EA, and largely interact with the community via Facebook.

Comment author: Telofy  (EA Profile) 08 February 2017 08:30:42PM 7 points [-]

Before ordering, I asked around if anyone was vegan since if there was anyone who was, I didn't want to eat meat in front of them and offend them.

Oh wow, thank you! That’s so awesome of you! I greatly appreciate it!

Comment author: RobBensinger 07 February 2017 11:03:25PM 3 points [-]

Anonymous #37:

I would like to see more humility from people involved in effective altruism regarding metaethics, or at least better explanations for why EAs' metaethical positions are what they are. Among smart friends and family members of mine whom I've tried to convince of EA ideas, the most common complaint is, 'But that's not what I think is good!' I think this is a reasonable complaint, and I'd like it if we acknowledged it in more introductory material and in more of our conversations.

More broadly, I think that rather than having a 'lying problem,' EA has an 'epistemic humility problem' -- both around philosophical questions and around empirical ones, and on both the community level and the individual level.

Comment author: Telofy  (EA Profile) 08 February 2017 09:37:48PM 1 point [-]

It's fascinating how diverse the movement is in this regard. I've only found a single moral realist EA who had thought about metaethics and could argue for it. Most EAs around me are antirealists or haven't thought about it.

(I'm antirealist because I don't know any convincing arguments to the contrary.)

Comment author: Ben_Todd 09 February 2017 10:42:18AM 6 points [-]

My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").

Comment author: lukeprog 09 February 2017 10:45:38PM 2 points [-]

Plus some who at least put some non-negligible probability on moral realism, in some kind of moral uncertainty framework.

Comment author: Telofy  (EA Profile) 10 February 2017 04:01:39PM 0 points [-]

Ah, cool! I should read it.

Comment author: RobBensinger 07 February 2017 11:01:17PM 3 points [-]

Anonymous #34:

The way that we talk about policy in the effective altruism community is unsophisticated. I understand that this isn't most EAs' area of expertise, but in that case just running around and saying 'we should really get EAs into policy' is pretty unhelpful. Anyone who is fairly inexperienced in 'policy' could quickly get a community-knowledge comparative advantage just by spending a couple of months doing self-study and having conversations, and could thereby start helpfully orienting our general cries for more work on 'policy.'

To be fair, there are some people doing this. But why not more?

Comment author: RobBensinger 07 February 2017 10:44:10PM *  3 points [-]

Anonymous #16:

Level of involvement: Most of my friends are involved in effective altruism and talk about it regularly.

The extent to which AI topics and MIRI seem to have increased in importance in effective altruism worries me. The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling. This is also a noticeable red flag for groupthink. For example, Holden's explanation of why he has become more favorably disposed to MIRI was pretty unconvincing.

Other Open Phil links about AI: 2015 cause report, 2016 background blog post.

Comment author: jimrandomh 08 February 2017 01:03:43AM 2 points [-]

The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling.

I'm confused by the bit about this not being reflected in organizations' public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.

Comment author: RobBensinger 07 February 2017 10:36:54PM 3 points [-]

Anonymous #3:

Stop talking about AI in EA, at least when doing EA outreach. I keep coming across effective altruism proponents claiming that MIRI is a top charity, when they seem to be writing to people who aren't in the EA community who want to learn more about it. Do they realize that this comes across as very biased? It makes it seem like 'I know a lot about an organization' or 'I have friends in this organization' are EA criteria. Most importantly, talking about AI in doomsday terms sounds kooky. It stands apart from the usual selections, as it's one of the few that's 'high stakes.' I rarely see effective altruists working towards environmental, political, anti-nuclear, or space exploration solutions, which I consider of a similar genre. I lose trust in an effective altruist's evaluations when they evaluate MIRI to be an effective charity.

I've read a few articles and know a few EA people.

Comment author: RobBensinger 07 February 2017 10:23:16PM *  3 points [-]

Anonymous #23:

I used to work for an organization in EA, and I am still quite active in the community.

1 - I've heard people say things like, 'Sure, we say that effective altruism is about global poverty, but -- wink, nod -- that's just what we do to get people in the door so that we can convert them to helping out with AI / animal suffering / (insert weird cause here).' This disturbs me.

2 - In general, I think that EA should be a principle, not a 'movement' or set of organizations. I see no reason that religious charities wouldn't benefit from exposure to EA principles, for example.

3 - I think that the recent post on 'Ra' was in many respects misguided, and that in fact a lack of 'eliteness' (or at least some components of it) is one of the main problems with many EA organizations.

There's a saying, I think from Eliezer, that 'the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.' That saying is true, but people seem to use this as an excuse sometimes. There's not really any reason for EA organizations to be as unprofessional and inefficient as they are. I'm not saying that we should all be nine-to-fivers, but I'd be very excited to see the version of the Centre for Effective Altruism or the Center for Applied Rationality that cared a lot about being an elite team that's really actually trying to get things done, rather than the version that's sorta ad-hoc 'these are the people who showed up.'

4 - Things are currently spread over way too many sources: Facebook, LessWrong, the EA Forum, various personal blogs, etc.

Rob Bensinger replied:

I'd be interested to hear more about examples of things that CEA / CFAR / etc. would do differently if they were 'an elite team that's really actually trying to get things done'; some concreteness there might help clarify what the poster has in mind when they say there are good things about Ra that EA would benefit from cultivating.

For people who haven't read the post, since it keeps coming up in this thread: my impression is that 'Ra' is meant to refer to something like 'impersonal, generic prestige,' a vague drive toward superficially objective-seeming, respectable-seeming things. Quoting Sarah's post:

"Ra involves seeing abstract, impersonal institutions as more legitimate than individuals. For instance, I have the intuition that it is gross and degrading to pay an individual person to clean your house, but less so to hire a maid service, and still less so if a building that belongs to an institution hires a janitor. Institutions can have authority and legitimacy in a way that humans cannot; humans who serve institutions serve Ra.

"Seen through Ra-goggles, giving money to some particular man to spend on the causes he thinks best is weird and disturbing; putting money into a foundation, to exist in perpetuity, is respectable and appropriate. The impression that it is run collectively, by 'the institution' rather than any individual persons, makes it seem more Ra-like, and therefore more appealing. [...]

"If Horus, the far-sighted, kingly bird, represents "clear brightness" and "being the rightful and just ruler", then Ra is a sort of fake version of these qualities. Instead of the light that distinguishes, it’s the light too bright to look at. Instead of clear brightness, it’s smooth brightness.

"Instead of objectivity, excellence, justice, all the "daylight" virtues associated with Horus (what you might also call Apollonian virtues), Ra represents something that’s also shiny and authoritative and has the aesthetic of the daylight virtues, but in an unreal form.

"Instead of science, Ra chooses scientism. Instead of systematization and explicit legibility, Ra chooses an impression of abstract generality which, upon inspection, turns out to be zillions of ad hoc special cases. Instead of impartial justice, Ra chooses a policy of signaling propriety and eliteness and lack of conflicts of interest. Instead of excellence pointed at a goal, Ra chooses virtuosity kept as an ornament.

"(Auden’s version of Apollo is probably Ra imitating the Apollonian virtues. The leadership-oriented, sunnily pragmatic, technological approach to intellectual affairs is not always phony — it’s just that it’s the first to be corrupted by phonies.)

"Horus is not Ra. Horus likes organization, clarity, intelligence, money, excellence, and power — and these things are genuinely valuable. If you want to accomplish big goals, it is perfectly rational to seek them, because they’re force multipliers. Pursuit of force multipliers — that is, pursuit of power — is not inherently Ra. There is nothing Ra-like, for instance, about noticing that software is a fully general force multiplier and trying to invest in or make better software. Ra comes in when you start admiring force multipliers for no specific goal, just because they’re shiny.

"Ra is not the disposition to seek power for some goal, but the disposition to approve of power and to divert it into arbitrariness. It is very much NOT Machiavellian; Machiavelli would think it was foolish."

Nick Tarleton replied:

Huh. I really like and agree with the post about Ra, but also agree that there are things about... being a grown-up organization?... that some EA orgs I'm aware of have been seriously deficient in in the past. I don't know whether some still are; it seems likely a priori. I can see how a focus on avoiding Ra could cause neglect of those things, but I still think avoiding Ra is critically important, it just needs to be done smarter than that. (Calling the thing 'eliteness', or positively associating it with Ra, feels like a serious mistake, though I can't articulate all of my reasons why, other than that it seems likely to encourage focusing on image over substance. I think calling it 'grown-upness' can encourage that as well, and I don't know of a framing that wouldn't (this is an easy thing to mistake image for / do fronting about, and focusing on substance over image seems like an irreducible skill / mental posture), but 'eliteness' feels particularly bad. 'Professionalism' feels in between.)

Anonymous #23 replied:

CEA's internal structure is very ad-hoc and overly focused on event planning and coordination, at least in my view. It also isn't clear that what they're doing is useful. I don't really see the value add of CEA over what Leverage was doing back when Leverage ran the EA Summit.

Most of the cool stuff coming out of the CEA-sphere seems to be done by volunteers anyway. This is not to denigrate their staff, just to question 'Where's the beef?' when you have 20+ people on the team.

For that matter, why do conversations like these mostly happen on meme groups and private Facebook walls instead of being facilitated or supported by CEA?

Looking at the CFAR website, it seems like they have something like 14-15 employees, contractors, and instructors, of which only 3-4 have research as part of their job? That's... not a good ratio for an organization with a mission that relies on research, and maybe this explains why there hasn't been too much cool new content coming out of that sector?

To put things another way, I don't have a sense of rapid progress being made by these organizations, and I suspect that it could be with the right priorities. MIRI certainly has its foibles, but if you look over there it seems like they're much more focused/productive, and it's readily apparent how each of their staffers contributes to the primary objective. Were I to join MIRI, I think I would have a clear sense of, 'Here I am, part of a crack team working to solve this big problem. Here's how we're doing it.' I don't get that sense from any other EA organizations.

As for 'Ra,' it's not that I think fake prestige is good; it's that I think people way overcorrect, shying away from valid prestige in the name of avoiding fake prestige. This might be a reflection of the Bay Area and Oxford 'intellectual techie' crowds more than EA in general, but it's silly any way you slice it.

I want an EA org whose hiring pitch is: 'We're the team that is going to solve (insert problem), and if you join us everyone you work with will be smart, dedicated, and hardworking. We don't pay as much as the private sector, but you'll do a ton more, with better people, more autonomy, and for a better cause. If that sounds good, we'd love to talk to you.'

This is a fairly 'Ra'-flavored pitch, and obviously it has to actually be true, but I think a lot of EAs shy away from aiming for this sort of thing, and instead wind up with a style that actually favors 'scrappiness' and 'we're the ones who showed up.' I bet my pitch gets better people.

Julia Wise of CEA replied:

The best place to read about what CEA is doing, and why we're doing it, is our annual update and fundraising prospectus.

Comment author: RobBensinger 07 February 2017 10:07:37PM *  6 points [-]

Anonymous #14:

I've worked with EA-related organizations, as have many of my friends.

On a system-1 level, I honestly just want to scrap the entire EA project and start over. EAA strikes me as particularly scrappable, but that's just my values.

On a system-2 level, I see the community being eaten by Moloch, roughly as a consequence of Darwinian pressure towards growth conflicting with a need for bona fide epistemic rigor. The reason that we seem to be getting especially eaten by this is that there's a widespread belief that our cause is just, so we're rapidly developing notions that play the functional role of 'heresy' for EA. I've seen well-respected and high-profile organizations going after critics of their work, engaging in internal purges of folk associated with them, and compelling simple lies to manage the optics of criticism. I have direct knowledge of times when such organizations have decided to ignore the content of criticism and instead spin.

This has been a problem since at least 2013. It's not merely the Open Philanthropy Project / Good Ventures / GiveWell drama, Intentional Insights, the pledge, or Sarah Constantin's honesty thing. It's systemic, and those examples are merely so public as to be unavoidable on Facebook.

How to fix this problem? I think the meta level of EA needs to be pruned. Organizations with specific goals (e.g., GiveWell's top recommended charities or the AI folk) seem to have less of a problem, because they have to actually engage with reality somewhere. At worst, they're just wrong. The problem arises much more with GiveWell or the Centre for Effective Altruism and their spawn, because there aren't clear progress metrics. The incentives to 'grow first, be right later' then are strong.

Anonymous #14 added:

[When I mentioned 'the Open Philanthropy Project / Good Ventures / GiveWell drama'] I was referring to the question of whether the Open Philanthropy Project was wrong about fully funding the Against Malaria Foundation. I'm aware that Open Phil is now asserting that their last dollar will go somewhere more effective (in expectation) than AMF. But I don't buy their reasoning, and I find it (evidentially) deeply troubling that they are using it. I suspect strongly that organizations that claim that they should keep money when there are obvious effective things to do right now -- and claim this on the basis of less-than-fully-formal, less-than-fully-public models -- will in fact not end up using the money. More generally, I suspect strongly that organizations with a tendency to push the flock to do one thing while they do something else will not be goal-oriented.

Link to the Open Philanthropy Project's current view on giving now vs. later: http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update

Comment author: RobBensinger 07 February 2017 11:04:48PM 5 points [-]

Anonymous #40:

I'm the leader of a not-very-successful EA student group. I don't get to socialize with people in EA that much.

I wish the community were better at supporting its members in accomplishing things they normally couldn't. I feel like almost everyone just does the things that they normally would. People that enjoy socializing go to meetups (or run meetups); people that enjoy writing blog posts write blog posts; people that enjoy commenting online comment online; etc.

Very few people actually do things that are hard for them, which means that, for example, most people aren't founding new EA charities or thinking original thoughts about charity or career evaluation or any of the other highly valuable things that come out of just a few EA people. And that makes sense; it doesn't work to just force yourself to do this sort of thing. But maybe the right forms of social support and reward could help.

Comment author: RobBensinger 07 February 2017 09:54:32PM *  5 points [-]

Anonymous #10:

I basically agree with Sarah Constantin and Ben Hoffman's critiques. The community is too large and distributed to avoid principal-agent problems and Ribbonfarm-Sociopaths. The more people that are involved, the worse decision-making processes get. So I'd prefer to fragment the community in two, with one focused on projects that are externally-facing and primarily interact with non-EAs, and another that's smaller, denser, and inward-facing, that can be arbitrarily ambitious. The second group has to avoid the forces that attract Sociopaths and Ra, which means it must be relatively small, must be highly socially interconnected, must expand organically, and must have very high standards.

As a related mechanism towards the same end, I would want the community to stop agreeing to disagree on cause areas and how to spend money. The returns for focusing on a cause are superlinear with the amount of thought and resources that go into it. As such, we're paying a tax in epistemics and outcomes in order to have a wider community, which I don't think gives us all that much.

More or less all of the people I interact with are associated with the effective altruism and/or rationality communities. I'm connected to the MIRI/CFAR cluster of people, though I'm not generally directly involved in what they do.

Comment author: RobBensinger 07 February 2017 10:38:52PM 4 points [-]

Anonymous #8:

If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.

If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we'd have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to the Bay, the harder it is to be outside the Bay, because of the lack of community. If everyone cooperated in developing relatively local communities, rather than moving to the Bay, there'd be no need to move to the Bay in the first place. But we, a community that fangirls over 'Meditations on Moloch' (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and prides itself on working together to get shit done, can't even cooperate on this simple thing.

I know people who are heartbroken and depressed because they need community and all their partners are in the Bay and they want to contribute, but they can't get a US visa or they can't afford Bay Area rent levels, so they're stuck friendless and alone in whatever shitty place they were born in. This should not be a hard problem to solve if we apply even a little thought and effort to it; any minimally competent community could pull this off.

Comment author: Michael_PJ 08 February 2017 12:06:17AM 3 points [-]

There's a lot of EA outside the Bay! The Oxford/London cluster in particular is quite nice (although I live there, so I'm biased).

Comment author: Foster 09 February 2017 10:44:24AM 1 point [-]

+1 London community is awesome. Also heard very good things about the Berlin & Vancouver communities.

Comment author: Telofy  (EA Profile) 08 February 2017 08:46:34PM 2 points [-]

I can recommend Berlin! Also biased. ;-)

Comment author: RobBensinger 07 February 2017 11:01:40PM 2 points [-]

Anonymous #35:

I would not feel like people in the EA community would backstab me if the benefit to them outweighed the harm. (Where benefit to them often involves their lofty goals, so it can go under the guise of 'effective altruism.')

Comment author: RobBensinger 07 February 2017 10:46:56PM 2 points [-]

Anonymous #21:

A meta comment: This post has gotten a lot of good replies! Like, Jesus, where are all of these people, and why do I never hear from them otherwise? I assume most of them must be people I've run into somewhere, on Facebook or at parties or conferences or whatever. But I guess they must just not say anything.

I don't agree with everything, obviously, but I see lots of things that I normally wouldn't expect to hear on Facebook. If any of you would like to continue these conversations over email, I've given Rob my contact information and given him permission to share it with parties who ask for it.

Comment author: RobBensinger 07 February 2017 10:30:23PM 2 points [-]

Anonymous #24:

Intentional Insights makes me cringe for its obvious clickbaitiness. I am totally consequentialist, and if it helps to raise the sanity waterline, go for it -- but I'm skeptical that it does. I feel a bit repelled by the low-effort content and shady attention-grabbing techniques; it causes me to feel slightly less respectful of the EA community, and less like I belong there. I hope that's just me.

If you are going to use any shady techniques, fabricated content or praise, non-organic popularity on social networks, or anything along those lines: please do it in a way so that nobody notices! I am not trying to offend you. Just don't let stuff like this happen: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/

I'm a LessWrong and EA person.

Comment author: RobBensinger 07 February 2017 10:37:28PM 3 points [-]

Anonymous #5:

At multiple EA events that I've been to, new people who were interested and expressed curiosity about what to do next were given no advice beyond 'donate money and help spread the message' -- even by prominent EA organizers. My advice to the EA community would be to stop focusing so much on movement-building until (a) EA's epistemics have improved, and (b) EAs have much more developed and solid views (if not an outright consensus) about the movement's goals and strategy.

To that end, I recommend clearly dividing 'cause-neutral EA' from 'cause-specific effectiveness'. The lack of a clear divide contributes to the dilution of what EA means. (Some recent proposals I've seen framed by people as 'EA' have included a non-profit art magazine and a subcommunity organized around fighting Peter Thiel.) If we had a notion of 'in this space/forum/organization, we consider the most effective thing to do given that one cares primarily about art' or 'given that one is focused on ending Alzheimer's, what is the most effective thing to do?', then people could spend more time seriously discussing those questions and less bickering over what counts as 'EA.'

The above is if we want a big-tent approach. I'm also fine with just cause-neutral evaluation and the current-seemingly-most-important-from-a-cause-neutral-standpoint causes being deemed 'EA' and all else clearly being not, no matter who that makes cranky.

Comment author: IanDavidMoss 08 February 2017 02:11:46PM *  1 point [-]

I think I'm the one being called out with the reference to "a non-profit art magazine" being framed as EA-relevant, so I'll respond here. I endorse the commenter's thought that

If we had a notion of 'in this space/forum/organization, we consider the most effective thing to do given that one cares primarily about art' or 'given that one is focused on ending Alzheimer's, what is the most effective thing to do?', then people could spend more time seriously discussing those questions and less bickering over what counts as 'EA.'

If I'm understanding the proposal correctly, it's envisioning something like a reddit-style set of topic-specific subforums in which EA principles could be discussed as they relate to that topic. What I like about that solution is that it allows for the clarity of discussion boundaries that the commenter desires, but still includes discussions of cause-specific effectiveness within the broader umbrella of EA, which helps to facilitate cross-pollination of thinking across causes and from individual causes to the more global cause-neutral space.

Comment author: RobBensinger 07 February 2017 10:45:49PM 1 point [-]

Anonymous #18:

Speaking regarding the Bay Area effective altruism community: There's something about status that could be improved. On the whole, status (and what it gets you) serves a valuable purpose; it's a currency used to reward those producing what the community values. The EA community is doing well at this in that it does largely assign status to people for the right things. At the same time, something about how status is being done is leaving many people feeling insecure and disconnected.

I don't know what the solution is, but you said magic wand, so I'll punt on what the right response should be.

Comment author: RobBensinger 07 February 2017 10:32:14PM 1 point [-]

Anonymous #26:

I probably classify as 'talent,' since I've been repeatedly shortlisted for roles at EA organizations. I'm glad that superior applicants applied and got the jobs! It's been a shame for me personally, though, because a work environment like that would have been ideal for overcoming my longstanding depression.

Ordinarily, I'd just say 'that's life,' but it seems worthwhile to point out the value of the shared ethos, including EA's interest in personal productivity, etc. I'm sure that I'd be achieving exponentially greater benefits for the world after a short time in such a working environment, but it's not easy to find. Maybe someone out there has ideas for how to make it more available!

Comment author: RobBensinger 07 February 2017 09:53:01PM *  1 point [-]

Anonymous #7:

Remove Utilitarianism as a pillar, platform, assumption, or commonly held ethical belief.

Rob Bensinger replied:

If the author reads this, I'd be curious to see a follow-up that says more about what they mean by "utilitarianism". Lots of EAs don't strictly identify with utilitarianism (and utilitarianism isn't generally treated as a pillar of EA), but think it's useful to think in vaguely "utilitarianism-ish" terms: focusing on the consequences of one's actions in deciding what to do; among the consequences, heavily weighing the impact one will have on people's well-being; mostly trying to help people in general, rather than strongly privileging one group of people over another; trying to make ethical decisions in a consistent and principled way; etc. Following those constraints in one's altruistic activities will cause one to mostly "look utilitarian" from the outside even if one's real decision criterion is something else.

Comment author: RobBensinger 07 February 2017 11:00:24PM 0 points [-]

Anonymous #33:

I think people in EA should give up on trying not to seem cultish and just go full-blown weird.

Comment author: RobBensinger 07 February 2017 11:00:48PM 3 points [-]

Anonymous #38:

Anonymous #33's comment makes me angry. I am trying to build a tribe that I can live in while we work on the future; please stop trying to kick people in the face for being normal whenever they get near us.

Comment author: RobBensinger 11 February 2017 03:04:03PM 2 points [-]

There are versions of this I endorse, and versions I don't endorse. Anon #38 seems to be interpreting #33 as saying 'let's be less tolerant of normal people/behaviors', but my initial interpretation of #33 was that they were saying 'let's be more tolerant of weird people/behaviors'.

Comment author: RobBensinger 07 February 2017 11:02:13PM -1 points [-]

Anonymous #36:

I'd like to see more information from the EA community about which organizations are most effective at addressing environmental harm, and at reducing greenhouse gas emissions in particular. More generally, I'd like to see more material from the EA community about which organizations or approaches are most effective in the category in which they fall.

Many EA supporters doubtless accept a broadly utilitarian ethical framework, according to which all activities can be ranked in order of their effect on aggregate welfare. I think the notion of aggregate welfare is incoherent. For that reason, I'm not interested in anyone's opinion about whether reducing CO2 emissions is as cost-effective as saving children from malaria, or whether enabling people to buy better roofs is as cost-effective as reducing the risk of an intelligence explosion in AI.

When I decide that I want to reduce CO2 emissions, however, I would like to know which organizations are reducing emissions the most per dollar. That is a comparison that makes sense! If I am interested in helping to distribute malaria nets, I would like to have some sense of what impact my donation is likely to have. I suspect that there are a lot of people like me out there: not interested in ranking the importance of possible altruistic goals, but interested in information about how to pursue a given altruistic goal effectively.

Level of involvement: I have donated to GiveWell-endorsed charities for several years, though not at the level Peter Singer would recommend. I would not identify myself as a member of the EA movement.

Comment author: RobBensinger 07 February 2017 10:36:23PM -1 points [-]

Anonymous #2:

I'd prefer it if more people in EA were paid on a contract basis, if more people were paid lower salaries, if there were more mechanisms for the transfer of power in organizations (e.g., a 2- or 3-year term limit for CEOs and a maximum age at entry), and if there were more direct donations. Also: better systems to attract young people. More people in biology. More optimism. More willingness to broadcast arguments against working on animal welfare that have not been refuted.

Comment author: Evan_Gaensbauer 22 February 2017 05:48:12AM 1 point [-]

I originally downvoted this comment because some of the suggestions obviously suck, but some of the points here can be improved upon.

I'd prefer it if more people in EA were paid on a contract basis.

There are a lot of effective altruists whose ideas are just as good as anyone's working at an EA non-profit or a university, but who, due to a variety of circumstances, aren't able to land those jobs. Some effective altruists already run Patreons for their blogs, and I think the material coming out of them is decent, especially as they can lend voices independent of institutions on some EA subjects. They also have the time to cover or criticize topics that other effective altruists don't, since those people's effort is taken up by a single research focus.

if more people were paid lower salaries,

Nothing can be done about this criticism if no numbers are given. Criticizing certain individuals for getting paid too much, or certain organizations for paying their staff too much, isn't actionable unless one gets specific. I know EA organizations whose staff, including the founders who decide the budget, essentially get paid minimum wage. On the other hand, GiveWell's cofounders Holden and Elie get paid well into the six figures each year. While I don't much care myself, I've privately chatted with people who perceive this as problematic. Beyond that, there may be some staff at some EA organizations who appear to others to get paid more than they deserve, especially when their salaries could cover one or more full-time salaries for other individuals perceived to be just as competent. That last statement was full of conditionals, I know, but it's something I'm guessing the anonymous commenter was concerned about.

if there were more mechanisms for the transfer of power in organizations (e.g., a 2- or 3-year term limit for CEOs and a maximum age at entry),

Again, they'd need to be specific about which organizations they're talking about. The biggest problem with this comment is that the commenter made broad, vague generalizations which aren't actionable. It's uncomfortable to make specific criticisms of individuals or organizations, yes, but the point of anonymous criticism is to be able to do that, if it's really necessary, with virtual impunity, while bad commentary that amounts to character assassination can easily be written off without a flamewar ensuing or feelings getting hurt.

Anyway, I too can sympathize with demands for more accountability, governance, and oversight at EA organizations. For example, many effective altruists have been concerned time and again with the influence of major organizations like the Centre for Effective Altruism, which, even if it's not their intent, may be perceived to represent and speak for the movement as a whole. This could be a problem. However, while EA need not be a social movement predicated on and mediated through registered NPOs, it by and large is and will continue to be in practice, as most social movements with any degree of centralization are. Making special asks for these organizations to adopt more democratic governance, without posting the suggestions directly to the EA Forum and without making them consistent with how NPOs actually operate in a given jurisdiction, will simply not result in change. These suggestions really stand out for being more specific than anything I've seen anyone else call for, as if this were a desperate problem in EA, when I've seen similar sentiments expressed at most as vague concerns on the EA Forum.

and if there were more direct donations.

The EA Forum and other channels like the 'Effective Altruism' Facebook group appear dominated by fundraisers and by commentary on and from metacharities, because those are literally some of the only appropriate outlets for metacharities to fundraise or publish transparency reports. Indeed, that they're posting material besides fundraisers anywhere beyond their own websites is a good sign, as it's the sort of transparency and peer review the movement at large would demand of metacharities. Nonetheless, between this and the constant chatter about metacharities on social media, I can see how the perception arises that most donations are indirect and go to metacharities. However, this may be illusory. The 2015 EA Survey, the latest for which results are available, shows effective altruists overwhelmingly donate to GiveWell's recommended charities. Data isn't available on the amounts of money self-identified effective altruists are moving to each of these charities, so it's possible that lots of effective altruists earning to give are making primarily indirect donations. Anecdotally, though, this doesn't seem to be the case. If one wants to make that case, and then mount a criticism based on it, one must substantiate it with evidence.