After seeing some of the debate last month about effective altruism's information-sharing / honesty / criticism norms (see Sarah Constantin's follow-up and replies from Holly Elmore (1, 2), Rob Wiblin (1, 2), Jacy Rees, and Christopher Byrd), I decided to experiment with an approach to getting less filtered feedback. I asked folks over social media to anonymously answer this question:
If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]
I got a lot of high-quality responses, and some people suggested that I cross-post them to the EA Forum for further discussion. I've posted paraphrased versions of many of the responses below. Some cautions:
1. I have no way to verify the identities of most of the respondents, so I can't vouch for the reliability of their impressions or anecdotes. Anonymity removes some incentives that keep people from saying what's on their mind, but it also removes some incentives to be honest, compassionate, thorough, precise, etc. I also have no way of knowing whether a bunch of these submissions come from a single person.
2. This was first shared on my Facebook wall, so the responses are skewed toward GCR-oriented people and other sorts of people I'm more likely to know. (I'm a MIRI employee.)
3. Anonymity makes it less costly to publicly criticize friends and acquaintances, which seems potentially valuable; but it also makes it easier to make claims without backing them up, and easier to widely spread one-sided accounts before the other party has time to respond. If someone writes a blog post titled 'Rob Bensinger gives babies ugly haircuts', that can end up widely shared on social media (or sorted high in Google's page rankings) and hurt my reputation with others, even if I quickly reply in the comments 'Hey, no I don't.' If I'm too busy with a project to quickly respond, it's even more likely that a lot of people will see the post but never see my response.
For that reason, I'm wary of giving a megaphone to anonymous unverified claims. Below, I've tried to reduce the risk slightly by running comments by others and giving them time to respond (especially where a comment named particular individuals, organizations, or projects). I've also edited a number of these replies into the same comment as the anonymous submission they respond to, so that downvoting and direct links can't hide them.
4. If people run experiments like this in the future, I encourage them to solicit 'What are we doing right?' feedback along with 'What would you change?' feedback. Knowing your weak spots is important, but if we fall into the trap of treating self-criticism alone as virtuous/clear-sighted/productive, we'll end up poorly calibrated about how well we're actually doing, and we're also likely to miss opportunities to capitalize on and further develop our strengths.
Three points worth mentioning in response:
1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)
2. 'It's self-serving to think that earning to give is useful' seems like a separate thing from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable', so it isn't surprising to find those beliefs going together.)
3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world', then AI risk would be about the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require skills and background that are unusual for CS/AI. (Ryan Carey has written: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)
Except that, on point 3, the policies advocated and the strategies being tried don't look like attempts to reduce x-risk; they look like attempts to make AI work rather than backfire.