I’m writing this in response to the recent post about Intentional Insights documenting the many ways in which Gleb and the organisation he runs have acted in ways that do not represent EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the matter.
As documented in the Open Letter, Intentional Insights have been systematically misleading in their public communications on many occasions, have astroturfed, and have engaged in morally dubious hiring practices. But what’s been most remarkable about this affair is how little Gleb has been willing to change his actions in light of this documentation. If I had been in his position, I’d have radically revised my activities, or quit my position long ago. Making mistakes is something we all do. But ploughing ahead with your plans despite extensive, deep and well-substantiated criticism of them by many thoughtful members of the EA community — who are telling you not just that your plans are misguided but that they are actively harmful — is not ok. It’s the opposite of what effective altruism stands for.
Because of this, we want to have no association with Intentional Insights. We do not consider them a representative of EA; we do not want any of CEA’s images or logos (including Giving What We Can) used in any of Intentional Insights’ promotional materials; we will not give them a platform at EAG or EAGx events; and we will encourage local group leaders not to have them speak.

Unfortunately, this is not an isolated case. Other examples of bad behaviour we have seen within the community include:
- Someone using the effective altruism brand to solicit “donations” to a company that was not and could not become a non-profit, using text taken from other EA websites
- People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)
- People co-opting the term ‘effective altruism’ to justify activities that they were already doing that clearly wouldn’t be supported by EA reasoning
- Someone making threats of physical violence to another member of the EA community for not supporting their organisation
Problems like these, it seems to me, will only get worse over time. As the community grows, the likelihood of behaviour like this increases, and the costs of such behaviour increase too, because bad actors taint the whole movement.
At the moment, there’s simply no system set up within the community to handle this. What currently happens is: someone starts engaging in bad activities -> bad activities are tolerated for an extended period of time, aggravating many -> repeated public complaints start surfacing, but still no action -> eventually a coalition of community members gather together to publicly denounce the activities. This, it seems to me, is a bad process. It’s bad for actually preventing inappropriate behaviour, because the response to that behaviour is so slow, and because there’s no real sanction that others in the community can impose. It’s bad for the community members who have to spend hundreds of hours of their time documenting the inappropriate behaviour. It’s bad for those who receive the criticism, because they will naturally feel they’ve been ganged up on, and have not had a ‘fair trial’. And it’s bad for onlookers who, not knowing all the details of the situation, will see a fractious movement engaging in witch hunts.
I think that in the mid to long term the consequences of this could be severe. The default outcome for any social movement is to fizzle or fragment, and we should be alert to the ways this could happen to EA. If the number of examples of bad behaviour continues to grow - which we should expect if we let the status quo continue - then this seems like an obvious way in which the EA movement could fail: effective altruism could become known as a community where people engage in morally dubious activities for the greater good, the community could get a reputation for being unpleasant, or the term ‘effective altruism’ could lose the meaning it currently has and come to refer to any attempt to make a difference that makes at least a passing nod to using data.
People often look to CEA to resolve examples of bad behaviour, but so far we have been reluctant to do so. Primarily, we’re worried about overreach: effective altruism is a movement that is much larger than any one organisation, and we have not wanted to create further ‘mob rule’ dynamics by interfering in affairs that people in the community might judge to be none of CEA’s business.
For example, internally we discussed whether we should ban Gleb from the EA Forum, which we help to run, for a three-month period. I think that this response would easily be warranted in light of Intentional Insights’ activities. But, for me, that proposal rang alarm bells of overreach: the EA Forum seems to me to be a community good, and it seems to me that CEA doesn’t have the legitimacy to take that action. But, unfortunately, neither does anyone else.
So I’d like there to exist a more formal process by which we can ensure that people taking action under the banner of effective altruism are acting in accordance with EA values, and strengthening rather than damaging the movement. I think that this is vital if the EA community is going to grow substantially and reach its full potential. If we did this successfully, this process would avoid the perception that EA is run by mob rule; it would ensure that bad behaviour is nipped in the bud, rather than growing to the point where the community spends hundreds of hours dealing with it; and it would give allegedly bad actors a transparent and fair assessment.
To this end, what I’d propose is:
- Creating a set of EA guiding principles
- Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
The existence of these would bring us into alignment with other societies, which usually have some document describing the principles the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.
I’d imagine that, in the first instance, if there was an example of egregious violation of the guiding principles of EA, the community panel would make recommendations to the actor in question. For example, after GiveWell’s astroturfing incident, the organisation self-sanctioned: one of the cofounders was demoted and both cofounders were fined $5000. If the matter couldn't be resolved in this way, then the panel could make recommendations to the rest of the community.
There are a lot of details to be worked out here, but I think that the case for creating something like this is strong. We’re going to try sketching out a proposal, getting as much feedback from the community as possible along the way. I’d be interested in people’s thoughts and reactions in the comments below.
Disclosures: I know personally all of the authors of the Open Letter. Jeff Kaufman is a donor to CEA and is married to Julia Wise, an employee of CEA; Greg Lewis is a donor to CEA and has previously volunteered for CEA; Oliver Habryka is an employee of CEA, but worked on the Open Letter on his personal time. I wasn’t involved in any capacity with the creation of the open letter.
I am sorry to hear that your encounters with feminism have primarily been divisive. My experience has been a bit different, and it may help for me to go into some quick details (OK, actually this post became quite long, which I apologize for - it's probably approaching blog length) and draw parallels with EA.
It took me a year to actually start engaging with EA. I love cost effectiveness, marginal thinking, and rigorously thinking about how to do the most good. My friends and colleagues do as well, but they do not engage with EA. To me, EA appeared, from the outside, to be a group that lays claim to something that is not unique to them, and then looks down on others - a very insular community whose members actively trash and condescend to people who 'are not EA enough'. Other critics have expressed this view as well, and my initial forays into EA did not help this perception - some of my views are not standard EA views, and I had multiple people without economics backgrounds jump on me to explain that I was wrong while condescendingly explaining basic economics to me. This would be fine if they were actually correct to do so, but most of the time the loudest critiques were the most rudimentary and off the mark (for reference, I got my master's in economics and work directly in integrating economic thinking into aid programs, so I have a decent idea of what bad economic thinking looks like). Needless to say, these experiences and others left a sour taste in my mouth, and so I stopped engaging for a while.
This is similar to some people's experiences with feminism - when initially trying to break in, it can seem like a very insular community driven entirely by yelling at people who are not 'feminist enough'. I liked feminist ideals in undergrad, similar to how I enjoyed EA ideals, but avoided it because my perception was that I would not get anything from engaging in feminism because I would be expunged for 'not being feminist enough' (similar to why I avoided EA). I also didn't see a clear reason for engaging, since many of my friends already had feminist ideals without being a direct part of the feminist movement (similar to my friends and colleagues who hold EA ideas without engaging with EA).
The moment that really changed everything came in the first year of my master's, when I was hitting an economic problem that the tools I was using just could not solve - I went to my adviser, complaining that no one seemed to have thought about this problem before, to which he retorted "you know that the feminist economists have been working on this for decades, right? Talk to Professor XYZ and they'll help you". And I did, and the next thing I knew I was getting a specialty in gender analysis of economics - because as I started to get more involved, I realized that behind that initial barrier was a rich world of diverse thinking on a variety of topics. I truly believe now that the most advanced and innovative thinking in economics today comes from feminist economists.
And it wasn't just academic feminists - once I got past that initial barrier, I started looking more into the very groups I originally avoided, and I soon realized that a lot of feminist activists were actively fighting to break down the barrier that I encountered, by advocating for 'calling in' rather than 'calling out' (among other things). Once you're inside, it is a very supportive and tolerant community, and it has helped me (and many others) grow as a person and as a thinker more than anything else in my life has.
Going back to EA, as I mentioned before there is a very similar barrier, in which, to an outside person, a lot of the people 'representing EA' online can be quite nasty to outsiders and divergent views. Once I got past this initial barrier, I realized that the majority of people identifying with EA are actually quite nice, and that there are many in the EA movement who are actively trying to make people's first experience of EA more amicable and to make the movement as a whole more tolerant and respectful of divergent views. It's essentially the EA movement's equivalent of the 'calling-in' problem, and the fact that these discussions are happening makes me very hopeful for the future.
None of this really helps answer the 'what about a formal mechanism' question directly; I just want to try and express my belief that better engagement with social movements like feminism (all of which have dealt with similar problems to the EA movement!) is important. Offhandedly saying that 'feminism failed on this point, so we can't learn from them' without really engaging with members of the feminist movement is not a strong way forward.
In terms of examples off the top of my head of how feminist actors have tried to mitigate the 'bad actor' problem, my first thought is the issue of problematic 'allies'. The response has been to write guidance (less formal version here) on how to be a good ally, and to generally set forth 'community norms' that show up in various places (blogs, posters, listservs, whatever). When someone does not adhere to these norms, in the best of cases you can help them understand why going against the norm is bad and help them be a better ally, and in the worst of cases the movement as a whole at least has some plausible deniability ("don't tell us that person is representative of us, they're clearly breaking all of the norms that we've clearly detailed in all of these places!").
Thanks for providing such a detailed comment as a response to what I must admit was one of my lazier comments.
I should make my critique more nuanced. I don't believe that all feminists, or even the majority of feminists, are involved in or necessarily support the kind of witch-hunts or social shaming that I see occurring on a regular basis. My claim is simply that (a) these witch-hunts occur, (b) they occur regularly, and (c) feminism does not appear (from my admittedly limited external perspective) to have made much progress dealing with this issue.