I’m writing this in response to the recent post about Intentional Insights documenting the many ways in which Gleb and the organisation he runs have acted in ways that do not represent EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the matter.
As documented in the Open Letter, Intentional Insights have been systematically misleading in their public communications on many occasions, have astroturfed, and have engaged in morally dubious hiring practices. But what’s been most remarkable about this affair is how little Gleb has been willing to change his actions in light of this documentation. If I had been in his position, I’d have radically revised my activities, or quit my position long ago. Making mistakes is something we all do. But ploughing ahead with your plans despite extensive, deep and well-substantiated criticism of them by many thoughtful members of the EA community — who are telling you not just that your plans are misguided but that they are actively harmful — is not ok. It’s the opposite of what effective altruism stands for.
Because of this, we want to have no association with Intentional Insights. We do not consider them a representative of EA; we do not want any of CEA’s images or logos (including Giving What We Can’s) used in any of Intentional Insights’ promotional materials; we will not give them a platform at EAG or EAGx events; and we will encourage local group leaders not to have them speak.
Nor is this an isolated case. Other examples of bad behaviour I’ve heard of within the EA community include:
- Someone using the effective altruism brand to solicit “donations” to a company that was not and could not become a non-profit, using text taken from other EA websites
- People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)
- People co-opting the term ‘effective altruism’ to justify activities that they were already doing that clearly wouldn’t be supported by EA reasoning
- Someone making threats of physical violence to another member of the EA community for not supporting their organisation
Problems like these, it seems to me, will only get worse over time. As the community grows, the likelihood of behaviour like this increases, and its costs increase too, because bad actors taint the whole movement.
At the moment, there’s simply no system set up within the community to handle this. What currently happens is: someone starts engaging in bad activities -> bad activities are tolerated for an extended period of time, aggravating many -> repeated public complaints start surfacing, but still no action -> eventually a coalition of community members gathers together to publicly denounce the activities. This, it seems to me, is a bad process. It’s bad for actually preventing inappropriate behaviour, because the response to that behaviour is so slow, and because there’s no real sanction that others in the community can impose. It’s bad for the community members who have to spend hundreds of hours of their time documenting the inappropriate behaviour. It’s bad for those who receive the criticism, because they will naturally feel they’ve been ganged up on, and have not had a ‘fair trial’. And it’s bad for onlookers who, not knowing all the details of the situation, will see a fractious movement engaging in witch hunts.
I think that in the mid to long term the consequences of this could be severe. The default outcome for any social movement is to fizzle or fragment, and we should be on the lookout for ways this could happen with EA. If the number of examples of bad behaviour continues to grow - which we should expect if we let the status quo continue - then this seems like an obvious way in which the EA movement could fail: effective altruism could become known as a community where people engage in morally dubious activities for the greater good; the community could get a reputation for being unpleasant; or the term ‘effective altruism’ could lose its current meaning, and come to refer to any attempt to make a difference that makes at least a passing nod to using data.
People often look to CEA to address instances of bad behaviour, but so far we have been reluctant to do so. Primarily, we’re worried about overreach: effective altruism is a movement that is much larger than any one organisation, and we have not wanted to create further ‘mob rule’ dynamics by interfering in affairs that people in the community might judge to be none of CEA’s business.
For example, internally we discussed whether we should ban Gleb from the EA Forum, which we help to run, for a three-month period. I think that this response would easily be warranted in light of Intentional Insights’ activities. But, for me, that proposal rang alarm bells of overreach: the EA Forum is a community good, and CEA doesn’t seem to me to have the legitimacy to take that action unilaterally. But, unfortunately, neither does anyone else.
So I’d like there to exist a more formal process by which we can ensure that people taking action under the banner of effective altruism are acting in accordance with EA values, and strengthening rather than damaging the movement. I think that this is vital if the EA community is going to grow substantially and reach its full potential. If we did this successfully, this process would avoid feelings that EA is run by mob rule, it would ensure that bad behaviour is nipped in the bud, rather than growing to the point where the community spends hundreds of hours dealing with it, and it would give allegedly bad actors a transparent and fair assessment.
To this end, what I’d propose is:
- Creating a set of EA guiding principles
- Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
This would bring us into line with other societies, which usually have some document describing the principles the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.
I’d imagine that, in the first instance, if there was an example of egregious violation of the guiding principles of EA, the community panel would make recommendations to the actor in question. For example, after GiveWell’s astroturfing incident, the organisation self-sanctioned: one of the cofounders was demoted and both cofounders were fined $5000. If the matter couldn't be resolved in this way, then the panel could make recommendations to the rest of the community.
There are a lot of details to be worked out here, but I think that the case for creating something like this is strong. We’re going to try sketching out a proposal, trying to get as much feedback from the community as possible along the way. I’d be interested in people’s thoughts and reactions in the comments below.
Disclosures: I know personally all of the authors of the Open Letter. Jeff Kaufman is a donor to CEA and is married to Julia Wise, an employee of CEA; Greg Lewis is a donor to CEA and has previously volunteered for CEA; Oliver Habryka is an employee of CEA, but worked on the Open Letter on his personal time. I wasn’t involved in any capacity with the creation of the open letter.
Thanks for providing such a detailed comment as a response to what I must admit was one of my lazier comments.
I should make my critique more nuanced. I don't believe that all feminists, or even the majority of feminists, are involved in or necessarily support the kind of witch-hunts or social shaming that I see occurring on a regular basis. My claim is simply that: (a) these witch-hunts occur; (b) they occur regularly; and (c) feminism does not appear (from my admittedly limited external perspective) to have made much progress in dealing with this issue. That said, I will definitely read the "calling in" vs. "calling out" article.
I have to agree with AGB that most feminists seem relatively unconcerned with these issues. They may point out that this is not all feminists, or even most feminists, and that it is unfair to hold them responsible for the actions of others; both of which are true. Nonetheless, they generally fail to acknowledge that this is a systematic issue within feminism, or that these incidents tend to occur more often, and with more viciousness, within feminism than in many other movements. Furthermore, if incidents of this kind occurred within EA at even a fraction of the rate at which they seem to occur within feminism, I would be incredibly concerned. This holds even if the amount of drama within feminism is "normal" - a "normal" amount of drama would still not be good for the movement.
After all, they reason, some incidents will always occur in any movement once it reaches a certain size. I, and many other observers, think that, on the contrary, this is a problem that is especially bad for feminism, and that it is a direct result of several ideas existing within the movement without any corresponding counter-balancing ideas. Nonetheless, I cannot provide any proof of this, because it is not the kind of statement that can easily be verified.
Let's take, for example, the idea that feminism's poor external optics are the fault of the patriarchy. It is undoubtedly true that much of the criticism of feminism, especially from the right, is extremely unfair and motivated by the fact that feminism challenges certain "patriarchal" ideas, such as traditional ideas of the family and gender. On the other hand, this can be used as a fully general purpose response to all criticism, and it makes it very easy for people to dismiss criticism. Within EA, by contrast, there is a social norm that it is acceptable to play Devil's Advocate with any criticism, without anyone doubting that you are on their side.
Another idea is the concept of "mansplaining". I'm sure that many men do come into conversations with an extremely limited view or understanding. But again, this serves as a fully general purpose counter-argument and it would be against EA social norms to use an ad hominem to dismiss someone's argument just for being somewhat naive.
So even though many EAs may believe that the current criticism is largely poor quality and motivated by entrenched interests or "emotional" arguments and even though many EAs may fail to intellectually respect their opponents (as per your critiques), the current social norms act to limit the damage by ensuring a minimum standard of decency.
Regarding economics, you are probably right that many EAs think they know more economics than they actually do. I make this mistake sometimes. This is definitely a problem - but at least it is a better situation than in most other social movements. I continually hear critiques of capitalism from people with no economics knowledge whatsoever (some people with economics knowledge also critique capitalism, but they are drowned out by the mass of people without such knowledge). EA seems to have a high enough proportion of economics majors, or otherwise quantitative people, that a large proportion will have had enough exposure to economics to produce at least a shallow understanding. This has its disadvantages, but I still consider it superior to having no knowledge at all.
I'm not convinced that the "Ally" formula is an example of successful mitigation. I imagine that some people have certainly been bad allies in the past, which has motivated the formula, but I am also worried that it will harm the intellectual diversity of the feminist movement by limiting the ability of allies to defend views that don't match those of the movement as a whole.