I’m writing this in response to the recent Open Letter about Intentional Insights, which documents the many ways in which Gleb and the organisation he runs have acted in ways that do not represent EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the matter.
As documented in the Open Letter, Intentional Insights have been systematically misleading in their public communications on many occasions, have astroturfed, and have engaged in morally dubious hiring practices. But what’s been most remarkable about this affair is how little Gleb has been willing to change his actions in light of this documentation. If I had been in his position, I’d have radically revised my activities, or quit my position long ago. Making mistakes is something we all do. But ploughing ahead with your plans despite extensive, deep and well-substantiated criticism of them by many thoughtful members of the EA community — who are telling you not just that your plans are misguided but that they are actively harmful — is not ok. It’s the opposite of what effective altruism stands for.
Because of this, we want to have no association with Intentional Insights. We do not consider them a representative of EA, we do not want to have any of CEA’s images or logos (including Giving What We Can) used in any of Intentional Insights’ promotional materials; we will not give them a platform at EAG or EAGx events; and we will encourage local group leaders not to have them speak.
This is not the only problematic behaviour I’ve seen from people acting in or around the EA community. Other examples include:
- Someone using the effective altruism brand to solicit “donations” to a company that was not and could not become a non-profit, using text taken from other EA websites
- People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)
- People co-opting the term ‘effective altruism’ to justify activities that they were already doing that clearly wouldn’t be supported by EA reasoning
- Someone making threats of physical violence to another member of the EA community for not supporting their organisation
Problems like these, it seems to me, will only get worse over time. As the community grows, the likelihood of behaviour like this increases, and so do its costs, because bad actors taint the whole movement.
At the moment, there’s simply no system set up within the community to handle this. What currently happens is: someone starts engaging in bad activities -> bad activities are tolerated for an extended period of time, aggravating many -> repeated public complaints start surfacing, but still no action -> eventually a coalition of community members gather together to publicly denounce the activities. This, it seems to me, is a bad process. It’s bad for actually preventing inappropriate behaviour, because the response to that behaviour is so slow, and because there’s no real sanction that others in the community can impose. It’s bad for the community members who have to spend hundreds of hours of their time documenting the inappropriate behaviour. It’s bad for those who receive the criticism, because they will naturally feel they’ve been ganged up on, and have not had a ‘fair trial’. And it’s bad for onlookers who, not knowing all the details of the situation, will see a fractious movement engaging in witch hunts.
I think that in the medium to long term the consequences of this could be severe. The default outcome for any social movement is to fizzle or fragment, and we should be watching for the ways this could happen to EA. If the number of examples of bad behaviour continues to grow - which we should expect if we let the status quo continue - then this seems like an obvious way in which the EA movement could fail: effective altruism could become known as a community where people engage in morally dubious activities for the greater good; the community could get a reputation for being unpleasant; or the term ‘effective altruism’ could lose the meaning it currently has, with people using it to refer to any attempt to make a difference that makes at least a passing nod to using data.
People often look to CEA to resolve examples of bad behaviour, but so far we have been coy about doing so. Primarily, we’re worried about overreach: effective altruism is a movement that is much larger than any one organisation, and we have not wanted to create further ‘mob rule’ dynamics by interfering in affairs that people in the community might judge to be none of CEA’s business.
For example, internally we discussed whether we should ban Gleb from the EA Forum, which we help to run, for a three-month period. I think that this response would easily be warranted in light of Intentional Insights’ activities. But, for me, that proposal rang alarm bells of overreach: the EA Forum seems to me to be a community good, and it seems to me that CEA doesn’t have the legitimacy to take that action. But, unfortunately, neither does anyone else.
So I’d like there to exist a more formal process by which we can ensure that people taking action under the banner of effective altruism are acting in accordance with EA values, and strengthening rather than damaging the movement. I think that this is vital if the EA community is going to grow substantially and reach its full potential. If we did this successfully, this process would avoid feelings that EA is run by mob rule, it would ensure that bad behaviour is nipped in the bud, rather than growing to the point where the community spends hundreds of hours dealing with it, and it would give allegedly bad actors a transparent and fair assessment.
To this end, what I’d propose is:
- Creating a set of EA guiding principles
- Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
Such a structure would bring us into alignment with other societies, which usually have some document that describes the principles the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.
I’d imagine that, in the first instance, if there was an example of egregious violation of the guiding principles of EA, the community panel would make recommendations to the actor in question. For example, after GiveWell’s astroturfing incident, the organisation self-sanctioned: one of the cofounders was demoted and both cofounders were fined $5000. If the matter couldn't be resolved in this way, then the panel could make recommendations to the rest of the community.
There are a lot of details to be worked out here, but I think that the case for creating something like this is strong. We’re going to try sketching out a proposal, trying to get as much feedback from the community as possible along the way. I’d be interested in people’s thoughts and reactions in the comments below.
Disclosures: I know personally all of the authors of the Open Letter. Jeff Kaufman is a donor to CEA and is married to Julia Wise, an employee of CEA; Greg Lewis is a donor to CEA and has previously volunteered for CEA; Oliver Habryka is an employee of CEA, but worked on the Open Letter on his personal time. I wasn’t involved in any capacity with the creation of the open letter.
"I think that a panel sounds like a good idea, but I'd like to request that someone plays Devil's Advocate for the other side, so we are aware of what issues may arise."
Hi*.
(1) As soon as you write down something formal, bad actors can abuse that process.
Let's define bad actors as people engaged in activities harmful to the EA movement, and then divide bad actors into the categories of 'malicious' and 'incompetent'.
This argument relates to the malicious actors. When dealing with malicious actors, there are generally strong advantages to keeping rules vague, non-public, and commonsense. This is because as soon as you have a rigid, public set of rules, there will be loopholes. This is essentially unavoidable; no organisation has ever succeeded in defining a set of rules that rules out every imaginable bad behaviour.
Of course, once a bad actor exploits a loophole, we could modify the rules to close it. But that tends to look, to both internal and external observers, very arbitrary, as if the central 'people with power' are just picking on particular individuals they don't like. It defeats much of the point of having processes in the first place.
(2) Being publicly shamed in front of literally hundreds or thousands of people is something that most human beings find toxic to the point that they would never knowingly risk it. Accordingly, we should expect that most people caught by this will be unknowingly risking it.
This mostly relates to the incompetent actors, who I believe greatly outnumber the malicious actors. It's particularly bad if people do actually start skipping steps in the process you described** (see below) and jumping straight to community-wide sanctions. The simple fact of the matter is that for every person like Gleb, there are many more people who were incompetent actors, were quietly tapped on the shoulder and told to cease-and-desist, and actually did desist. Of course, we don't hear about those people, which makes it hard to assess their number. Unfortunately, without knowing how many problems were quietly headed off in this way without any drama or fuss, we don't actually know that our current process is a bad one.
What happens to an incompetent actor who is shamed in this way? Presumably, they dissociate from the movement. If they found the EA movement in the first place then they probably know other people on the periphery of the movement and some people in the movement. They talk to those other people, who are generally sympathetic. Those other people dissociate. And so on; we get an organic expanding flow of people leaving the movement. I think it would be extremely easy for us to lose entire cities, or even countries in their early stages, in this way, with no meaningful hope of recovery. Of course, those people probably don't lose interest in EA ideas entirely, so they might keep doing a lot of EA things without staying under the EA 'brand'. Maybe they set up their own movement. And we have fragmented.
I honestly think the above circumstance is just a matter of time; if you have a death penalty then eventually you always end up executing someone innocent. Except that, unlike in the analogy, the innocent martyr can actively recruit others into disavowing the community that treated them so poorly.
(3) Nope, other movements don't do this.
If, as I argued above, this example doesn't prove that our process is bad, where else can we look for evidence that it's bad? One obvious place would be the reference class of other social movements. If other movements had panels like this and also had fewer problems with bad actors relative to their size, that would be moderately strong evidence in favour of it being a good idea.
Unfortunately, despite the claim in the OP that 'the existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and has some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles', I don't think this holds. It's not clear to me exactly what Will meant by 'societies', but the most often-used reference classes for EA are other global social movements: feminism, the LGBT rights movement, the civil rights movement, the animal rights movement, environmentalism, and so on.
Play this game with a friend: write down the set of people who you think best fills the role of the panel Will describes for each of the five movements mentioned above. How many did you agree on? Do you think you would still agree if you'd picked a friend in a different country? What about a friend from a different socio-economic background?
Panels only work if (almost) everybody involved agrees that's where authority lies. I think it's transparently obvious that the five global movements listed above do not agree where global authority lies, even if some subsets (e.g. the German Green youth movement) might agree where local authority lies (the German Green party).
(4) It ossifies our current lack of diversity. Or, if we keep fluidly changing it, it may become emblematic of the problems it's trying to solve.
I think most people have a strong intuition that any such panel should be as diverse and as broadly representative of the views of the EA movement at large as is reasonably possible given size constraints. I agree with this intuition. However, I would like to flag that being as diverse as the EA movement itself, while the correct bar, is really not a very high bar on many metrics. If the EA movement continues to become more diverse and to grow rapidly, which I hope it does, then the panel will soon be skewed away from the actual makeup of the EA community in undesirable ways. For instance, suppose a fourth major cause area gains standing in the movement on a par with Global Poverty/Animal Rights/Far Future over the next five years. That area should be represented on the panel, and by default it wouldn't be. So we need to keep changing the makeup of the panel to match the makeup of EA, but the latter isn't something we can measure particularly scientifically, so there's no obvious Schelling point for agreeing how to do this. It's not even obvious which characteristics we should care about representing.
In many societies that do have such panels, the panels are elected by members of the society. We could do that, but this is extremely messy. It will get political. Some group x will end up feeling unfairly excluded from the inner group of power brokers. At that point you have a powder keg of resentment waiting to explode.
And then group x has an innocent executed. Goodbye group x.
Perhaps most importantly, managing the panel well will take a great deal of time, and the main complaint right now seems to be that too much time was already spent on the motivating case.
*I don't actually know what I think about this, so I don't know if this qualifies as 'Devil's Advocate', more like just 'Advocate'.
**"someone starts engaging in bad activities -> bad activities are tolerated for an extended period of time, aggravating many -> repeated public complaints start surfacing, but still no action -> eventually a coalition of community members gather together to publicly denounce the activities"
General advice for rapidly growing (for-profit) organizations is to focus on your next order of magnitude growth.
It seems not just reasonable but almost certain that the optimal strategy for EA right now (~1K core members?) is different from the optimal strategy for the environmental movement (~10M core members?).