This is in response to Sarah Constantin's recent post about intellectual dishonesty within the EA community.
I roughly agree with Sarah's main object-level points, but I think this essay doesn't sufficiently embody the spirit of cooperative discourse it's trying to promote. I have a lot of thoughts here, but they build off a few existing essays. (There's been a recent attempt to revive Less Wrong as a better locus for high-quality discussion. I don't know whether it's especially succeeded, but I think the concepts behind that intended revival are very important.)
- Why Our Kind Can't Cooperate (Eliezer Yudkowsky)
- A Return to Discussion (Sarah Constantin)
- The Importance of [Less Wrong, OR another Single Conversational Locus] (Emphasis mine) (Anna Salamon)
- The Four Layers of Intellectual Conversation (Eliezer Yudkowsky)
I think it's important to have all of these concepts in context before delving into:
- EA has a lying problem (Sarah Constantin)
I recommend reading all of those. But here's a rough summary of what I consider the important bits. (If you want to actually argue with these bits, please read the actual essays before doing so, so you're engaging with the full substance of the idea)
- Intellectuals and contrarians love to argue and nitpick. This is valuable - it produces novel insights, and keeps us honest. BUT it makes it harder to actually work together to achieve things. We need to understand how working-together works on a deep enough level that we can do so without turning into another random institution that's lost its purpose. (See Why Our Kind... for more)
- Lately, people have tended to talk on social media (Facebook, Tumblr, etc) rather than in formal blogs or forums that encourage longform discussion. This has a few effects. (See A Return to Discussion for more)
- FB discussion is fragmented - it's hard to find everything that's been said on a topic. (And tumblr is even worse)
- It's hard to know whether OTHER people have read a given thing on a topic.
- A related point (not necessarily in "A Return to Discussion") is that social media incentivizes some of the worst kinds of discussion. People share things quickly, without reflection. People read and respond to things in 5-10 minute bursts, without having time to fully digest them.
- Having a single, long-form discussion area that you can expect everyone in an intellectual community to have read makes it much easier to build knowledge. (And most of human progress is due not to humans being smart, but to our ability to stand on the shoulders of giants.) Anna Salamon's "Importance of a Single Conversational Locus" is framed around x-risk, but I think it applies to all aspects of EA: the problems the world faces are so huge that solving them requires a higher caliber of thinking and knowledge-building than we currently have.
- In order to make true intellectual progress, you need people to be able to make critiques. You also need those critics to expect their criticism to in turn be criticized, so that the criticism is high quality. If a critique turns out to be poorly thought out, we need shared, common knowledge of that so that people don't end up rehashing the same debates.
- And finally, (one of) Sarah's points in "EA has a lying problem" is that, in order to be different from other movements and succeed where they failed, EA needs to hold itself to a higher standard than usual. There's been much criticism of, say, Intentional Insights for doing sketchy, truth-bendy things to gain prestige and power. But plenty of "high status" people within the EA community do similar things, even if to a different degree. We need to be aware of that.
I would not argue as strongly as Sarah does that we should never do such things, but it's worth periodically calling each other out on them.
Cooperative Epistemology
So my biggest point here is that we need to be more proactive and mindful about how discussion happens and knowledge gets built within the EA community.
To succeed at our goals:
- EA needs to hold itself to a very high intellectual standard (probably higher, in some sense, than we currently have)
- Factions within EA need to be able to cooperate and share knowledge. Both object-level knowledge (e.g. how cost-effective is AMF?) and meta/epistemic knowledge like:
- How do we evaluate messy studies?
- How do we discuss things online so that people actually put effort into reading and contributing to the discussion?
- What kinds of conversational/debate norms lead people to be more transparent?
- We need to be able to apply all this knowledge to go out and accomplish things, which will probably involve messy political stuff.
I have specific concerns about Sarah's post, which I'll post in a comment when I have a bit more time.
This definitely isn't deliberate in the sense of there being an overarching plot, but it's not distinguishable from the kind of deliberate act where a person sees a thing they should do, or a reason not to write what they're writing, and knowingly ignores it - though I'd agree that I think it's more likely they flinched away unconsciously.
It's worth noting that while Vegan Outreach is not listed as a top charity, it is listed as a standout charity; their page is here: https://animalcharityevaluators.org/research/charity-review/vegan-outreach/
I don't think it is good to laud positive evidence but refer to negative evidence only by saying "there is a lack of evidence", which is what the disclaimers do; in particular, there's no mention of the evidence against there being any effect at all. Nor is it good to refer to studies which are clearly entirely invalid as merely "poor" while still relying on their data. It shouldn't be "there is good evidence" when there's evidence for, and "the evidence is still under debate" when there's evidence against, and there shouldn't be a "gushing praise upfront, provisos later" approach unless you feel the praise is still justified after the provisos. And "have reservations" is pretty weak. These are not good acts from a supposedly neutral evaluator.
As an example of this: until the revision in November 2016, the VO page opened with "Vegan Outreach (VO) engages almost exclusively in a single intervention, leafleting on behalf of farmed animals, which we consider to be among the most effective ways to help animals." Even now I don't think it represents the state of affairs well.
If, in trying to resolve the matter of whether it has high expected impact or not, you went to the main review on leafleting (https://animalcharityevaluators.org/research/interventions/leafleting/), you'd find it began with "The existing evidence on the impact of leafleting is among the strongest bodies of evidence bearing on animal advocacy methods."
This is a very central Not Technically a Lie (http://lesswrong.com/lw/11y/not_technically_lying/); the example of a not-technically-a-lie in that post is using the phrase "The strongest painkiller I have." to refer to something with no painkilling properties when you have no painkillers. I feel this isn't something that should be taken lightly:
"NTL, by contrast, may be too cheap. If I lie about something, I realize that I'm lying and I feel bad that I have to. I may change my behaviour in the future to avoid that. I may realize that it reflects poorly on me as a person. But if I don't technically lie, well, hey! I'm still an honest, upright person and I can thus justify visciously misleading people because at least I'm not technically dishonest."
The disclaimer that has now been added helps, but good judgement should have resulted in an update and correction being transparently issued well before now.
The part that strikes me as most egregious is the deprioritising of an update to the review of what was described in a bunch of places as the most cost-effective (and therefore most effective) intervention. I can't see any reason for that other than that the update would have been negative.
There may not have been conscious intent behind this; I could assume it was the result of poor judgement rather than design. But it did mislead the discourse on effectiveness - that has already happened - and not as a result of people doing the best they could with the information available to them, but as a result of poor decisions given that information. Whether it got more donations or not is unclear: it might have tempted more people into offsetting, but on the other hand each person who did offset would have paid less, because they wouldn't have actually offset themselves.
However something like this is handled is also how a bad actor would be handled, because a bad actor would be indistinguishable from this; if we let this pass without criticism and reform, then bad actors will also pass without criticism and reform.
I think when it comes to responding to some pretty severe stuff of this sort, even if you assume the people involved acted in good faith and just had some rationality failings, more needs to be said than "mistakes were made, we'll assume you're doing the best you can not to make them again". I don't have a grand theory of how people should react here, but it needs to be more than that.
My inclination is to at least frankly express how severe I think it is, even if it's not the nicest thing I could say.
Thanks for the response, it helps me understand where you're coming from.
I agree that the sentence you cite could be better written (and in general ACE could improve, as could we all). I disagree with this though:
At the object level: ACE is distinguishable from a bad actor, for example due to the fact th...