
This is in response to Sarah Constantin's recent post about intellectual dishonesty within the EA community.

I roughly agree with Sarah's main object-level points, but I think the essay doesn't sufficiently embody the spirit of cooperative discourse it's trying to promote. I have a lot of thoughts here, but they build off a few existing essays. (There's been a recent revival over on Less Wrong attempting to make it a better locus for high-quality discussion. I don't know if it's especially succeeded, but I think the concepts behind that intended revival are very important.)

  1. Why Our Kind Can't Cooperate (Eliezer Yudkowsky)
  2. A Return to Discussion (Sarah Constantin)
  3. The Importance of [Less Wrong, OR another Single Conversational Locus] (Emphasis mine) (Anna Salamon)
  4. The Four Layers of Intellectual Conversation (Eliezer Yudkowsky)

    I think it's important to have all four concepts in context before delving into:

  5. EA has a lying problem (Sarah Constantin)

I recommend reading all of those. But here's a rough summary of what I consider the important bits. (If you want to actually argue with these bits, please read the actual essays before doing so, so you're engaging with the full substance of the idea)

  • Intellectuals and contrarians love to argue and nitpick. This is valuable - it produces novel insights, and keeps us honest. BUT it makes it harder to actually work together to achieve things. We need to understand how working-together works on a deep enough level that we can do so without turning into another random institution that's lost its purpose. (See Why Our Kind... for more)

  • Lately, people have tended to talk on social media (Facebook, Tumblr, etc) rather than in formal blogs or forums that encourage longform discussion. This has a few effects. (See A Return to Discussion for more)
    1. FB discussion is fragmented - it's hard to find everything that's been said on a topic. (And Tumblr is even worse.)
    2. It's hard to know whether OTHER people have read a given thing on a topic.
    3. A related point (not necessarily in "A Return to Discussion") is that social media incentivizes some of the worst kinds of discussion. People share things quickly, without reflection. People read and respond to things in 5-10 minute bursts, without having time to fully digest them.

  • Having a single, long-form discussion area that you can expect everyone in an intellectual community to have read makes it much easier to build knowledge. (And most of human progress is due, not to humans being smart, but to being able to stand on the shoulders of giants.) Anna Salamon's "Importance of a Single Conversational Locus" is framed around x-risk, but I think it applies to all aspects of EA: the problems the world faces are so huge that they need a higher caliber of thinking and knowledge-building than we currently have in order to solve them.

  • In order to make true intellectual progress, you need people to be able to make critiques. You also need those critics to expect their criticism to in turn be criticized, so that the criticism is high quality. If a critique turns out to be poorly thought out, we need shared, common knowledge of that so that people don't end up rehashing the same debates.

  • And finally, (one of) Sarah's points in "EA has a lying problem" is that, in order to be different from other movements and succeed where they failed, EA needs to hold itself to a higher standard than usual. There's been much criticism of, say, Intentional Insights for doing sketchy, truth-bendy things to gain prestige and power. But plenty of "high status" people within the EA community do similar things, even if to a different degree. We need to be aware of that.

    I would not argue as strongly as Sarah does that we shouldn't do it at all, but it's worth periodically calling each other out on it.

Cooperative Epistemology

So my biggest point here is that we need to be more proactive and mindful about how discussion and knowledge are built within the EA community.

To succeed at our goals:

  • EA needs to hold itself to a very high intellectual standard (higher than we currently have, probably, at least in some sense).
  • Factions within EA need to be able to cooperate and share knowledge - both object-level knowledge (e.g. how cost-effective is AMF?) and meta/epistemic knowledge like:
    1. How do we evaluate messy studies?
    2. How do we discuss things online so that people actually put effort into reading and contributing to the discussion?
    3. What kinds of conversational/debate norms lead people to be more transparent?
  • We need to be able to apply all this knowledge to go out and accomplish things, which will probably involve messy political stuff.

I have specific concerns about Sarah's post, which I'll post in a comment when I have a bit more time.

 

Comments

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

This is completely true.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

There are at least a dozen people for whom this is true.

Evan_Gaensbauer:
I feel like this is true for me too. I'd guess I've got more spare time on my hands than you guys. I also don't currently work for any EA charities. It's really hard to make your beliefs pay rent when you're in near mode and you're constantly worried that if you screw up a criticism you'll lose connections and get ostracized, or you'll hurt the trajectory of a cause or charity you like by association, because as much as we like to say we're debiased, a lot of the time affective rationalizations sneak into our motivations.

Well, we all come from different walks of life, and a lot of us haven't been in communities trying to be as intellectually honest and epistemically virtuous as EA tries to be. It's hard to overcome that habit of keeping our guard up, because everywhere else we go in life our new ideas are treated utterly uncharitably - worse than anything in EA on a regular day. It's hard to unlearn those patterns. We as a community need to find ways to trust each other more. But that takes a lot of work, and will take a while.

In the meantime, I don't have a lot to lose by criticizing EA, or at least I can take a hit pretty well. Maybe there are social opportunity costs - things I won't be able to do in the future if I became low-status - but I'm confident I'm the sort of person who can create new opportunities for himself. So I'm not worried about me, and I don't think anyone else should be either.

I've never had a cause selection. Honestly, it felt weird to talk about, but this whole model-uncertainty thing people are going for between causes now is something I've implicitly grasped the whole time. I never understood why everyone was so confident in their views on causes when a bunch of this stuff requires figuring out things about consciousness, or the value of future lives, which seem like philosophically and historically mind-boggling puzzles to me. If you go to my EAHub profile, you'll notice the biggest donation I made was in 2014 for $1

Benjamin_Todd:
Interesting. Which groups could we learn the most from?

jsteinhardt:
I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm. My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish so can't speak directly. I heard that Bridgewater did a bunch of stuff related to feedback/criticism but again don't know a ton about it. Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don't).

Daniel_Dewey:
Thanks! One guess is that ritualization in academia helps with this -- if you say something in a talk or paper, you ritually invite criticism, whereas I'd be surprised to see people apply the same norms to e.g. a prominent researcher posting on facebook. (Maybe they should apply those norms, but I'd guess they don't.) Unfortunately, it's not obvious how to get the same benefits in EA.

Brian_Tomasik:
I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's). One approach for dealing with this could be to provide a forum for anonymous posts + comments.

Peter Wildeford:
I think it really depends on who you criticize. I perceive criticizing particular people or organizations as having significant social costs (though I'm not saying whether those costs are merited or not).

jsteinhardt:
In my post, I said I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

Daniel_Dewey:
This is a great point -- thanks, Jacob! I think I tend to expect more from people when they are critical -- i.e. I'm fine with a compliment/agreement that someone spent 2 minutes on, but expect critics to "do their homework", and if a complimenter and a critic were equally underinformed/unthoughtful, I'd judge the critic more harshly. This seems bad!

One response is "poorly thought-through criticism can spread through networks; even if it's responded to in one place, people cache and repeat it other places where it's not responded to, and that's harmful." This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!

Proposed responses (for me, though others could adopt them if they thought they're good ideas):

  • For now, assume that all critics are in good faith. (If we have / end up with a bad-critic problem, these responses need to be revised; I'll assume for now that the asymmetry of critique is a bigger problem.)
  • When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. "Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you! [response to critique]")
  • Agree or disagree with critiques in a straightforward way, instead of saying e.g. "you should have thought about this harder".
  • Couch compliments the way I would couch critiques.
  • Try to notice my disagreements with compliments, and comment on them if I disagree.

Thoughts?

RyanCarey:
Not sure how much this helps because if the criticism is thoughtful and you fail to engage with it, you're still being rude and missing an opportunity, whether or not you say some magic words.

Daniel_Dewey:
I agree that if engagement with the critique doesn't follow those words, they're not helpful :) Editing my post to clarify that.

Issue 1:

The title and tone of this post are playing with fire, i.e. courting controversy, in a way that (I think, but am not sure) undermines its goals.

A: there's the fact that describing these as "lying" seems approximately as true as the first two claims. In a post about holding ourselves to high standards, this is kind of a big deal. Others have mentioned this.

B: Personal integrity/honesty is only one element you need to have a good epistemic culture. Other elements you need include trust, and respect for people's time, attention, and emotions.

Just as every decision to bend the truth has consequences, every decision to inflame emotions has consequences, and these can be just as damaging.

I assume (hope) it was a deliberate choice to use a provocative title that'd grab attention. I think part of the goal was to punish the EA Establishment for not responding well to criticism and attempting to control said criticism.

That may not be a bad choice - maybe it's even necessary - but it's a questionable one.

The default world (see: modern politics and news) is a race to the bottom of outrage and manufactured controversy. People love controversy. I lo... (read more)

Hi everyone! I’m here to formally respond to Sarah’s article, on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions, and reposting appears to be discouraged. I’ve decided to post here on the EA forum (as it tends to be the central meeting place for EAs), and will try to direct people from other places to this longer response.

Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques about our work, we take it very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the... (read more)

After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be...until we feel more confident that it is not misleading to those unfamiliar with cost effectiveness calculations

When there are debates about how readers are interpreting text, or potentially being misled by it, empirical testing (e.g. having Mechanical Turk readers view a page and then answer questions about the topic where they might be misled) is a powerful tool (and also avoids reliance on staff intuitions that might be affected by a curse of knowledge). See here for a recent successful example.

EricHerboso:
Well said, Erika. I'm happy with most of these changes, though I'm sad that we have had to remove the impact calculator in order to ensure others don't get the wrong idea about how seriously such estimates should be taken. Thankfully, Allison plans on implementing a replacement for it at some point using the Guesstimate platform. For those interested in seeing the exact changes ACE has made to the site, see the disclaimer at the top of the leafleting intervention page and the updates to our mistakes page.

JBeshir:
Thank you for the response, and I'm glad that it's being improved and that there seems to be an honest interest in doing better.

I feel "ensure others don't get the wrong idea about how seriously such estimates should be taken" is understating things - it should be reasonable for people to ascribe some non-zero level of meaning to issued estimates, and especially it should be that using them to compare between charities doesn't lead you massively astray. If it's "the wrong idea" to look at an estimate at all, because it isn't the true best reasoned expectation of results the evaluator has, I think the error was in the estimate rather than in expectation management, and I find the deflection of responsibility here to the people who took ACE at all seriously concerning. The solution here shouldn't be for people to trust things others say less in general.

Compare, say, GiveWell's analysis of LLINs (http://www.givewell.org/international/technical/programs/insecticide-treated-nets#HowcosteffectiveisLLINdistribution); it's very rough and the numbers shouldn't be assumed to be close to right (and, responsibly, they describe all this), but their methodology makes them viable for comparison purposes.

Cost-effectiveness is important - it is the measure of where putting your money does the most good and how much good you can expect to do, and a cost-effectiveness estimate fully inclusive of risks and data issues is basically what one arrives at when one determines what is effective. Even if you use other selection strategies for top charities, incorrect cost-effectiveness estimates are not good.

EricHerboso:
I agree: it is indeed reasonable for people to have read our estimates the way they did. But when I said that we don't want others to "get the wrong idea", I'm not claiming that the readers were at fault. I'm claiming that the ACE communications staff was at fault.

Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time. Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.

Per your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way that we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn't a good way for others to use the tool in the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.

You said "I think the error was in the estimate rather than in expectation management" because you felt the estimate itself wasn't good; but I hope this makes it more clear that we feel that the way we were internally using upper and lower bounds was good; it's just that the way we were talking a

Dawn Drescher:
Fwiw, I’ve been following ACE closely the past years, and always felt like I was the one taking cost-effectiveness estimates too literally, and ACE was time after time continually and tirelessly imploring me not to.

JBeshir:
This all makes sense, and I think it is a very reasonable perspective. I hope this ongoing process goes well.

Jeff Kaufman:
Is this policy available anywhere? Looking on your site I'm finding only a different Social Media Policy that looks like maybe it's intended for people outside ACE considering posting on ACE's fb wall?

Raemon:
Major props for the response. Your new social media policy sounds probably-wise. :)

Brian_Tomasik:
I find such social-media policies quite unfortunate. :) I understand that they may be necessary in a world where political opponents can mine for the worst possible quotes, but such policies also reduce the speed and depth of engagement in discussions and reduce the human-ness of an organization. I don't blame ACE (or GiveWell, or others who have to face these issues). The problem seems more to come from (a) quoting out of context and (b) that even when things are quoted in context, one "off" statement from an individual can stick in people's minds more strongly than tons of non-bad statements do. There's not an easy answer, but it would be nice if we could cultivate an environment in which people aren't afraid to speak their minds. I would not want to work for an organization that restricted what I can say (ignoring stuff about proprietary company information, etc.).

Raemon:
I agree that these are tradeoffs and that that's very sad. I don't have a very strong opinion on the overall net-balance of the policy. But (it sounds like we both agree?) that they are probably a necessary evil for organizations like this.

Brian_Tomasik:
I'm not sure what to do. :) I think different people/organizations do it differently based on what they're most comfortable with. There's a certain credibility that comes from not asking your employees to toe a party line. Such organizations are usually less mainstream but also have a more authentic feel to them. I discussed this a bit more here.

erikaalonso:
I share the same concerns about internal social media policies, especially when it comes to stifling discussion staff members would have otherwise engaged in. The main reason I rarely engage in EA discussions is that I'm afraid what I write will be mistaken as representative of my employer—not just in substance, but also tone/sophistication. I think it's fairly standard now for organizations to request that employees include a disclaimer when engaging in work-related conversations—something like "these are my views and not necessarily those of my employer". That seems reasonable to include in the first comment, but becomes cumbersome in subsequent responses. And in instances where comments are curated without context, the disclaimer might not be included at all. Also, I wonder how much the disclaimer helps someone distinguish the employee from the organization? For highly-visible people in leadership roles, I suspect their views are often conflated with the views of the organization.

Brian_Tomasik:
I agree with these concerns. :) My own stance on this issue is driven more by my personality and "virtue ethics" kinds of impulses than by a thorough evaluation of the costs and benefits. Given that I, e.g., talk openly about (minuscule amounts of) suffering by video-game characters, it's clear that I'm on the "don't worry about the PR repercussions of sharing your views" side of the spectrum. I've noticed the proliferation of disclaimers about not speaking for one's employer. I personally find them cumbersome (and don't usually use them) because it seems to me rare that anyone does actually speak for one's employer. (That usually only happens with big announcements like the one you posted above.) But presumably other people have been burned here in the past, which is why it's done.

Issue 2: Running critical pieces by the people you're criticizing is necessary, if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think "wait a week" is probably a reasonable norm)

Reasons and considerations:

a) they may have already seen and engaged with a similar form of criticism before. If that's the case, it should be the critic's responsibility to read up on it, and make sure their criticism is saying something new. Or, that it's addressing the latest, best thoughts on the part of the person-being-criticized. (See Eliezer's 4 layers of criticism)

b) you may not understand their reasons well, especially with something off-the-cuff on Facebook. The principle of charity is crucial because our natural tendency is to engage with weaker versions of ideas.

c) you may be wrong about things. Because our kind have trouble cooperating (we tend to criticize a lot), it's important for criticism of Things We Are Currently Trying to Coordinate On to be made as accurate as possible through private channels before unleashing the storm.

Controversial things are intrinsically "public facing" (see: Scott Alexander'... (read more)

I note Constantin's post, first, was extraordinarily uncharitable and inflammatory (e.g. the title for the section discussing Wiblin's remark, "Keeping promises as a symptom of Autism", among many others); second, these errors were part of a deliberate strategy to 'inflame people against EA'; third, this strategy is hypocritical given the author's (professed) objections to any hint of 'exploitative communication'. Any of these in isolation is regrettable. In concert they are contemptible.

{ETA: In a followup post Constantin states that her previous comments suggestive of bad faith were an "emotional outburst" that did not reflect her actual intentions either at the time of writing or subsequently.}

My view is that, akin to Hofstadter's law, virtues of integrity are undervalued even when people try to account for undervaluing them: for this reason I advocate all-but-lexical priority for candour, integrity, etc. over immediate benefits. The degree of priority these things should be accorded seems a topic on which reasonable people can disagree: I recommend Elmore's remarks as a persuasive defence of according these virtues a lower weight.

'Lower', however, s... (read more)

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity eva... (read more)

One bit of progress on this front is Open Phil and GiveWell starting to make public and private predictions related to grants to improve their forecasting about outcomes, and create track records around that.

There is significant room for other EA organizations to adopt this practice in their own areas (and apply it more broadly, e.g. regarding future evaluations of their strategy, etc).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

This is part of my thinking behind promoting donor lotteries: by increasing the effective size of donors, it lets them more carefully evaluate organizations and opportunities, providing better incentives and resistance to exploit... (read more)

Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn't doing that mental work as seriously before :) I'm quite excited to see what comes of it.

Raemon:
Wanted to offer something stronger than an upvote on starting the prediction-making: that sounds like a great idea and I want to see how it goes. :)

kbog:
I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are always too quick to think about behavior in terms of incentives and rational action, especially when talking about each other. Almost no one around here is rationally selfish, some people are rationally altruistic, and most people are probably some combination of altruism, selfishness and irrationality. But here people are thinking that it's some really hard problem where rational people are likely to be dishonest, and so we need to make it rational for people to be honest, and so on. We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside, but I would guess that it's at least as internally important as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations but I don't think I've ever seen someone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring that a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent just by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs - well, here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.

Richard_Batty:
What sort of discussion of leadership would you like to see? How was this done in the Army?

kbog:
The military has a culture of leadership, which is related to people taking pride in their organization, as I described in a different comment. There are training classes and performance evaluations emphasizing leadership, but I don't think those make a large difference.

atucker:
I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research that it does, or from moving money to the things that are already known about. In the early days of EA I think that GiveWell's quality was a major factor in getting people to donate, but I think that the EA movement is large enough now that growth isn't necessarily related to rigor -- the largest charities (like Salvation Army or YMCA) don't seem to be particularly epistemically rigorous at all. I'm not sure how closely the marginal EA is checking claims, and I think that EA is now mainstream enough that more people don't experience strong social pressure to justify it.

TruePath:
The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness. No human behaves like some kind of Spock stereotype making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no, or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where the donors are best equipped to make a careful judgement, e.g., eschewing 'Give Now' impulse donations and fundraising parties with liquor and peer pressure, and insist on reminding us each time another donation is about to be deducted from our account, they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue there is no question which is more important, and if that requires implying they are highly effective, so be it. I mean, it's important charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that went into fundraising and overhead. It's unlikely the increased effectiveness that resulted would make up for the huge losses caused by forcing people to face the unpleasant fact that even the best charities can only send a fraction of their donation to the intended beneficiaries.

----------------------------------------

What EA char

This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.

I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.

Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I'm sure that others have had better experiences, so if this isn't as prevalent in your experience, I'm glad! It's just that I could have used stronger examples if I had written the post, instead of Sarah.

I'm not comfortable sharing examples that might make people identifiable. I'm too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an "Effective Kantian" because you're the only one in your friend group who wasn't willing to do something illegal? That isn't fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn't inspiring.

I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.

JBeshir:
I find it difficult to combine "I want to be nice and sympathetic and forgiving of people trying to be good people and assume everyone is" with "I think people are not taking this seriously enough and want to tell you how seriously it should be taken". It's easier to be forgiving when you can trust people to take it seriously. I've kind of erred on the side of the latter today, because "no one criticises dishonesty or rationalisation because they want to be nice" seems like a concerning failure mode, but it'd be nice if I were better at combining both.

Dawn Drescher:
Thanks. May I ask what your geographic locus is? This is indeed something that I haven’t encountered here in Berlin or online. (The only more recent example that comes to mind was something like “I considered donating to Sci-Hub but then didn’t,” which seems quite innocent to me.) Back when I was young and naive, I asked about such (illegal or uncooperative) options and was promptly informed of their short-sightedness by other EAs. Endorsing Kantian considerations is also something I can do without incurring a social cost.

Fluttershy:
Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3 I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)

Dawn Drescher:
Oh, thank you! <3 I’m trying my best. Oh yeah, the Berkeley community must be huge, I imagine. (Just judging by how often I hear about it and from DxE’s interest in the place.) I hope the mourning over Derek Parfit has also reminded people in your circles of the hitchhiker analogy and two-level utilitarianism. (Actually, I’m having a hard time finding out whether Parfit came up with it or whether Eliezer just named it for him on a whim. ^^)

Linch:
The hitchhiker is mentioned in Chapter One of Reasons and Persons. Interestingly, Parfit was more interested in the moral implications than the decision-theory ones.

Dawn Drescher:
Thanks!

One very object-level thing which could be done to make longform, persistent, not hit-and-run discussion in this particular venue easier: Email notifications of comments to articles you've commented in.

There doesn't seem to be a preference setting for that, and it doesn't seem to be default, so it's only because I remember to come check here repeatedly that I can reply to things. Nothing is going to be as good at reaching me as Facebook/other app notifications on my phone, but email would do something.

Peter Wildeford:
https://github.com/tog22/eaforum/issues/65

Tom_Ash:
Yes, the .impact team (particularly Patrick Brinich-Langlois) have been working on this for a little while, and it's long been part of our to-do list for the forum.

John_Maxwell:
Less Wrong has a "subscribe" feature that might be importable.

My thoughts on this are too long for a comment, but I've written them up here - posting a link in the spirit of making this forum post a comprehensive roundup: http://benjaminrosshoffman.com/honesty-and-perjury/

I have very mixed feelings about Sarah's post; the title seems inaccurate to me, and I'm not sure about how the quotes were interpreted, but it's raised some interesting and useful-seeming discussion. Two brief points:

  • I understand what causes people to write comments like "lying seems bad but maybe it's the best thing to do in some cases", but I don't think those comments usually make useful points (they typically seem pedantic at best and edgy at worst), and I hope people aren't actually guided by considerations like those. Most EAs I work wit
... (read more)

Copying my post from the Facebook thread:

Some of the stuff in the original post I disagree on, but the ACE stuff was pretty awful. Animal advocacy in general has had severe problems with falling prey to the temptation to exaggerate or outright lie for a quick win today, especially about health, and it's disturbing that apparently the main evaluator for the animal rights wing of the EA movement has already decided to join it and throw out actually having discourse on effectiveness in favour of plundering their reputation for more donations today. A mistake ... (read more)

Peter Wildeford:
I'm involved with ACE as a board member and independent volunteer researcher, but I speak for myself. I agree with you that the leafleting complaints are legitimate -- I've been advocating more skepticism toward the leafleting numbers for years. But I feel like it's pretty harsh to think ACE needs to be entirely replaced.

I don't know if it's helpful, but I can promise you that there's no intentional PR campaign on behalf of ACE to over-exaggerate in order to grow the movement. All I see is an overworked org with insufficient resources to double check all the content on their site. Judging the character of the ACE staff through my interactions with them, I don't think there was any intent to mislead on leaflets. I'd put it more as negligence arising from over-excitement from the initial studies (despite lots of methodological flaws), insufficient skepticism, and not fully thinking through how things would be interpreted (the claim that leafleting evidence is the strongest among AR is technically true). The one particular sentence, among the thousands on the site, went pretty much unnoticed until Harrison brought it up.

JBeshir:
Thanks for the feedback, and I'm sorry that it's harsh. I'm willing to believe that it wasn't conscious intent at publication time at least. But it seems quite likely to me from the outside that if they thought the numbers were underestimating they'd have fixed them a lot faster, and unless that's not true it's a pretty severe ethics problem. I'm sure it was a matter of "it's an error that's not hurting anyone because charity is good, so it isn't very important", or even just a generic motivation problem in volunteering to fix it, some kind of rationalisation that felt good rather than "I'm going to lie for the greater good" - the only people advocating that outright seem to be other commenters - but it's still a pretty bad ethics issue for an evaluator to succumb to the temptation to defer an unfavourable update.

I think some of this might be that the EA community was overly aggressive in finding them and sort of treating them as the animal charity GiveWell, because EA wanted there to be one, when ACE weren't really aiming to be that robust. A good, robust evaluator's job should be to screen out bad studies and to examine other peoples' enthusiasm and work out how grounded it was, with transparent handling of errors (GiveWell does updates that discuss them and such) and updating in response to new information, and from that perspective taking a severely poor study at face value and not correcting it for years, resulting in a large number of people getting wrong valuations, was a pretty huge failing. Making "technically correct" but very misleading statements which we'd view poorly if they came from a company advertising itself is also very bad in an organisation whose job is basically to help you sort through everyone else's advertisements.

Maybe the sensible thing for now is to assume that there is no animal charity evaluator that's good enough to safely defer to, and all there are are people who may point you to papers which caveat emptor, you have to check yours

DavidNash:
Maybe I'm being simple about this, but I find it's helpful to point people towards ACE because there don't seem to be any other charity researchers for that cause. Just by suggesting people donate to organisations that focus on animal farming, that seems like it can have a large impact even if it's hard to pick between the particular organisations.

Ben_West:
This seems like an exaggerated and unhelpful thing to say.

JBeshir:
Perhaps. It's certainly what the people suggesting that deliberate dishonesty would be okay are suggesting, and it is what a large amount of online advocacy does, and it is in effect what they did, but they probably didn't consciously decide to do it. I'm not sure how much credit not having consciously decided is worth, though, because that seems to just reward people for not thinking very hard about what they're doing, and they did it from a position of authority and (thus) responsibility.

I stand by the use of the word 'plundering' - it's surprising how some people are willing to hum and haw about maybe it being worth it, when doing it deliberately would be a very short-sighted, destroy-the-future-for-money-now act. It calls for such a strong term. And I stand by the position that it would throw out actually having discourse on effectiveness if people played those sorts of games, withheld information that would be bad for causes they think are good, etc, rather than being scrupulously honest. But again, to say they 'decided' to do those things is perhaps not entirely right.

I think in an evaluator, which is in a sense a watchdog for other peoples' claims, these kinds of things really are pretty serious - it would be scandalous if e.g. GiveWell were found to have been overexcited about something and ignored issues with it on this level. Their job is to curb enthusiasm, not just be another advocate. So I think taking it seriously is pretty called for. As I mentioned in a comment below, though, maybe part of the problem is that EA people tried to take ACE as a more robust evaluator than it was actually intending to be, and the consequence should be that they shift to regarding it as a source for pointers whose own statements are to be taken with a large grain of salt, the way individual charity statements are.

ACE's primary output is its charity recommendations, and I would guess that its "top charities" page is viewed ~100x more than the leafleting page Sarah links to.

ACE does not give the "top charity" designation to any organization which focuses primarily on leafleting, and e.g. the page for Vegan Outreach explicitly states that VO is not considered a top charity because of its focus on leafleting and the lack of robust research on that:

We have some concerns that Vegan Outreach has relied too heavily on poor sources of evidence to determine the effectiveness of leafleting as compared to other interventions... Why didn’t Vegan Outreach receive our top recommendation? Although we are impressed with Vegan Outreach’s recent openness to change and their attempts to measure their effectiveness, we still have reservations about their heavy focus on leafleting programs

You are proposing that ACE says negative things on its most prominent pages about leafleting, but left some text buried in a back page that said good things about leafleting as part of a dastardly plot to increase donations to organizations they don't even recommend.

This seems unlikely to me, to put it... (read more)

JBeshir:
This definitely isn't the kind of deliberate where there's an overarching plot, but it's not distinguishable from the kind of deliberate where a person sees a thing they should do or a reason to not write what they're writing and knowingly ignores it, though I'd agree in that I think it's more likely they flinched away unconsciously.

It's worth noting that while Vegan Outreach is not listed as a top charity it is listed as a standout charity, with their page here: https://animalcharityevaluators.org/research/charity-review/vegan-outreach/

I don't think it is good to laud positive evidence but refer to negative evidence only via saying "there is a lack of evidence", which is what the disclaimers do - in particular there's no mention of the evidence against there being any effect at all. Nor is it good to refer to studies which are clearly entirely invalid as merely "poor" while still relying on their data. It shouldn't be "there is good evidence" when there's evidence for, and "the evidence is still under debate" when there's evidence against, and there shouldn't be a "gushing praise upfront, provisos later" approach unless you feel the praise is still justified after the provisos. And "have reservations" is pretty weak. These are not good acts from a supposedly neutral evaluator.

Until the revision in November 2016, the VO page opened with: "Vegan Outreach (VO) engages almost exclusively in a single intervention, leafleting on behalf of farmed animals, which we consider to be among the most effective ways to help animals.", as an example of this. Even now I don't think it represents the state of affairs well. If in trying to resolve the matter of whether it has high expected impact or not, you went to the main review on leafleting (https://animalcharityevaluators.org/research/interventions/leafleting/), you'd find it began with "The existing evidence on the impact of leafleting is among the strongest bodies of evidence bearing on animal advocacy methods.". This

Ben_West:
Thanks for the response, it helps me understand where you're coming from. I agree that the sentence you cite could be better written (and in general ACE could improve, as could we all). I disagree with this though:

At the object level: ACE is distinguishable from a bad actor, for example due to the fact that their most prominent pages do not recommend charities which focus on leafleting.

At the meta level: I don't think we should have a conversational norm of "everyone should be treated as a bad actor until they can prove otherwise". It would be really awful to be a member of a community with that norm.

All this being said, it seems that ACE is responding in this post now, and it may be better to let them address concerns since they are both more knowledgeable and more articulate than me.

Peter Wildeford:
To be clear, it's inaccurate to describe the studies as showing evidence of no effect. All of the studies are consistent with a range of possible outcomes that include no effect (and even negative effect!), but they're also consistent with positive effect. That isn't to say that there is a positive effect. But it isn't to say there's a negative effect either. I think it is best to describe this as a "lack of evidence" one way or another.

I don't think there's good evidence that anything works in animal rights, and if ACE suggests anything anywhere to the contrary I'd like to push against it.

Good piece

While I think it's good to expect people to have read the same central set of works, I do think we lose out by not being able to synthesise discussions. Why isn't there a single community post with the state of the art on this discussion and where key disagreements are? It's understandable that I should have to find the various articles, but why not make it easier for me?

Raemon:
I'm not sure what you're imagining in terms of overall infrastructural update here. But here's a post that is in some sense a followup post to this: https://www.lesswrong.com/posts/FT9Lkoyd5DcCoPMYQ/partial-summary-of-debate-with-benquo-and-jessicata-pt-1

Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:

I’m grateful for this post. Honesty seems undervalued in EA.

An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s ... (read more)

In the interest of completeness: Sarah posted a follow-up on her post. Reply to Criticism on my EA Post.

I was definitely disappointed to see that post by Sarah. It seemed to defect from good community norms such as attempting to generously interpret people, in favour of quoting people out of context. She seems to be applying such rigorous standards to other people, yet rather loose standards to herself.

I was overall a bit negative on Sarah's post, because it demanded a bit too much attention, (e.g. the title), and seemed somewhat polemic. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of hones... (read more)

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc.. (perhaps deceptively) and then use those to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do X" where X is a good thing.

2') Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do good".

3) Is it a good idea to support (or not object) to others ... (read more)

Why Our Kind Can't Cooperate (Eliezer Yudkowsky)

Note to casual viewers that the content of this is not what the title makes it sound like. He's not saying that rationalists are doomed to ultimately lie to and cheat each other, just that there are some reasons why it's been hard.

From the recent Sarah Constantin post

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as

... (read more)

atucker:
I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through, and that it encourages you to be slightly more evil than you have to be. I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult". I'm not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.

kbog:
True. But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists. Maybe, but this is common folk wisdom where you should demand more applicable psychological evidence, instead of assuming that it's actually true to a significant degree. Especially among the atypical subset of the population which is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

atucker:
That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad. Being at all intellectually dishonest is much worse for an intellectual movement's prospects than it is for normal groups. The OP cites particular instances of cases where she thinks this accusation is true -- I'm not worried that this is likely in the future, I'm worried that this happens. I agree, but I think more likely ways of dealing with the issues involve more credible signals of dealing with the issues than just saying that they should be solvable.

kbog:
Okay, so there's some optimal balance to be had (there are always ways you can be more rigorous and less growth-oriented, towards a very unreasonable extreme). And we're trying to find the right point, so we can err on either side if we're not careful. I agree that dishonesty is very bad, but I'm just a bit worried that if we all start treating errors on one side like a large controversy then we're going to miss the occasions where we err on the other side, and then go a little too far, because we get really strong and socially damning feedback on one side, and nothing on the other side.

To be perfectly blunt and honest, it's a blog post with some anecdotes. That's fine for saying that there's a problem to be investigated, but not for making conclusions about particular causal mechanisms. We don't have an idea of how these people's motivations changed (maybe they'd have the exact same plans before having come into their positions, maybe they become more fair and careful the more experience and power they get). Anyway, the reason I said that was just to defend the idea that obtaining power can be good overall. Not that there are no such problems associated with it.

Note that the proposed norm within EA of following laws, at least in the US, is very demanding - see this article. A 14th very common violation I would add is not fully reporting income to the government, like babysitting money paid "under the table" (the "shadow economy"). A 15th would be pirated software/music. Interestingly, lying is not illegal in the US, though lying under oath is. So perhaps what we mean is: be as law-abiding as would be socially acceptable to most people? And then for areas that are more directly related to running organizations (n... (read more)

It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally, or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.

It's a nice ideal that we should all nod and say 'yes, that's a valid criticism' when our baby is attacked, but it's not going to happen. There is nothing we can do about this aspect... (read more)