Comment author: Peter_Hurford 16 January 2017 04:19:14PM 6 points

Cool, I always love work surfacing an otherwise unknown donation opportunity! I also find your initial framework compelling and think it motivates some of my donations, for example with SHIC.

Under "Reservations about the donation", I think it's worth mentioning the possibility that the threat is misperceived and the Trump administration turns out to not pose any significant risk to the integrity or existence of those datasets.

Comment author: Raemon 19 January 2017 04:21:32PM 1 point

Sort of related to that: is there a place where this sort of post (and the other recent Tostan post) can get aggregated?

Comment author: Brian_Tomasik 13 January 2017 12:04:10PM 9 points

I find such social-media policies quite unfortunate. :) I understand that they may be necessary in a world where political opponents can mine for the worst possible quotes, but such policies also reduce the speed and depth of engagement in discussions and reduce the human-ness of an organization. I don't blame ACE (or GiveWell, or others who have to face these issues). The problem seems more to come from (a) quoting out of context and (b) that even when things are quoted in context, one "off" statement from an individual can stick in people's minds more strongly than tons of non-bad statements do. There's not an easy answer, but it would be nice if we could cultivate an environment in which people aren't afraid to speak their minds. I would not want to work for an organization that restricted what I can say (ignoring stuff about proprietary company information, etc.).

Comment author: Raemon 13 January 2017 12:15:00PM 1 point

I agree that these are tradeoffs and that that's very sad. I don't have a very strong opinion on the overall net balance of the policy. But (it sounds like we both agree?) they are probably a necessary evil for organizations like this.

Comment author: Daniel_Dewey 12 January 2017 05:53:11PM 15 points

Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn't doing that mental work as seriously before :) I'm quite excited to see what comes of it.

Comment author: Raemon 13 January 2017 05:30:49AM 3 points

Wanted to offer something stronger than an upvote for starting the prediction-making: that sounds like a great idea, and I want to see how it goes. :)

Comment author: erikaalonso 13 January 2017 12:38:41AM * 21 points

Hi everyone! I’m here to formally respond to Sarah’s article, on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions, and reposting appears to be discouraged. I’ve decided to post here on the EA forum (as it tends to be the central meeting place for EAs), and will try to direct people from other places to this longer response.

Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques of our work, we take them very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the community to discuss amongst themselves prior to our posting a response. This is not only to encourage discussion amongst individual members of the community, but also so that we can prioritize responding to the concerns shared by the greatest number of community members.

It is clear to us now that we have failed to adequately communicate the uncertainty surrounding the outcomes of our leafleting intervention report. We absolutely disagree with claims of intentional deception and the characterization of our staff as acting in bad faith—we have never tried to hide our uncertainty about the existing leafleting research report, and as others have pointed out, it is clearly stated throughout the site where leafleting is mentioned. However, our reasoning that these disclaimers would be obvious was based on the assumption that those interested in the report would read it in its entirety. After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be. We have added a longer disclaimer to the top of our leafleting report page, expressing our current thoughts and noting that we will update the report sometime in 2017.

In addition, we have decided to remove the impact calculator (a tool which allowed users to enter donations directed to leafleting and receive high and low bounds on estimates of animals spared) from our website entirely, until we feel more confident that it is not misleading to those unfamiliar with cost-effectiveness calculations or with how the low/best/high error bounds express the uncertainty in those numbers. It is not typical for us to remove content from the site, but we intend to operate with an abundance of caution. This change seems to be the best option, given that some people believe we are being intentionally deceptive in keeping the calculator online.

Additionally, leadership at ACE all agree that it has been too long since we last updated our Mistakes page, so we have added new entries concerning issues we have reflected upon as an organization.

We also notice that there is concern among the community that our recommendations are suspect due to the weak evidence supporting our cost-effectiveness estimates of leafleting. The focus on leafleting for this criticism is confusing to us, as our cost-effectiveness estimates address many interventions, not only leafleting, and the evidence for leafleting is not much weaker than other evidence available about animal advocacy interventions. On top of that, cost-effectiveness estimates are only a factor in one of the seven criteria used in our evaluation process. In most cases, we don’t think they have changed the outcome of our evaluation decisions. While we haven’t come up with a solution for clarifying this point, we always welcome and appreciate constructive feedback.

We are committed to honesty, and are disappointed that the content we've published on the website concerning leafleting has caused so much confusion as to lead anyone to believe we are intentionally deceiving our supporters for profit. On a personal note, I’m devastated to hear that our error in communication has led to the character assassination not only of ACE, but of the people who comprise the organization—some of the hardest-working, most well-intentioned people I’ve ever worked with.

Finally, I would like everyone to know that we sincerely appreciate the constructive feedback we receive from people within and beyond the EA movement.

*Edited to add links

Comment author: Raemon 13 January 2017 05:29:51AM 1 point

Major props for the response. Your new social media policy sounds probably-wise. :)

Comment author: Raemon 12 January 2017 05:19:15PM 17 points

Issue 2: Running critical pieces by the people you're criticizing is necessary if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think "wait a week" is probably a reasonable norm.)

Reasons and considerations:

a) They may have already seen and engaged with a similar form of criticism before. If that's the case, it should be the critic's responsibility to read up on it and make sure their criticism is saying something new, or that it's addressing the latest, best thoughts of the person being criticized. (See Eliezer's 4 layers of criticism.)

b) You may not understand their reasons well, especially with something off-the-cuff on Facebook. The principle of charity is crucial because our natural tendency is to engage with weaker versions of ideas.

c) You may be wrong about things. Because our kind have trouble cooperating, in part because we tend to criticize a lot, it's important for criticism of Things We Are Currently Trying to Coordinate On to be made as accurate as possible through private channels before unleashing the storm.

Controversial things are intrinsically "public facing" (see: Scott Alexander's post on Trump, which he specifically asked people not to share and disabled comments on, but which Ann Coulter ended up retweeting). Because a post like this is controversial, it may end up being people's first exposure to Effective Altruism.

Similar to my issue 1, I think Sarah intended this post as tit-for-tat punishment for the EA Establishment's not responding enough to criticism. Assuming I'm correct about that, I disagree with it on two grounds:

  • I frankly think Ben Todd's post (the one most contributing to "EA establishment is defecting on meta-level epistemic discourse norms") was totally fine. GWWC has limited time. Everyone has limited time. Dealing with critics is only one possible use of their time and it's not at all obvious to me it's the best one on the margin. Ben even notes a possible course of action: communicate better about discussions GWWC has already had.

  • Even in the maximally uncharitable interpretation of Ben's comments... it's still important to run things by People You Are Criticizing Who Are In the Middle of a Project That Needs Coordination, for the reasons I said. If you're demanding time/attention/effort on the part of people running extensive projects, you should put time/effort into making sure your criticism is actually moving the dialog forward.

Comment author: Raemon 12 January 2017 04:34:40PM 20 points

Issue 1:

The title and tone of this post are playing with fire, i.e. courting controversy, in a way that (I think, but am not sure) undermines its goals.

A: There's the fact that describing these things as "lying" seems approximately as true as the first two claims, as others have mentioned. In a post about holding ourselves to high standards, this is kind of a big deal.

B: Personal integrity/honesty is only one element you need to have a good epistemic culture. Other elements you need include trust, and respect for people's time, attention, and emotions.

Just as every decision to bend the truth has consequences, every decision to inflame emotions has consequences, and these can be just as damaging.

I assume (hope) it was a deliberate choice to use a provocative title that'd grab attention. I think part of the goal was to punish the EA Establishment for not responding well to criticism and for attempting to control said criticism.

That may not be a bad choice. Maybe it's even necessary. But it's a questionable one.

The default world (see: modern politics and news) is a race to the bottom of outrage and manufactured controversy. People love controversy. I love controversy. I felt an urge to share this article on Facebook and say things off the cuff about it. I resisted, because I think it would be harmful to the epistemic integrity of EA.

Maybe it's necessary to write a provocative title with a hazy definition of "lying" in order to get everyone's attention and force a conversation. (In the same way it may be necessary to exaggerate global warming by 4x to get Jane Q. Public to care.) But it is certainly not the Platonic ideal of the epistemic culture we need to build.

Comment author: Michael_PJ 27 October 2016 07:20:54PM * 10 points

This concerns me because "EA" is such a vaguely defined group.

Here are some clearly defined groups:

  • The EA FB group
  • The EA forum
  • Giving What We Can

All of these have a clear definition of membership and a clear purpose. I think it is entirely sensible for groups like this to have some kinds of rules, and processes for addressing and potentially ejecting people who don't conform to those rules. Because the group has a clear membership process, I think most people will accept that being a member of the group means acceding to the rules of the group.

"EA", on the other hand, is a post hoc label for a group of people who happened to be interested in the ideas of effective altruism. One does not "apply" to be an "EA". Nor does can we meaningfully revoke membership except by collectively refusing to engage with someone.

I think that attempts to police the borders of a vague group like "EA" can degenerate badly.

Firstly, since anyone who is interested in effective altruism has a plausible claim to be a member of "EA" under the vague definition, there will continue to be many people using the label with no regard for any "official" definition.

Secondly (and I hope this won't happen), such a free-floating label is very vulnerable to political (ab)use. We open ourselves up to arguments about whether or not someone is a "true" EA, or schisms between various "official" definitions. At risk of bringing up old disagreements, the arguments about vegetarian catering at last year's EA Global were already veering in this direction.

This seems to me to have been a common fate for vague group nouns over the years, with feminism being the most obvious example. We don't want to have wars between the second- and third-wave EAs!

My preferred solution is to avoid "EA" as a noun. Apart from the dangers I mentioned above, its origin as a label for an existing group of people gives it all sorts of connotations that are only really valid historically: rationalist thinking style, frank discussion norms, appreciation of contrarianism ... not to mention being white, male, and highly educated. But practically, having such a label is just too useful.

The only other suggestion I can think of is to make a clearly defined group for which we have community norms. For lack of a better name, we could call it "CEA-style EA". Then the CEA website could include a page that describes the core values of "CEA-style EAs" and some expectations of behaviour. At that point we again have a clearly defined group with a clear membership policy, and policing the border becomes a much easier job.

In practice, you probably wouldn't want an explicit application process; rather, membership would be something you can claim for yourself, unless the group arbiter (CEA) has actively decreed that you cannot. Indeed, even if someone has never claimed to be a "CEA-style EA", declaring that they do not meet the standard can send a powerful signal.

Comment author: Raemon 27 October 2016 10:50:41PM 1 point

I think I like the ideas suggested here better than the various permutations suggested elsewhere. Or at least agree with the concerns raised.

Comment author: Kathy 27 October 2016 01:52:26PM * 4 points

I'm half wondering how much of the upset was influenced by a general suspicion of, or aversion to, advertising and persuasion.

From one perspective, it's almost as if Gleb used to be one of the "advertising/persuasion is icky" people, and decided to bite the bullet and just do this thing, even if it seemed whacked out and icky...

At first I thought maybe part of the problem was Gleb didn't have any vision of how it could be done better. Now, I think it might actually be part of a systemic problem I keep noticing. Our social network generally does not have a clear vision of how it could be done better.

How many of us can easily think of specific strategies to promote InIn that sit well with all of our ethical standards and effectiveness criteria?

If a lot of people here are beginning with the belief that promotion is either icky or ineffective, we have set ourselves up for failure. This may encourage us to behave as if one either needs to accept being ineffective or needs to allow oneself to be icky ... which may result in choosing whichever things appear to be the icky-effective ones.

I think effective altruism can have both ethics and effectiveness at the same time. I do not believe there is actually a trade-off where choosing one necessarily sacrifices the other. I believe there are probably even ways in which one can enhance and build on the other.

I keep thinking that it would really benefit the whole movement if more people became more aware of what sorts of things result in disasters and how to promote things well. This is another way that such awareness could be beneficial.

Comment author: Raemon 27 October 2016 02:38:53PM 2 points

Huh, this is a good point. Having a clear sense of what to do with advertising (both within the community and without) would be really helpful.

Comment author: kbog 25 October 2016 01:22:46PM * 1 point

The impression I get from Jeff's post is that the people involved took great pains to be as reasonable as possible. They don't even issue recommendations for what to do in the body of the post--they just present observations. And this after ~2000 edits over the course of more than two months. This makes me think they'd have been willing to go to the trouble of following a formal procedure.

I mean for the community as a whole to say, "oh, look, our thought leaders decided to reject someone - ok, let's all shut them out."

Drama seems pretty universal--I don't think it can be wished away.

There's the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko's Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.

There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it'd be nice to know when it's acceptable to ban a user from the EA forum, Facebook group, etc.

Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I'd ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to.

I'm not especially impressed with the reference class of social movements when it comes to doing good, and I'm not sure we should do a particular thing just because it's what other social movements do.

We're not following their lead on how to change the world. We're following their lead on how to treat other members of the community. That's something which is universal to social movements.

I keep seeing other communities implode due to divisive internet drama, and I'd rather this not happen to mine. I would at least like my community to find a new way to implode. I'd rather be an interesting case study for future generations than an uninteresting one.

Is this serious? EA is way more important than yet another obscure annal in Internet history.

So what's the right way to take action, if you and your friends think someone is a bad actor who's harming your movement?

Tell it to them. Talk about it to other people. Run my organizations the way I see fit.

Comment author: Raemon 26 October 2016 03:12:14PM 3 points

Tell it to them. Talk about it to other people. Run my organizations the way I see fit.

That's what we did for a year+. The problem didn't go away.

Comment author: Raemon 14 November 2015 03:30:09PM 6 points

Hmm. I sort of thought "Quality-Adjusted Life Year" effectively conveyed the thing I wanted (as opposed to "Disability-Adjusted Life Year", which definitely didn't).

In any case, if people are getting confused on that point, WALY seems good as a term to hold in reserve, to explain to people who think we're all about health. (I think if we just popularized it as a new term, especially if we hadn't worked out any way to actually measure things other than health in a robust fashion, it'd just end up with the same problems as QALY.)
