Comment author: AGB 22 April 2017 10:51:44PM *  5 points [-]

So I probably disagree with some of your bullet points, but unless I'm missing something I don't think they can be the crux of our disagreement here, so for the sake of argument let's suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.

...I still don't see how to get from here to (for example) 'The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time'. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they 'should', why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry's post introducing EA Ventures has just 16 upvotes*.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.

FWIW I'm very open to suggestions on how we could settle this question more definitively. I expect CEA pushing ahead with the funds if the community as a whole really is net-negative on them would indeed be a mistake. I don't have any great ideas at the moment though.

*http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/

Comment author: Fluttershy 23 April 2017 06:44:56PM 1 point [-]

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.

I'd say this is correct. The EA Forum itself has such a selection effect, though it's weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less, or feel less strongly about it, equally with the opinions of others. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist's perspective, and then to link this up to why EA Funds continuing has the consequence of showing people that projects which are reported on and marketed with relatively lower-integrity methods can succeed despite (or even because of?) this.

I'd also agree that, at the time of Will's post, it would have been incorrect to say:

The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

Comment author: Kerry_Vaughan 21 April 2017 04:47:01PM 4 points [-]

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, which all point in the direction of making EA Funds look better than it is, things don't look good.

From my point of view, the context for the first section was to explain why we updated in favor of EA Funds persisting past the three-month trial before the trial was over. This was important to communicate because several people expressed confusion about our endorsement of EA Funds while the project was still technically in beta. This is why the first section highlights mostly positive information about EA Funds whereas later sections highlight challenges, mistakes etc.

I think the update that your comment is suggesting is that I should have made the first section longer and should have provided a more detailed discussion of the considerations for and against concluding that EA Funds has been well-received so far. Is that what you think or do you think I should make a different update?

Comment author: Fluttershy 22 April 2017 09:53:26PM 3 points [-]

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and the support gathered later on was no more than one would expect, then the newer data should strengthen your confidence in your prior estimate of whether EA Funds is well received, but it shouldn't cause you to update in favor of it being well received. This may sound like a nitpick, but it is actually a crucially important consideration if you've framed things as if you'll continue on with the project only if you update in the direction of having more public support than before.
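(To spell out the updating logic: in odds form, posterior odds = prior odds × likelihood ratio. If the support observed during the trial is roughly what you'd expect whether or not the community is positive on EA Funds, the likelihood ratio is close to 1, so the posterior should sit roughly where the prior did. A minimal sketch with purely illustrative numbers, not estimates of anything about EA Funds:

    # Purely illustrative numbers, not estimates of anything about EA Funds.
    prior_odds = 3.0        # e.g. 3:1 prior odds that the community is positive on EA Funds
    likelihood_ratio = 1.0  # trial data roughly as expected under either hypothesis
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)   # 3.0 -- unchanged, i.e. no update toward "well received"

The data can still be consistent with the project being well received; it just isn't, on its own, evidence for updating in that direction.)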

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds and implies that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused). This is part of what some of us mean when we talk about a tax on criticism in EA.

Comment author: AGB 22 April 2017 12:51:34PM 13 points [-]

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community is as-a-whole positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA funds, that's fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.

Comment author: Fluttershy 22 April 2017 08:20:20PM 3 points [-]

In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.

Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much--I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn't have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn't a great deal of community support for EA funds has to do with the ways in which I'd expect the data on how much support there actually is to be filtered. For example:

  • the method in which Kerry presented his survey data made it look like there was more support than there was
  • the fact that Kerry presented the data in this way suggests it's relatively more likely that Kerry will do so again in the future if given the chance
  • social desirability bias should also make it look like there's more support than there is
  • the fact that it's socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there's more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
  • we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.

It also doesn't help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn't show anything useful either way, I'm not comfortable with accepting the claim that EA Funds has been particularly well-received.

Comment author: Fluttershy 21 April 2017 11:23:52AM 0 points [-]

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and EA Funds' website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that's been made since the discussion which happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the range of new information about effective giving opportunities that the EA Funds' Fund Managers encounter.

I've spoken with a couple EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP’s last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which I'm sure the author read given that he linked to it, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). However, EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms, and then neglecting to mention that you've made this assumption, betrays a lack of intent to honestly inform EAs.
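(For readers unfamiliar with how a figure like +56 is computed: NPS takes 0-10 responses to a "how likely are you to recommend" question and reports the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch of that arithmetic in Python, using made-up responses chosen only to illustrate how such a score can arise:

    def net_promoter_score(ratings):
        # Standard NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6).
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    # Nine hypothetical responses: 6 promoters and 1 detractor give 100 * (6 - 1) / 9, i.e. about 56.
    print(net_promoter_score([10, 10, 9, 9, 8, 7, 10, 9, 3]))  # -> 56

Nothing here depends on EA Funds' actual survey data; the point is only what the headline number measures.)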

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea to not "take NPS too seriously", though in this case I wouldn't say that the benefit of using NPS in the first place outweighed the cost incurred by the resulting incorrect suggestion that we should feel there was a respectable amount of quantitative support for the conclusions drawn in this post.

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, which all point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that things don't look good because this post decreases how much I am willing to trust communications made on the behalf of EA funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

Comment author: Fluttershy 31 March 2017 12:09:59AM 2 points [-]

This is a problem, both for the reasons you give:

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.

and through this mechanism, which you correctly point out:

The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.

The above two considerations combine extremely poorly with the following:

I’ve noticed IJing happens much more among effective altruists than academic philosophers.

Another consequence of this tendency, when it emerges, is that communicating a felt sense of something is much harder to do, and less rewarding to do, when there's some level of social expectation that arguments from intuition will be attacked. Note that the felt senses of experts often do contain information that's not otherwise available when said experts work in fields with short feedback loops. (This is more broadly true: norms of rudeness, verbal domination, using microaggressions, and nitpicking impede communication more generally, and your more specific concept of IJ does occur disproportionately often in EA).

Note also that it may be especially harmful for a social expectation to develop whereby people believe on a gut level that they'll receive about as much criticism, verbal aggression, and so on regardless of how correct or useful their statements are (see especially the second paragraph of p. 2).

Comment author: Fluttershy 08 March 2017 01:44:16PM *  2 points [-]

I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:

the risk of losing flexibility by enforcing what is an “EA view” or not

It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on individual EAs. I suspect that enumerating what sorts of things "count" as EA endeavors isn't a strictly necessary prerequisite for forming such a panel.

I can see why some people held this concern, partly because "defining what does and doesn't count as an EA endeavor" clusters in thing-space with "keeping an eye out for people acting in untrustworthy and non-cooperative ways towards EAs", but these two things don't have to go hand in hand.

the risk of consolidating too much influence over EA in any one organisation or panel

Fair enough. As with the last point, the panel would likely consolidate less unwanted influence over EA if it focused solely on calling out sufficiently dishonestly harmful behavior by anyone who self-identified as an EA, and made no claims as to whether any individuals or organizations "counted" as EAs.

the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling

This seems like a good concern, in that it's a bit harder for me to address satisfactorily. Hopefully, though, there would be some clear-cut cases the panel could choose to consider, too; the case of Intentional Insights' poor behavior was eventually quite clear, for one. I would guess that the less clear cases would tend to be the ones where a clear resolution would be less impactful.

In response, we toned back the ambitions of the proposed ideas.

I'd have likely done the same. But that's the wrong thing to do.

In this case, the counterfactual to having some sort of panel to call out behavior which causes unreasonable amounts of harm to EAs is relying on the initiative of individuals to call out such behavior. This is not a sustainable solution. Your summary of your previous post puts it well:

There’s very little to deal with people representing EA in ways that seem to be harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process.

Community action is all that we had before the Intentional Insights fiasco, and community action is all that we're back to having now.

I didn't get to watch the formation of the panel you discuss, but it seems like a nontrivial amount of momentum, which was stirred up by the harm Intentional Insights caused EA, went into its creation. To the extent that that momentum is no longer available because some of it was channeled into the creation of this panel, we've lost a chance at building a tool to protect ourselves against agents and organizations who would impose costs on, and harm, EAs and EA overall. Pending further developments, I have lowered my opinion of everyone directly involved accordingly.

Comment author: Fluttershy 07 March 2017 01:57:56PM 4 points [-]

Noted! I can understand that it's easy to feel like you're overstepping your bounds when trying to speak for others. Personally, I'd have been happy for you all to take a more central leadership role, and would have wanted you all to feel comfortable if you had decided to do so.

My view is that we still don't have reliable mechanisms to deal with the sorts of problems mentioned (i.e. the Intentional Insights fiasco), so it's valuable when people call out problems as they have the ability to. It would be better if the EA community had ways of calling out such problems by means other than requiring individuals to take on heroic responsibility, though!

This having been said, I think it's worth explicitly thanking the people who helped expose Intentional Insights' deceitful practices—Jeff Kaufman, for his original post on the topic, and Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, Claire Zabel, and others who have not been mentioned or who contributed anonymously, for writing this detailed document.

Comment author: Fluttershy 24 February 2017 04:25:04AM 9 points [-]

I believe you when you say that you don't benefit much from feedback from people not already deeply engaged with your work.

There's something really noticeable to me about the manner in which you've publicly engaged with the EA community through writing for the past while. You mention that you put lots of care into your writing, and what's most noticeable about this for me is that I can't find anything that you've written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really isn't meant to be; I find that writing in such a way is actually somewhat resource intensive in terms of both time, and something roughly like mental energy.

(I find it's generally easier to develop a felt sense for when someone else is paying sufficient attention to conversational nuances regarding civility than it is to point out specific examples, but your discussion of how you feel about receiving criticism is a good example of this sort of civility).

As you and James mention, public writeups can be valuable to readers, and I think this is true to a strong extent.

I'd also say that, just as importantly, writing this kind of well thought out post which uses healthy and civil conversational norms creates value from a leadership/coordination point of view. Leadership in terms of teaching skills and knowledge is important too, but I guess I'm used to thinking of those as separate from leadership in terms of exemplifying civility and openness to sharing information. If it were more common for people and foundations to write frequently and openly, and communicate with empathy towards their audiences when they did, I think the world would be the better for it. You and other senior Open Phil and GiveWell staff are very much respected in our community, and I think it's wonderful when people are happy to set a positive example for others.

(Apologies if I've conflated civility with openness to sharing information; these behaviors feel quite similar to me on a gut level—possibly because they both take some effort to do, but also nudge social norms in the right direction while helping the audience.)

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 09:46:23PM *  2 points [-]

but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

This is not about strategic cooperation. This is about strategic sacrifice - in other words, doing things for people that they never do for you or others. Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

All the effort we put into strengthening the movement will fall far short of their potential if it degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, and any other of various failure modes.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

People losing interest in EA or even leaving with a loud, public bang are one thing that is really, really bad for cohesion within the movement.

Nobody who has left EA has done so with a loud public bang. People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree.

At the same time, when someone joins EA, they'll pull some of their social circle after them.

Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals.

We are talking about communications between people within EA and people outside EA. I don't recognize a clear connection between these issues.

Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking.

Sure, but I don't think that people with credible but slightly different views of ethics and decision theory ought to be excluded. I'm not so close-minded that I think that anyone who isn't a thorough expected value maximizer ought to be excluded from our community.

It happens when we moral-trade away too much of our actual moral goals.

Moral trades are Pareto improvements, not compromises.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach.

But we are not exploiting them in any way. Exploitation involves manipulation and deception. I am in no way saying that we should lie about what EA stands for. Someone who finds out that they actually don't care about EA will realize that they simply didn't know enough about it before joining, which doesn't cause anyone to feel exploited.

Overall, you seem to be really worried about people criticizing EA, something which only a tiny fraction of people who leave will do to a significant extent. This pales in comparison to actual contributions which people make - something which every EA does. You'll have to believe that verbally criticizing EA is more significant than the contributions of many, perhaps dozens, of people actually being in EA. This is odd.

So I should’ve clarified, also in the interest of cooperation, I care indefinitely more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

So I'll tell you how to make the people who are reading this thread respect us.

Imagine that you come across a communist forum and someone posts a thread saying "why I no longer identify as a Marxist." This person says that they don't like how Marxists don't pay attention to economic research and they don't like how they are so hostile to liberal democrats, or something of the sort.

Option A: the regulars of the forum respond as follows. They say that they actually have tons of economic research on their side, and they cite a bunch of studies from heterodox economists who have written papers supporting their claims. They point out the flaws and shallowness in mainstream economists' attacks on their beliefs. They show empirical evidence of successful central planning in Cuba or the Soviet Union or other countries. Then they say that they're friends with plenty of liberal democrats, and point out that they never ban them from their forum. They point out that the only times they downvote and ignore liberal democrats is when they're repeating debunked old arguments, but they give examples of times they have engaged seriously with liberal democrats who have interesting ideas. And so on. Then they conclude by telling the person posting that their reasons for leaving don't make any sense, because people who respect economic literature or want to get along with liberal democrats ought to fit in just fine on this forum.

Option B: the regulars on the forum apologize for not making it abundantly clear that their community is not suited for anyone who respects academic economic research. They affirm the OP's claim that anyone who wants to get along with liberal democrats is not welcome and should just stay away. They express deep regret at the minutes and hours of their intellectual opponents' time that they wasted by inviting them to engage with their ideas. They put up statements and notices on the website explaining all the quirks of the community which might piss people off, and then suggest that anyone who is bothered by those things could save time if they stayed away.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions of the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 22 February 2017 02:43:18AM 1 point [-]

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 03:05:28AM *  1 point [-]

I'm not going to concede the ground that this conversation is about kindness or intellectual autonomy. Because it's really not what's at stake. This is about telling certain kinds of people that EA isn't for them.

there are only some people who have had experiences that would point them to this correct conclusion

But this is about optimal marketing and movement growth, a very objective empirical question. It doesn't seem to have much to do with personal experiences; we don't normally bring up intersectionalism in debates about other ordinary things like this, we just talk about experiences and knowledge in common terms, since race and so on aren't dominant factors.

By the way, think of the kind of message that would be sent. "Hey you! Don't come to effective altruism! It probably isn't for you!" That would be interpreted as elitist and close-minded, because there are smart people who don't have the same views that other EAs do and they ought to be involved.

Let's be really clear. The points given in the OP, even if steelmanned, do not contradict EA. They happened to cause trouble for one person, that's all.

I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

You can interpret that kind of speech prescriptively - i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.

Edit: also, since you stated earlier that you don't actually identify as EA, it really doesn't make any sense for you to complain about how we talk about what we believe.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 7 points [-]

I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether or not people who care about the same sorts of things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, AB-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.
