In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 09:46:23PM *  2 points

but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

This is not about strategic cooperation. This is about strategic sacrifice - in other words, doing things for people that they never do for you or for others. As I pointed out elsewhere, other social movements don't worry about this sort of thing.

All the effort we put into strengthening the movement will fall far short of its potential if it degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, and any other of various failure modes.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc. - must be prevented from being pursued to a pathological extent.

People losing interest in EA or even leaving with a loud, public bang are one thing that is really, really bad for cohesion within the movement.

Nobody who has left EA has done so with a loud public bang. People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join and then leave, or never to come at all. And people joining and then leaving is generally better for the movement than people never coming at all.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree.

At the same time, when someone joins EA, they'll pull some of their social circle after them.

Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals.

We are talking about communication between people within EA and people outside EA. I don't see a clear connection between that and these issues.

Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking.

Sure, but I don't think that people with credible but slightly different views of ethics and decision theory ought to be excluded. I'm not so closed-minded as to think that anyone who isn't a thoroughgoing expected value maximizer has no place in our community.

It happens when we moral-trade away too much of our actual moral goals.

Moral trades are Pareto improvements, not compromises.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach.

But we are not exploiting them in any way. Exploitation involves manipulation and deception. I am in no way saying that we should lie about what EA stands for. Someone who finds out that they actually don't care about EA will realize that they simply didn't know enough about it before joining, which doesn't cause anyone to feel exploited.

Overall, you seem to be really worried about people criticizing EA, something which only a tiny fraction of people who leave will do to a significant extent. This pales in comparison to the actual contributions people make - something which every EA does. You would have to believe that one person's verbal criticism of EA outweighs the contributions of many, perhaps dozens, of people actually participating in EA. That is odd.

So I should’ve clarified, also in the interest of cooperation, I care indefinitely more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

So I'll tell you how to make the people who are reading this thread respect us.

Imagine that you come across a communist forum and someone posts a thread saying "why I no longer identify as a Marxist." This person says that they don't like how Marxists ignore economic research, and that they don't like how hostile Marxists are to liberal democrats, or something of the sort.

Option A: the regulars of the forum respond as follows. They say that they actually have tons of economic research on their side, and they cite a bunch of studies from heterodox economists who have written papers supporting their claims. They point out the flaws and shallowness in mainstream economists' attacks on their beliefs. They show empirical evidence of successful central planning in Cuba or the Soviet Union or other countries. Then they say that they're friends with plenty of liberal democrats, and point out that they never ban them from their forum. They point out that the only times they downvote and ignore liberal democrats is when they're repeating debunked old arguments, but they give examples of times they have engaged seriously with liberal democrats who have interesting ideas. And so on. Then they conclude by telling the person posting that their reasons for leaving don't make any sense, because people who respect economic literature or want to get along with liberal democrats ought to fit in just fine on this forum.

Option B: the regulars on the forum apologize for not making it abundantly clear that their community is not suited for anyone who respects academic economic research. They affirm the OP's claim that anyone who wants to get along with liberal democrats is not welcome and should just stay away. They express deep regret at the minutes and hours of their intellectual opponents' time that they wasted by inviting them to engage with their ideas. They put up statements and notices on the website explaining all the quirks of the community which might piss people off, and then suggest that anyone who is bothered by those things could save time if they stayed away.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions as the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 22 February 2017 02:43:18AM 2 points

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 03:05:28AM *  1 point

I'm not going to concede the ground that this conversation is about kindness or intellectual autonomy, because that's really not what's at stake. This is about telling certain kinds of people that EA isn't for them.

there are only some people who have had experiences that would point them to this correct conclusion

But this is about optimal marketing and movement growth, a very objective empirical question. It doesn't seem to have much to do with personal experiences; we don't normally bring up intersectionality in debates about other ordinary things like this - we just talk about experiences and knowledge in common terms, since race and so on aren't dominant factors.

By the way, think of the kind of message that would be sent. "Hey you! Don't come to effective altruism! It probably isn't for you!" That would be interpreted as elitist and closed-minded, because there are smart people who don't share the same views as other EAs, and they ought to be involved.

Let's be really clear. The points given in the OP, even if steelmanned, do not contradict EA. They happened to cause trouble for one person, that's all.

I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

You can interpret that kind of speech prescriptively - i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 4 points

I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether people who care about the same things we do waste their time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way and extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to easily grasp how these underlying factors mean that not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 02:18:26AM *  1 point

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have!

This isn't about "let's all check our privileges"; this is "the trivial interests of wealthy people are practically meaningless in comparison to the things we're trying to accomplish."

I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than you do, and that their perspectives are valid too.

There's nothing necessarily intersectional/background-based about that; you can find philosophers in the Western moral tradition arguing the same thing. Sure, they're valid perspectives. They're also untenable, and we don't agree with them, since they place wealthy people's efforts, time, and autonomy on par with the need to mitigate suffering in the developing world - a position that many other philosophers who have written on the subject also consider untenable. Having a perspective from another culture does not excuse you from having a flawed moral belief.

But don't get confused. This is not "should we rip people off/lie to people in order to prevent mothers from having to bury their little kids" or some other moral dilemma. This is "should we go out of our way to give disclaimers and pander to the people we market to, something which other social movements never do, in order to save them time and effort." It's simply insane.

(As a more general note, and not something I want to address to kbog in particular, I've noticed that I do sometimes System-1-feel like I have to justify arguments for being considerate in terms of utilitarianism. Utilitarianism does justify kindness, but feeling emotionally compelled to argue for kindness on grounds of utilitarianism rather than on grounds of decency feels like overkill, and makes it feel like something is off--even if it is just my emotional calibration that's off.)

The kind of 'kindness' being discussed here - going out of one's way to make your communication maximally considerate to all the new people it's going to reach - is not grounded in traditional norms and inclinations to be kind to your fellow person. It's another utilitarian-ish approach, just as impersonal as donating to charity, but much less effective.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 02:47:18AM *  0 points

There's nothing necessarily intersectional/background-based about that

People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.

One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect for others' time would be that:

  • respecting people's intellectual autonomy and being generally kind tends to bring more skilled people to EA
  • attracting more skilled EAs is worth it in utilitarian terms
  • there are only some people who have had experiences that would point them to this correct conclusion

Sure, they're valid perspectives. They're also untenable, and we don't agree with them

The kind of 'kindness' being discussed here [is]... another utilitarian-ish approach, just as impersonal as donating to charity, but much less effective.

I feel that both of these statements are untrue of myself, and I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 20 February 2017 09:34:20PM *  1 point

it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement

How so?

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

It would be a problem if we loosened our standards and weakened the movement to accommodate them. But I don't see what's harmful about someone thinking that EA is for them, exploring it and then later deciding otherwise.

and for their own sake.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 12:21:26AM 1 point

We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have! I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than you do, and that their perspectives are valid too.

(As a more general note, and not something I want to address to kbog in particular, I've noticed that I do sometimes System-1-feel like I have to justify arguments for being considerate in terms of utilitarianism. Utilitarianism does justify kindness, but feeling emotionally compelled to argue for kindness on grounds of utilitarianism rather than on grounds of decency feels like overkill, and makes it feel like something is off--even if it is just my emotional calibration that's off.)

In response to Why I left EA
Comment author: Fluttershy 20 February 2017 10:51:16AM 2 points

For me, most of the value I get out of commenting in EA-adjacent spaces comes through tasting the ways in which I gently care about our causes and community. (Hopefully it is tacit that one of the many warm flavors of that value for me is in the outcomes our conversations contribute to.)

But I suspect that many of you are like me in this way, and also that, in many broad senses, former EAs have different information than the rest of us. Perhaps the feedback we hear when anyone shares some of what they've learned before they go will tend to be less rewarding for them to share, and more informative to us to receive, than most other feedback. In that spirit, I'd like to affirm that it's valuable to have people in similar positions to Lila's share. Thanks to Lila for doing so.

Comment author: Fluttershy 16 February 2017 02:22:34AM 5 points

Personally, I've noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the "opposite" strategy, of being more positive on the pledge and feeling quite comfortable with GiveWell's approach to splitting, are seeing a very different social landscape than I am. Maybe they're aware of people who wouldn't have engaged with EA in any way other than by taking the pledge, or they've spent relatively more time engaging with GiveWell-style core EA material than I have?

Between the fact that filter bubbles exist, and the fact that I don't get out much (see the last three characters of my username), I think I'd be likely to not notice if lots of the disagreement on this whole cluster of related topics (honesty/pledging/partial funding/etc.) was due to people having had differing social experiences with other EAs.

So, perhaps this is a nudge towards reconciliation on both the pledge and on Good Ventures' take on partial funding. If people's social circles tend to be homogeneous-ish, some people will know of lots of underfunded promising EAs and projects (which indirectly compete with GV and GiveWell top charities for resources), and others will know of few such EAs/projects. If this is the case, we should expect most people's intuitions about how many funding opportunities there are for small projects (which only small donors can identify effectively) to be systematically off in one way or another. Perhaps a reasonable thing to do here would be to discuss ways of estimating how many underfunded small projects there are that EAs would be eager to fund if only they knew about them.

Comment author: Fluttershy 09 February 2017 11:36:25AM *  2 points

You're clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.

When the word "care" is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word "care" is used carelessly. This has all sorts of hard to track negative effects; the sort of people who are irked by things like misuse of the word "care" are disproportionately likely to be the sort of people who are careful about this sort of thing themselves. It's easy to see how a harmful "positive" feedback loop might be created in such a scenario if not paying attention to the connotations of words can drive our friends away.

Comment author: RobBensinger 07 February 2017 09:46:26PM 6 points

Anonymous #4:

I think that EA as it exists today doesn't provide much value. It focuses mostly on things that are obvious today ('malaria is bad'), providing people a slightly better way to do what they already think is a good idea, rather than making bets on high-impact large-scale interventions. It also places too much emphasis on alleviating suffering, to the exclusion of Kantian, contractarian, etc. conceptions of ethical obligation.

(By this I primarily have in mind that too many EAs are working on changing the subjective experience of chickens and crickets in a particular direction, on the assumption that qualia/subjectivity is a relatively natural kind, that it exhibits commensurate valences across different species, and that these valences track moral importance very closely. It strikes me as more plausible that morality as we know it is, loosely speaking, a human thing -- a phenomenon that's grounded in our brain's motivational systems and directed at achieving cooperate-cooperate equilibria between intelligent agents simulating one another. Since crickets aren't sophisticated enough to form good mental models of humans (or even of other crickets), they just aren't the kinds of physical systems that are likely to be objects of much moral concern, if any. I obviously don't expect all EAs to agree with me on any of these points, but I think far too many EAs rigidly adhere to the same unquestioned views on moral theory, which would be bad enough even if those views were likely to be true.)

The only EA movement-building organization that strikes me as useful for long-run considerations is 80,000 Hours. GiveWell deliberately avoids the kinds of interventions and organizations that are likely to be useful, and Good Ventures doesn't strike me as willing to explore hard enough to do anything interesting. More generally, I feel like a lot of skilled people are now wasting their time on EA (e.g., Oliver Habryka), many of whom would otherwise be working on issues more directly related to AGI.

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

Comment author: Fluttershy 09 February 2017 09:02:15AM 4 points

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

I'd be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I've thought a lot about how we can support each other. I'm not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.

Speculation ahoy:

1) I wonder if, say, Bay Area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off things that typically work best for us are different from the things that work for most people in important ways.

2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is unusual, and more toxic than the way status works in more typical social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).

3) I wonder how much good work could be done on anyone's mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I've never tried something like this before, but I'd eventually like to.

Well, writing that comment was a journey. I doubt I'll stand by all of what I've written here tomorrow morning, but I do think that I'm correct on some points, and that I'm pointing in a few valuable directions.

Comment author: RomeoStevens 08 February 2017 10:12:46PM 7 points

Meta: this seems like it was a really valuable exercise based on the quality of the feedback. Thank you for conceiving it, running it, and giving thought to the potential side effects and systematic biases that could affect such a thing. It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Comment author: Fluttershy 09 February 2017 04:14:08AM 1 point

It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?

Comment author: Fluttershy 09 February 2017 03:34:36AM *  8 points

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.

Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.

It makes some sense that there could be gaps which Open Phil isn't able to fill, even ones it thinks are no less effective than the opportunities it's funding instead. Was that what was meant here, or am I missing something? If not, I wonder what such a funding gap for a cost-effective opportunity might look like (an example would help)?

There's a part of me that keeps insisting that it's counter-intuitive that Open Phil is having trouble making as many grants as it would like, while also employing people who will manage an EA fund. I'd naively think that there would be at least some sort of tradeoff between producing new suggestions for things the EA fund might fund, and new things that Open Phil might fund. I suspect you're already thinking closely about this, and I would be happy to hear everyone's thoughts.

Edit: I'd meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.
