Comment author: kbog (EA Profile) 27 February 2017 06:52:45AM 3 points

What about online activism? There are lots of debates in various corners of the Internet over AI which often involve people in various areas of academia and tech. It seems like it could be valuable and feasible for people who are sufficiently educated on the basic issues of AI alignment to correct misconceptions and spread good ideas.

As another idea, there are certain kinds of information which would be worth collecting: surveys of relevant experts, taxonomies of research ideas and developments in the field, information about the political and economic sides of AI research. I suppose this could fall into gruntwork for safety orgs, but they don't comprehensively ask for every piece of information and work which could be useful.

Also - this might sound strange, but if someone wants to contribute then it's their choice: students and professionals might be more productive if they had remote personal assistants to handle various tasks which are peripheral to their primary tasks and responsibilities. If the assistant is known to be an EA, value-aligned on cause priorities, and moderately familiar with the technical work, then having someone do this seems very feasible.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy (EA Profile) 23 February 2017 07:50:54AM 2 points

You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence in the merits of packing umbrellas, which has been low all along; incentivize me to not be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.

I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog (EA Profile) 25 February 2017 01:30:25AM -1 points

You get way too riled up over this.

As you pointed out yourself, people around here systematically spend too much time on the negative-sum activity (http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/) of speculating on their personal theories for what's wrong with EA, usually from a position of lacking formal knowledge or seasoned experience with social movements. So when some speculation of the sort is presented, I say exactly what is flawed about the ideas and methodology, and will continue to do so until epistemic standards improve. People should not take every opportunity to question whether we should all pack umbrellas; they should go about their ordinary business until they find a sufficiently compelling reason for everyone to pack umbrellas, and then state their case.

And, if my language seems too "adversarial"... honestly, I expect people to deal with it. I don't communicate in any way which is out of bounds for ordinary Internet or academic discourse. So, I'm not "riled up", I feel entirely normal. And insisting upon a pathological level of faux civility is itself a kind of bias which inhibits subtle ingredients of communication.

Comment author: kbog (EA Profile) 24 February 2017 08:39:25PM 3 points

Consequentialism has a hard time with praiseworthiness and blameworthiness, since it implies that they ought to be assigned on the basis of which assignment has the best consequences. It might be nice to rescue a non-instrumental understanding of praise and blame from the consequentialist framework, but there are different ways of doing it, and we ought to be precise about how we go about it.

I think you ought to start out with solid premises of value and how one ought to act, and then come to a theory of praiseworthiness/blameworthiness based on that if you still see a need for it. So worry about what is impermissible, what is permissible, what is obligatory, and what is supererogatory. Or, if you like, adopt a scalar theory like some utilitarians do, where the goodness of an action is on a gradient from good to bad.

  1. Then, though I am not obligated to relinquish my interests (because mine hold the same value as all others, and all people are of equal value), I am obligated, at the very minimum, to divide my surplus resources equally, keeping half for myself and distributing half to others who are in most need of it.

I don't see a good justification here. Surely, if everyone's interests are equal, then this naively implies that we ought to divide surplus resources equally among all, keeping 1/7,000,000,000 for ourselves? Why half for oneself and half for everybody else?

You're saying that we need to respect everyone's interests equally. But there are different ways of understanding that. What about spending more than 1/7,000,000,000 of one's income on oneself, so that one can be more productive? (I expect everyone to agree with this logic.) And how do you take differences in marginal utility into account? I may value the interests of a poor man equally to my own, but under utilitarian logic this would imply that I ought to give him more than half of what I own, since he has a greater use for it than I do. And under egalitarian logic, a principle of equality implies that we ought to only care about the interests of the worst-off.

And why should we divide our surplus resources in such a way? Why not all our resources, if we're giving equal interest? Only putting our surplus resources up for consideration, when other people lack basic resources, seems to be an unequal way of doing things.

It is only beyond this point that giving would be thought of as altruistic

Nitpicking, but donating to support others simply is altruism. The way you're using the term here is unusual.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy (EA Profile) 22 February 2017 04:34:44PM 0 points

The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for.

Lila has probably read those. I think Singer’s book contained something to the effect that the book is probably not meant for anyone who wouldn’t pull out the child. MacAskill’s book is more of a how-to; such a meta question would feel out of place there, but I’m not sure; it’s been a while since I read it.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don’t make me feel like my narratives are less likely than yours, but I increasingly feel like this discussion is not going to go anywhere unless someone jumps in with solid knowledge of history or organizational culture, historical precedents and empirical studies to cite, etc.

So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

That’s a good way to approach the question! We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.

I haven’t really dared to take a stab at how such an improvement should be worded. I’d rather base this on a bit of survey data among people who feel that EA values are immoral from their perspective. The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them. They’ll probably already have known that (and the differences may be too subtle to have helped Lila), but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

And no other social movement has had this level of obsessive theorizing about movement dynamics.

Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD

It’s quite possible that I’m overly sensitive to being attacked (by outside critics), and I should just ignore it and carry on doing my EA things, but I don’t think I overestimate this threat to the extent that further investment of our time into this discussion would be proportional.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog (EA Profile) 22 February 2017 06:42:54PM -2 points

Lila has probably read those.

Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top level comment, Lila is simply confused about many of the foundational philosophical issues which she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don't over-update based on an example like this.

Note also that someone who engages with EA to the extent of reading one of these books will mostly ignore the short taglines accompanying marketing messages, which seem to be what you're after. And people who engage with the community will mostly ignore both books and marketing messages when it comes to making an affective judgement.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

And texts that don't appeal to moral obligation make a weak argument that is simply ignored. That results in apathy and a frivolous approach.

A lot of your objections take the form of likely-sounding counternarratives to my narratives.

Yes, and it's sufficient. You are proposing a policy which will necessarily hurt short-term movement growth. The argument depends on being able to establish a narrative to support its value.

We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc.

But on my side, we shouldn't only count those who join the movement and stay; we should also count those who hear about it and are lightly positive about it, share some articles and books with their friends, publish a positive critique about it, start a conversation with their friends about EA, like it on social media, etc.

With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined.

I don't see how. The more restrictive your message, the less appealing and widespread it is.

The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them.

What a great way to signal-boost messages which harm our movement. Time for the outside view: do you see any organization in the whole world which does this? Why?

Are you really advocating messages like "EA is great but if you don't agree with universally following expected value calculations then it may not be for you?" If we had done this with any of the things described here, we'd be intellectually dishonest - since EA does not assume absurd expected value calculations, or invertebrate sentience, or moral realism.

It's one thing to try to help people out by being honest with them... it's quite another to be dishonest in a paternalistic bid to keep them from "wasting time" by contributing to our movement.

but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

That is what the vast majority of people who read about EA already do.

It’s quite possible that I’m overly sensitive to being attacked (by outside critics),

Not only that, but you're sensitive to the extent that you're advocating caving in to their ideas and giving up the ideological space they want.

This is why we like rule consequentialism and heuristics instead of doing act-consequentialist calculations all the time. A movement that gets emotionally affected by its critics and shaken by people leaving will fall apart. A movement that makes itself subservient to the people it markets to will stagnate. And a movement whose response to criticism is to retreat to narrower and narrower ideological space will become irrelevant. But a movement that practices strength and assures its value on multiple fronts will succeed.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy (EA Profile) 22 February 2017 09:33:32AM 1 point

I wanted to pose a question (that I found plausible), and now you’ve understood what I was asking, so my work here is pretty much done.

But I can also, for a moment longer, stay in my role and argue for the other side, because I think there are a few more good arguments to be made.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions as the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

It’s true that I hadn’t considered the “online charisma” of the situation, but I don’t feel like Option B is what I’d like to argue for. Neither is Option A.

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked to an FAQ entry that fits their case only so-so.

It’s really just the basic Eternal September Effect that I’m describing, part of what Durkheim described as anomie.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Also what Fluttershy said. If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case. My automatic reaction to your objections is, “Oh, I must’ve been wrong!” then “Well, good thing I didn’t state my opinion strongly. That’d be embarrassing,” and only after some deliberation will I remember that I had already considered many of these objections and gradually update back in the direction of my previous hypothesis. My opinions are quite unusually fluid.

Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

Other social movements end up like feminism, with oppositions and toxoplasma. Successful social movements don’t just happen by default, without anyone worrying about these sorts of dynamics; or at least I don’t think they do. That doesn’t mean that my stab at a partial solution goes in the correct direction, but it currently seems to me like an improvement.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

Let’s exclude the last example or it’ll get recursive. How would you realize that? I’ve been a lurker in a very authoritarian forum for a while. They had some rules and the core users trusted the authorities to interpret them justly. Someone got banned every other week or so, but they were also somewhat secretive, never advertised the forum to more than one specific person at a time and only when they knew the person well enough to tell that they’d be a good fit for the forum. The core users all loved the forum as a place where they could safely express themselves.

I would’ve probably done great there, but the authoritarian thing scared me on a System 1 level. The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

The lying problem thing is a case in point. She didn’t identify with the movement, just picked out some quotes, invented a story around them, and later took most of it back. Why does she even write something about a community she doesn’t feel part of? If most of her friends had been into badminton and she didn’t like it, she wouldn’t have caused a stir in the badminton community accusing it of having a lying or cheating problem or something. She would’ve tried it for a few hours and then largely ignored it, not needing to make up any excuse for disliking it.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

Moral trades are Pareto improvements, not compromises.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Nobody who has left EA has done so with a loud public bang.

I counted this post as a loud, public bang.

People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

I don’t think so, or at least not when it’s put in less extreme terms. I’d love to get input on this from an expert in social movements or organizational culture at companies.

Consultancy firms are known for their high churn rates, but that seems like an exception to me. Otherwise, high onboarding costs (which we definitely have in EA), a gradual lowering of standards, minimization of communication overhead, and surely many other factors drive a lot of companies to hire with high precision and low recall rather than the other way around, and then to invest heavily in retaining the good employees they have. (Someone at Google, for example, said “The number one thing was to have an incredibly high bar for talent and never compromise.” They don’t want to get lots of people in, get them up to speed, hope they’ll contribute something, and lose most of them again after a year. They would rather grow more slowly than get diluted like that.)

We probably can’t interview and reject people who are interested in EA, so the closest thing we can do is to help them decide as well as possible whether it’s really what they want to become part of long-term.

I don’t think this sort of thing, from Google or from EAs, would come off as pathetic.

But again, this is the sort of thing where I would love to ask an expert like Laszlo Bock for advice rather than trying to piece together some consistent narrative from a couple of books and interviews. I’m really a big fan of just asking experts.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog (EA Profile) 22 February 2017 11:37:47AM -1 points

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

What I wrote in response to the OP took me maybe half an hour. If you want to save time then you can easily make quicker, smaller points, especially if you're a subject matter expert. The issue at stake is more about the type of attitude and response than the length. What you're worried about here applies equally well against all methods of online discourse, unless you want people to generally ignore posts.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked to an FAQ entry that fits their case only so-so.

The purpose is not to satisfy the person writing the OP. That person has already made up their mind, as we've observed in this thread. The purpose is to make observers and forum members realize that we know what we are talking about.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

Okay, so what kinds of things are you thinking of? I'm kind of lost here. The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for. You said that the one sentence descriptions of EA aren't good enough, but they can't possibly be, and no one joins a social movement based on its one sentence description.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

The addition of new members does not prevent old members from having high-quality discussions. It only increases the number of new-person discussions, which seems perfectly good to me.

If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case.

I'm not. But the methodology you're using here is suspect and prone to bias.

Other social movements end up like feminism, with oppositions and toxoplasma.

Or they end up successful and achieve major progress.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

Successful social movements don’t just happen by default, without anyone worrying about these sorts of dynamics; or at least I don’t think they do.

No social movement has done things like this, i.e. trying to save time and effort for outsiders who are interested in joining by pushing off their interest, at the expense of its own short term goals. And no other social movement has had this level of obsessive theorizing about movement dynamics.

How would you realize that?

By calling out such behavior when I see it.

The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

That sounds like a great way to ensure intellectual homogeneity as well as slow growth. The whole side of this which I ignored in my above post is that it's completely wrong to think that restricting your outward messages will not result in false negatives among potential additions to the movement. So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

Why does she even write something about a community she doesn’t feel part of?

She's been with the rationalist community since early days as a member of MetaMed, so maybe that has something to do with it.

Movements really get criticized by people who are on the opposite end of the spectrum and completely uninvolved. Every political faction gets its worst criticism from ideological opponents. Rationalists and EAs get most of their criticism from ideological opponents. I just don't see much of this hypothesized twilight-zone criticism that comes from nearly-aligned people, and when it does come it tends to be interesting and worth listening to. You only think of it as unduly significant because you are more exposed to it; you have no idea of the extent and audience of the much more negative pieces written by people outside the EA social circle.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

I am not talking about not playing nice with other value systems. This is about whether to make conscious attempts to homogenize our community with a single value system and to prevent people with other value systems from making the supposed mistake of exploring our community. It's not cooperation, it's sacrificial, and it's not about moral systems, it's about people and their apparently precious time.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Stipulate any definition, the point will be the same; you should not be worried about EAs making too many moral trades, because they're going to be Pareto improvements.

I counted this post as a loud, public bang.

Then you should be much less worried about loud public bangs and much more worried about getting people interested in effective altruism.

I’d love to get input on this from an expert in social movements or organizational culture at companies.

Companies experience enormous costs in training new talent and opportunity costs if their talent needs to be replaced. Our onboarding costs are very low in comparison. Companies also have a limited amount of talent they can hire, while a social movement can grow very quickly, so it makes sense for companies to be selective in ways that social movements shouldn't be. If a company could hire people for free then it would be much less selective. Finally, the example you selected (Google) is unusually selective even compared to other companies.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 22 February 2017 02:43:18AM 1 point

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog (EA Profile) 22 February 2017 10:51:42AM -2 points

which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

I've noticed that strawmanning and poor interpretations of my writing are a trend in your writing. Cut it out.

I did not state that lurkers should respect us at the expense of us disrespecting them. I stated quite clearly that lurkers feel nothing of the sort, since they are observers. This has nothing to do with who they are, and everything to do with the fact that they are passively reading the conversation rather than being a subject of it. Rather, I argued that lurkers should be led to respect us instead of being unimpressed by us, and that they would be unimpressed by us if they saw that the standard reaction to somebody criticizing and leaving the movement was to leave their complaints unassailed and to affirm that such people don't fit in the movement.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy (EA Profile) 21 February 2017 09:20:04AM 3 points

There be dragons! Dragons with headaches!

I think the discussion that has emerged here is about a point orthogonal to the one I wanted to make.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

A year ago I would’ve simply agreed or said the same thing, and there would’ve been no second level to my decision process, but after reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective has been joined by a more cooperation-based strategic one.

So I certainly agree with you that I care incomparably more about reducing suffering than about pandering to some privileged person’s divergent moral goals, but here are some more things I currently believe:

  1. The EA movement has a huge potential to reduce suffering (and further related moral goals).
  2. All the effort we put into strengthening the movement will fall far short of its potential if the movement degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, or any of various other failure modes.
  3. People losing interest in EA or even leaving with a loud, public bang is one thing that is really, really bad for cohesion within the movement.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree. When someone leaves with a loud, public bang, they’ll likely pull even more people after them.

If I may, for the moment, redefine “self-interested” to include the “self-interested” pursuit of altruistic goals at the expense of other people’s (selfish and nonselfish) goals, then such a “self-interested” approach will run us into several of the walls or failure modes above:

  1. Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated. They may come to feel like the EA project has failed or is about to, and so won’t want to invest in it anymore. Maybe they’d rather join some adjacent movement or an object-level organization, but the potential of the consolidated EA movement will be lost.
  2. Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals. I increasingly have the feeling that the whole talk about ACE being overconfident is just a meme perpetuated by people who haven’t been following ACE or animal advocacy closely.
  3. Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking. It happens when we moral-trade away too much of our actual moral goals.
  4. But if we trade away too little, we’ll create enemies, resulting in more and more zero-sum fights with groups with other moral goals.

The failure modes most relevant to this post are the lethargy one and the zero-sum fights one:

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach. They’ll further my moral goal of reducing suffering for the time they’re around, but if they’re, e.g., a Kantian, they’ll afterwards feel instrumentalized and become a more or less vocal opponent. That’s probably more costly for us than whatever they may’ve contributed along the way, unless their contribution was as trajectory-changing as I think movement building (or movement destroying) can be.

So I should’ve clarified, also in the interest of cooperation, that I care incomparably more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog (EA Profile) 21 February 2017 09:46:23PM 2 points

but after reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective has been joined by a more cooperation-based strategic one.

This is not about strategic cooperation. This is about strategic sacrifice - in other words, doing things for people that they never do for you or others. Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

All the effort we put into strengthening the movement will fall far short of its potential if the movement degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, or any of various other failure modes.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

People losing interest in EA or even leaving with a loud, public bang is one thing that is really, really bad for cohesion within the movement.

Nobody who has left EA has done so with a loud public bang. People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree.

At the same time, when someone joins EA, they'll pull some of their social circle after them.

Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals.

We are talking about communications between people within EA and people outside EA. I don't recognize a clear connection between these issues.

Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking.

Sure, but I don't think that people with credible but slightly different views of ethics and decision theory ought to be excluded. I'm not so close-minded that I think that anyone who isn't a thorough expected value maximizer ought to be excluded from our community.

It happens when we moral-trade away too much of our actual moral goals.

Moral trades are Pareto improvements, not compromises.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach.

But we are not exploiting them in any way. Exploitation involves manipulation and deception. I am in no way saying that we should lie about what EA stands for. Someone who finds out that they actually don't care about EA will realize that they simply didn't know enough about it before joining, which doesn't cause anyone to feel exploited.

Overall, you seem to be really worried about people criticizing EA, something which only a tiny fraction of people who leave will do to a significant extent. This pales in comparison to the actual contributions which people make - something which every EA does. You'd have to believe that verbally criticizing EA is more significant than the contributions of many, perhaps dozens, of people actually being in EA. This is odd.

So I should’ve clarified, also in the interest of cooperation, that I care incomparably more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

So I'll tell you how to make the people who are reading this thread respect us.

Imagine that you come across a communist forum and someone posts a thread saying "why I no longer identify as a Marxist." This person says that they don't like how Marxists don't pay attention to economic research and they don't like how they are so hostile to liberal democrats, or something of the sort.

Option A: the regulars of the forum respond as follows. They say that they actually have tons of economic research on their side, and they cite a bunch of studies from heterodox economists who have written papers supporting their claims. They point out the flaws and shallowness in mainstream economists' attacks on their beliefs. They show empirical evidence of successful central planning in Cuba or the Soviet Union or other countries. Then they say that they're friends with plenty of liberal democrats, and point out that they never ban them from their forum. They point out that the only times they downvote and ignore liberal democrats is when they're repeating debunked old arguments, but they give examples of times they have engaged seriously with liberal democrats who have interesting ideas. And so on. Then they conclude by telling the person posting that their reasons for leaving don't make any sense, because people who respect economic literature or want to get along with liberal democrats ought to fit in just fine on this forum.

Option B: the regulars on the forum apologize for not making it abundantly clear that their community is not suited for anyone who respects academic economic research. They affirm the OP's claim that anyone who wants to get along with liberal democrats is not welcome and should just stay away. They express deep regret at the minutes and hours of their intellectual opponents' time that they wasted by inviting them to engage with their ideas. They put up statements and notices on the website explaining all the quirks of the community which might piss people off, and then suggest that anyone who is bothered by those things could save time if they stayed away.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions as the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

In response to comment by Fluttershy on Why I left EA
Comment author: Owen_Cotton-Barratt 21 February 2017 09:43:53AM 3 points

Really liked this comment. Would be happy to see a top level post on the issue.

Comment author: kbog (EA Profile) 21 February 2017 03:39:02PM -1 points

I agree that it would be better out of context, since it's strawmanning the comment that it's trying to respond to.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 5 points

I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog (EA Profile) 21 February 2017 07:12:01AM 1 point

That's because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold

No, it's not cold. It's indifferent, and normal. No one in any social movement worries about wasting the time of people who come to learn about things. Churches don't worry that they're wasting people's time when inviting them to come in for a sermon; they don't advertise all the reasons that people don't believe in God. Feminists don't worry that they're wasting people's time by not advertising that they want white women to check their privilege before colored ones. BLM doesn't worry that it's wasting people's time by not advertising that they don't welcome people who are primarily concerned with combating black-on-black violence. And so on.

Learning what EA is about does not take a long time. This is not like asking people to read Marx or the LessWrong sequences. The books by Singer and MacAskill are very accessible and do not take long to read. If someone reads it and doesn't like it, so what? They heard a different perspective before going back to their ordinary life.

is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

Who thinks "I'm an effective altruist and I feel unwelcome here in effective altruism because people who don't agree with effective altruism aren't properly shielded from our movement"? If you want to make people feel welcome then make it a movement that works for them. I fail to see how publicly broadcasting incompatibility with others does any good.

Sure, it's nice to have a clearly defined outgroup that you can contrast yourselves with, to promote solidarity. Is that what you mean? But there are much easier and safer punching bags to be used for this purpose, like selfish capitalists or snobby Marxist intellectuals.

Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

Intersectionality does not mean simply looking at people's experiences from different backgrounds. It means critiquing and moving past sweeping modernist narratives of the experiences of large groups by investigating the unique ways in which orthogonal identity categories interact. I don't see why it's helpful, given that identity hasn't previously entered the picture at all in this conversation, and that there don't seem to be any problematic sweeping identity narratives floating around.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people.

I am a little bit confused here. You are the one saying that we should make outward facing statements telling people that EA isn't suited for them. How is that not going to drive away valuable people, in particular the ones who have diverse perspectives?

And in what way is failing to make such statements an unhealthy conversational norm? I have never seen any social movement perform this sort of behavior. If doing so is a conversational norm then it's not one which people have grown accustomed to expect.

Moreover, the street goes two ways. Here's a different perspective which you may have overlooked due to your background: some people want to be in a movement that's solid and self-assured. Creating an environment where language is constantly being policed for extreme niceness can lead some people to feel uninterested in engaging in honest dialogue.

If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

You can reject quantitative metrics, and you can also give some credence to allegations of bias. But you can't rely on this sort of thing to form a narrative. You have to find some kind of evidence.

When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse.

This is a strawman of my statements, which I have no interest in validating through response.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 02:47:18AM 1 point

There's nothing necessarily intersectional/background-based about that

People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.

One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect of others' time would be that:

  • respecting people's intellectual autonomy and being generally kind tends to bring more skilled people to EA
  • attracting more skilled EAs is worth it in utilitarian terms
  • there are only some people who have had experiences that would point them to this correct conclusion

Sure, they're valid perspectives. They're also untenable, and we don't agree with them

The kind of 'kindness' being discussed here [is]... another utilitarian-ish approach, equally impersonal as donating to charity, just much less effective.

I feel that both of these statements are untrue of myself, and I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

In response to comment by Fluttershy on Why I left EA
Comment author: kbog (EA Profile) 21 February 2017 03:05:28AM 1 point

I'm not going to concede the ground that this conversation is about kindness or intellectual autonomy. Because it's really not what's at stake. This is about telling certain kinds of people that EA isn't for them.

there are only some people who have had experiences that would point them to this correct conclusion

But this is about optimal marketing and movement growth, a very objective empirical question. It doesn't seem to have much to do with personal experiences; we don't normally bring up intersectionality in debates about other ordinary things like this; we just talk about experiences and knowledge in common terms, since race and so on aren't dominant factors.

By the way, think of the kind of message that would be sent. "Hey you! Don't come to effective altruism! It probably isn't for you!" That would be interpreted as elitist and close-minded, because there are smart people who don't have the same views that other EAs do and they ought to be involved.

Let's be really clear. The points given in the OP, even if steelmanned, do not contradict EA. They happened to cause trouble for one person, that's all.

I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

You can interpret that kind of speech prescriptively - i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.

Edit: also, since you stated earlier that you don't actually identify as EA, it really doesn't make any sense for you to complain about how we talk about what we believe.
