Comment author: redmoonsoaring 18 March 2017 05:38:04PM 13 points [-]

While I see some value in detailing commonly-held positions like this post does, and I think this post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of qualified, armchair writing doesn't seem to me like a very cost-effective use of EA resources, and I worry we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold, empirical writing tends to get.

Comment author: Telofy  (EA Profile) 27 March 2017 02:14:15PM 0 points [-]

While enough people are skeptical about rapid growth and no one (I think) wants to sacrifice integrity, the warning to be careful about the politicization of EA is a timely and controversial one because well-known EAs have put a lot of might behind Hillary’s election campaign and the prevention of Brexit, to the point that the lines between private efforts and EA efforts may blur.

Comment author: RomeoStevens 25 February 2017 08:15:29PM *  6 points [-]

Thinking about what to call this phenomenon because it seems like an important aspect of discourse. Namely, making no claims but only distinctions, which generates no arguments. This gave Superintelligence a distinct flavor, I think intentionally, to create a framework within which to have a dialog absent the usual contentious claims. This was good for that particular use case, but I think that deployed indiscriminately it leads to a kind of big-tent approach inimical to real progress.

I think potentially it is the right thing for OpenPhil to currently be doing since they are first trying to figure out how the world actually is with pilot grants and research methodology testing etc. Good to not let it infect your epistemology permanently though. Suggested counter force: internal non-public betting market.

Comment author: Telofy  (EA Profile) 26 February 2017 08:13:00PM *  0 points [-]

Namely, making no claims but only distinctions

Or taxonomies. Hence: The Taxoplasma of Ra.

(Sorry, I should post this in DEAM, not here. I don’t even understand this Ra thing.)

But I really like this concept!

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 25 February 2017 01:30:25AM *  -1 points [-]

You get way too riled up over this.

As you pointed out yourself, people around here systematically spend too much time on the negative-sum activity (http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/) of speculating on their personal theories for what's wrong with EA, usually from a position of lacking formal knowledge or seasoned experience with social movements. So when some speculation of the sort is presented, I say exactly what is flawed about the ideas and methodology, and will continue to do so until epistemic standards improve. People should not take every opportunity to question whether we should all pack umbrellas; they should go about their ordinary business until they find a sufficiently compelling reason for everyone to pack umbrellas, and then state their case.

And, if my language seems too "adversarial"... honestly, I expect people to deal with it. I don't communicate in any way which is out of bounds for ordinary Internet or academic discourse. So, I'm not "riled up", I feel entirely normal. And insisting upon a pathological level of faux civility is itself a kind of bias which inhibits subtle ingredients of communication.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy  (EA Profile) 25 February 2017 08:37:51AM 0 points [-]

We’ve been communicating so badly that I would’ve thought you’d be one to reject an article like the one you linked. Establishing the sort of movement that Eliezer is talking about was the central motivation for making my suggestion in the first place.

If you think you can use a cooperative type of discourse in a private conversation where there is no audience that you need to address at the same time, then I’d like to remember that for the next time when I think we can learn something from each other on some topic.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 22 February 2017 06:42:54PM *  -2 points [-]

Lila has probably read those.

Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top level comment, Lila is simply confused about many of the foundational philosophical issues which she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don't over-update based on an example like this.

Note also that someone who engages with EA to the extent of reading one of these books will mostly ignore the short taglines accompanying marketing messages, which seem to be what you're after. And people who engage with the community will mostly ignore both books and marketing messages when it comes to making an affective judgement.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

And texts that don't appeal to moral obligation make a weak argument that is simply ignored. That results in apathy and a frivolous approach.

A lot of your objections take the form of likely-sounding counternarratives to my narratives.

Yes, and it's sufficient. You are proposing a policy which will necessarily hurt short-term movement growth. The argument depends on establishing a narrative to support its value.

We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc.

But on my side, we shouldn't only count those who join the movement and stay; we should also count those who hear about it and are lightly positive about it, share some articles and books with their friends, publish a positive critique about it, start a conversation with their friends about EA, like it on social media, etc.

With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined.

I don't see how. The more restrictive your message, the less appealing and widespread it is.

The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them.

What a great way to signal-boost messages which harm our movement. Time for the outside view: do you see any organization in the whole world which does this? Why?

Are you really advocating messages like "EA is great but if you don't agree with universally following expected value calculations then it may not be for you?" If we had done this with any of the things described here, we'd be intellectually dishonest - since EA does not assume absurd expected value calculations, or invertebrate sentience, or moral realism.

It's one thing to try to help people out by being honest with them... it's quite another to be dishonest in a paternalistic bid to keep them from "wasting time" by contributing to our movement.

but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

That is what the vast majority of people who read about EA already do.

It’s quite possible that I’m overly sensitive to being attacked (by outside critics),

Not only that, but you're sensitive to the extent that you're advocating caving in to their ideas and giving up the ideological space they want.

This is why we like rule consequentialism and heuristics instead of doing act-consequentialist calculations all the time. A movement that gets emotionally affected by its critics and shaken by people leaving will fall apart. A movement that makes itself subservient to the people it markets to will stagnate. And a movement whose response to criticism is to retreat to narrower and narrower ideological space will become irrelevant. But a movement that practices strength and assures its value on multiple fronts will succeed.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy  (EA Profile) 23 February 2017 07:50:54AM 2 points [-]

You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence into the merits of packing umbrellas, which has been low all along; incentivize me to not be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.

I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 22 February 2017 11:37:47AM *  0 points [-]

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

What I wrote in response to the OP took me maybe half an hour. If you want to save time then you can easily make quicker, smaller points, especially if you're a subject matter expert. The issue at stake is more about the type of attitude and response than the length. What you're worried about here applies equally well against all methods of online discourse, unless you want people to generally ignore posts.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked an FAQ entry that only fits their case so-so.

The purpose is not to satisfy the person writing the OP. That person has already made up their mind, as we've observed in this thread. The purpose is to make observers and forum members realize that we know what we are talking about.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

Okay, so what kinds of things are you thinking of? I'm kind of lost here. The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for. You said that the one sentence descriptions of EA aren't good enough, but they can't possibly be, and no one joins a social movement based on its one sentence description.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

The addition of new members does not prevent old members from having high-quality discussions. It only increases the amount of new-person discussions, which seems perfectly good to me.

If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case.

I'm not. But the methodology you're using here is suspect and prone to bias.

Other social movements end up like feminism, with oppositions and toxoplasma.

Or they end up successful and achieve major progress.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

Successful social movements don’t happen by default by not worrying about these sorts of dynamics, or I don’t think they do.

No social movement has done things like this, i.e. trying to save time and effort for outsiders who are interested in joining by pushing off their interest, at the expense of its own short term goals. And no other social movement has had this level of obsessive theorizing about movement dynamics.

How would you realize that?

By calling out such behavior when I see it.

The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

That sounds like a great way to ensure intellectual homogeneity as well as slow growth. The whole side of this which I ignored in my above post is that it's completely wrong to think that restricting your outward messages will not result in false negatives among potential additions to the movement. So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

Why does she even write something about a community she doesn’t feel part of?

She's been with the rationalist community since early days as a member of MetaMed, so maybe that has something to do with it.

Movements really get criticized by people who are on the opposite end of the spectrum and completely uninvolved. Every political faction gets its worst criticism from ideological opponents. Rationalists and EAs get most of their criticism from ideological opponents. I just don't see much of this hypothesized twilight-zone criticism that comes from nearly-aligned people, and when it does come it tends to be interesting and worth listening to. You only think of it as unduly significant because you are more exposed to it; you have no idea of the extent and audience of much more negative pieces written by people outside the EA social circle.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

I am not talking about not playing nice with other value systems. This is about whether to make conscious attempts to homogenize our community with a single value system and to prevent people with other value systems from making the supposed mistake of exploring our community. It's not cooperation, it's sacrificial, and it's not about moral systems, it's about people and their apparently precious time.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Stipulate any definition, the point will be the same; you should not be worried about EAs making too many moral trades, because they're going to be Pareto improvements.

I counted this post as a loud, public bang.

Then you should be much less worried about loud public bangs and much more worried about getting people interested in effective altruism.

I’d love to get input on this from an expert in social movements or organizational culture at companies.

Companies experience enormous costs in training new talent and opportunity costs if their talent needs to be replaced. Our onboarding costs are very low in comparison. Companies also have a limited amount of talent they can hire, while a social movement can grow very quickly, so it makes sense for companies to be selective in ways that social movements shouldn't be. If a company could hire people for free then it would be much less selective. Finally, the example you selected (Google) is one of the more unusually selective companies, compared to other ones.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy  (EA Profile) 22 February 2017 04:34:44PM 0 points [-]

The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for.

Lila has probably read those. I think Singer’s book contained something to the effect that the book is probably not meant for anyone who wouldn’t pull out the child. MacAskill’s book is more of a how-to; such a meta question would feel out of place there, but I’m not sure; it’s been a while since I read it.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don’t make me feel like my narratives are less likely than yours, but I increasingly feel like this discussion is not going to go anywhere unless someone jumps in with solid knowledge of history or organizational culture, historical precedents and empirical studies to cite, etc.

So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

That’s a good way to approach the question! We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.

I haven’t really dared to take a stab at how such an improvement should be worded. I’d rather base this on a bit of survey data among people who feel that EA values are immoral from their perspective. The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them. They’ll probably already have known that (and the differences may be too subtle to have helped Lila), but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

And no other social movement has had this level of obsessive theorizing about movement dynamics.

Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD

It’s quite possible that I’m overly sensitive to being attacked (by outside critics), and I should just ignore it and carry on doing my EA things, but I don’t think I overestimate this threat to the extent that I think further investment of our time into this discussion would be proportional.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 22 February 2017 02:43:18AM 1 point [-]

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

In response to comment by Fluttershy on Why I left EA
Comment author: Telofy  (EA Profile) 22 February 2017 09:34:17AM 2 points [-]

Thank you. <3

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 21 February 2017 09:46:23PM *  2 points [-]

but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

This is not about strategic cooperation. This is about strategic sacrifice - in other words, doing things for people that they never do for you or others. Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

All the effort we put into strengthening the movement will fall far short of their potential if it degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, and any other of various failure modes.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

People losing interest in EA or even leaving with a loud, public bang are one thing that is really, really bad for cohesion within the movement.

Nobody who has left EA has done so with a loud public bang. People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree.

At the same time, when someone joins EA, they'll pull some of their social circle after them.

Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals.

We are talking about communications between people within EA and people outside EA. I don't recognize a clear connection between these issues.

Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking.

Sure, but I don't think that people with credible but slightly different views of ethics and decision theory ought to be excluded. I'm not so close-minded that I think that anyone who isn't a thorough expected value maximizer ought not to be in our community.

It happens when we moral-trade away too much of our actual moral goals.

Moral trades are Pareto improvements, not compromises.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach.

But we are not exploiting them in any way. Exploitation involves manipulation and deception. I am in no way saying that we should lie about what EA stands for. Someone who finds out that they actually don't care about EA will realize that they simply didn't know enough about it before joining, which doesn't cause anyone to feel exploited.

Overall, you seem to be really worried about people criticizing EA, something which only a tiny fraction of people who leave will do to a significant extent. This pales in comparison to actual contributions which people make - something which every EA does. You'll have to believe that verbally criticizing EA is more significant than the contributions of many, perhaps dozens, of people actually being in EA. This is odd.

So I should’ve clarified, also in the interest of cooperation, I care indefinitely more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

So I'll tell you how to make the people who are reading this thread respect us.

Imagine that you come across a communist forum and someone posts a thread saying "why I no longer identify as a Marxist." This person says that they don't like how Marxists don't pay attention to economic research and they don't like how they are so hostile to liberal democrats, or something of the sort.

Option A: the regulars of the forum respond as follows. They say that they actually have tons of economic research on their side, and they cite a bunch of studies from heterodox economists who have written papers supporting their claims. They point out the flaws and shallowness in mainstream economists' attacks on their beliefs. They show empirical evidence of successful central planning in Cuba or the Soviet Union or other countries. Then they say that they're friends with plenty of liberal democrats, and point out that they never ban them from their forum. They point out that the only times they downvote and ignore liberal democrats is when they're repeating debunked old arguments, but they give examples of times they have engaged seriously with liberal democrats who have interesting ideas. And so on. Then they conclude by telling the person posting that their reasons for leaving don't make any sense, because people who respect economic literature or want to get along with liberal democrats ought to fit in just fine on this forum.

Option B: the regulars on the forum apologize for not making it abundantly clear that their community is not suited for anyone who respects academic economic research. They affirm the OP's claim that anyone who wants to get along with liberal democrats is not welcome and should just stay away. They express deep regret at the minutes and hours of their intellectual opponents' time that they wasted by inviting them to engage with their ideas. They put up statements and notices on the website explaining all the quirks of the community which might piss people off, and then suggest that anyone who is bothered by those things could save time if they stayed away.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions of the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy  (EA Profile) 22 February 2017 09:33:32AM 1 point [-]

I wanted to pose a question (that I found plausible), and now you’ve understood what I was asking, so my work here is pretty much done.

But I can also, for a moment longer, stay in my role and argue for the other side, because I think there are a few more good arguments to be made.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions of the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

It’s true that I hadn’t considered the “online charisma” of the situation, but I don’t feel like Option B is what I’d like to argue for. Neither is Option A.

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked an FAQ entry that only fits their case so-so.

It’s really just the basic Eternal September Effect that I’m describing, part of what Durkheim described as anomie.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Also what Fluttershy said. If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case. My automatic reaction to your objections is, “Oh, I must’ve been wrong!” then “Well, good thing I didn’t state my opinion strongly. That’d be embarrassing,” and only after some deliberation I’ll remember that I had already considered many of these objections and gradually update back in the direction of my previous hypothesis. My opinions are quite unusually fluid.

Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

Other social movements end up like feminism, with oppositions and toxoplasma. Successful social movements don’t happen by default by not worrying about these sorts of dynamics, or I don’t think they do. That doesn’t mean that my stab at a partial solution goes in the correct direction, but it currently seems to me like an improvement.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

Let’s exclude the last example or it’ll get recursive. How would you realize that? I’ve been a lurker in a very authoritarian forum for a while. They had some rules and the core users trusted the authorities to interpret them justly. Someone got banned every other week or so, but they were also somewhat secretive, never advertised the forum to more than one specific person at a time and only when they knew the person well enough to tell that they’d be a good fit for the forum. The core users all loved the forum as a place where they could safely express themselves.

I would’ve probably done great there, but the authoritarian thing scared me on a System 1 level. The latter point (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

The lying problem thing is a case in point. She didn’t identify with the movement, just picked out some quotes, invented a story around them, and later took most of it back. Why even write about a community she doesn’t feel part of? If most of her friends had been into badminton and she didn’t like it, she wouldn’t have caused a stir in the badminton community by accusing it of having a lying or cheating problem or something. She would’ve tried it for a few hours and then largely ignored it, not needing to make up any excuse for disliking it.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

Moral trades are Pareto improvements, not compromises.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Nobody who has left EA has done so with a loud public bang.

I counted this post as a loud, public bang.

People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

I don’t think so, or at least not when it’s put in less extreme terms. I’d love to get input on this from an expert in social movements or organizational culture at companies.

Consultancy firms are known for their high churn rates, but that seems like an exception to me. Otherwise, high onboarding costs (which we definitely have in EA), a gradual lowering of standards, minimization of communication overhead, and surely many other factors drive a lot of companies toward hiring with high precision and low recall rather than the other way around, and then investing heavily in retaining the good employees they have. (Someone at Google, for example, said “The number one thing was to have an incredibly high bar for talent and never compromise.” They don’t want to get lots of people in, get them up to speed, hope they’ll contribute something, and lose most of them again after a year. They would rather grow more slowly than get diluted like that.)

We probably can’t interview and reject people who are interested in EA, so the closest thing we can do is to help them decide as well as possible whether it’s really what they want to become part of long-term.

I don’t think this sort of thing, from Google or from EAs, would come off as pathetic.

But again, this is the sort of thing where I would love to ask an expert like Laszlo Bock for advice rather than trying to piece together some consistent narrative from a couple books and interviews. I’m really a big fan of just asking experts.

In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog  (EA Profile) 20 February 2017 09:34:20PM *  1 point [-]

it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement

How so?

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

It would be a problem if we loosened our standards and weakened the movement to accommodate them. But I don't see what's harmful about someone thinking that EA is for them, exploring it and then later deciding otherwise.

and for their own sake.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy  (EA Profile) 21 February 2017 09:20:04AM *  3 points [-]

There be dragons! Dragons with headaches!

I think the discussion that has emerged here is about an orthogonal point from the one I wanted to make.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

A year ago I would’ve simply agreed or said the same thing, and there would’ve been no second level to my decision process, but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

So I certainly agree with you that I care incomparably more about reducing suffering than about pandering to some privileged person’s divergent moral goals, but here are some more things I currently believe:

  1. The EA movement has a huge potential to reduce suffering (and further related moral goals).
  2. All the effort we put into strengthening the movement will fall far short of its potential if the movement degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, or any other of various failure modes.
  3. People losing interest in EA, or even leaving with a loud, public bang, are really, really bad for cohesion within the movement.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree. When someone leaves with a loud, public bang, they’ll likely pull even more people after them.

If I may, for the moment, redefine “self-interested” to include the “self-interested” pursuit of altruistic goals at the expense of other people’s (selfish and nonselfish) goals, then such a “self-interested” approach will run us into several of the walls or failure modes above:

  1. Lethargy will ensue when enough people publicly and privately drop out of the movement that those who remain are disillusioned, pessimistic, and unmotivated. They may come to feel like the EA project has failed or is about to, and so don’t want to invest in it anymore. Maybe they’ll rather join some adjacent movement or an object-level organization, but the potential of the consolidated EA movement will be lost.
  2. Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals. I increasingly get the feeling that the whole talk about ACE being overconfident is just a meme perpetuated by people who haven’t been following ACE or animal advocacy closely.
  3. Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking. It happens when we moral-trade away too much of our actual moral goals.
  4. But if we trade away too little, we’ll create enemies, resulting in more and more zero-sum fights with groups with other moral goals.

The failure modes most relevant to this post are lethargy and zero-sum fights:

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach. They’ll further my moral goal of reducing suffering for the time they’re around, but if they’re, e.g., a Kantian, they’ll afterwards feel instrumentalized and become a more or less vocal opponent. That’s probably more costly for us than whatever they may have contributed along the way, unless their contribution was as trajectory-changing as I think movement building (or movement destroying) can be.

So I should’ve clarified, also in the interest of cooperation: I care incomparably more about reducing suffering than about pandering to the divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected, or they’ll cause us great costs in suffering we’ll fail to reduce.

In response to Why I left EA
Comment author: Telofy  (EA Profile) 20 February 2017 07:22:42PM 1 point [-]

Should we maybe take this as a sign that EA needs to become more like Aspirin, or many other types of medicine? I just checked an Aspirin leaflet, and it clearly stated exactly what Aspirin is for. The common “doing the most good” slogan kind of falls short of that.

The definition from the FAQ is better, especially in combination with the additional clarifications below on the page:

Effective altruism is using evidence and analysis to take actions that help others as much as possible.

We’ve focused a lot on finding (with high recall) all the value aligned people who find EA to be exactly the thing they’ve been looking for all their lives, but just like with medicine, it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement and for their own sake.

Aspirin may be a good example because it’s not known for any terrible side effects, but if someone takes it for some unrelated ailment, they’ll be disillusioned and angry about their investment.

Do we need to be more clear not only about who EA is for but also who EA is probably not for?

In response to comment by Telofy  (EA Profile) on Anonymous EA comments
Comment author: Ben_Todd 09 February 2017 10:42:18AM 6 points [-]

My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").

Comment author: Telofy  (EA Profile) 10 February 2017 04:01:39PM 0 points [-]

Ah, cool! I should read it.
