The most popular posts on this forum make arguments along the lines of:

  • People should be friendly and get along.
  • We should be a broad-based and inclusive movement.
  • We haven't yet figured out the answers to lots of questions.
  • EA is a research question, not a strong set of demands on people.
  • We should be highly supportive of one another.
  • We should limit how weirdly we behave.
Two reasons these articles get written and become popular are:
  • They make the author look virtuous (e.g. modest, kind, reliable).
  • The arguments flatter a large share of readers.
I wrote two highly up-voted pieces along these lines and naturally it feels great to have people piling on in agreement.

Consider alternative arguments that don't get many people writing or voting in support, and indeed are liable to be condemned:
  • We should demand a lot of people - more than most are already doing, or would be willing to do.
  • We should be critical of one another in order to push one another to work harder and better.
  • We should be fine with being weird because that's the only way to find the most unreasonably neglected projects.
  • We already have much better ideas than the rest of society on a wide range of issues.
  • We shouldn't collaborate as much as we do because the hassle involved is too great.
  • It's good to get angry and be combative with people.
These ideas make the author look like a jerk or radical, and in some cases make readers feel worse about themselves.

This makes them unlikely to attract much support, even if someone volunteers to defend them.

This is a good thing. It's not wise to write things that make you look like a jerk, and make readers feel bad about themselves, unless there are particularly pressing reasons to do so. This isn't a recommendation to write publicly in defence of the above - if you're tempted to do so, meditate on the virtue of silence.

At the same time, privately we should acknowledge that the personal costs involved in publicly supporting unappealing positions mean we may not be aware of, or receptive to, the best arguments in their favour. If you can't think of any supporters or considerations in favour of an unappealing position, worry that you aren't sampling fairly from both sides of the argument.

(For what it's worth, because it is sure to come up, I don't think any of the claims above are plausible - two of them I strongly disagree with, three of them seem very unlikely and one of them merely unlikely. Like everyone, I do have beliefs that might offend others and make me look bad, but those are precisely the ones I wouldn't include here.)

Comments

Playing devil's advocate for a moment, it feels like in the limit this sort of "none dare urge restraint" dynamic may lead to EA getting watered down to the point where it's no longer substantially different from mainstream altruism. I'd expect that mainstream altruism is already pretty well optimized to make its practitioners look like good people to the mainstream. If there's an incentive structure within EA to make EAs look like good people to the mainstream (by cutting out weird causes, suppressing critical discussion, etc.), and there aren't countervailing incentives, where exactly do we think this trend is going to stop?

it feels like in the limit this sort of "none dare urge restraint" dynamic may lead to EA getting watered down to the point where it's no longer substantially different from mainstream altruism.

I hear this argument a lot, but this scenario seems to me extremely unlikely. Effective Altruism is currently very different from "mainstream" altruism. It has a very distinctive mindset, not least among its leading members. I don't see any tendency towards EA getting watered down to the point where it's not substantially different from mainstream altruism.

At the very least, I'd like to see more detailed arguments showing that such a scenario is likely.

What about the following simple argument? "If you look at many, many (most?) movements or organizations, you see mission creep or Goodharting."

Do you think there is anything that puts us in a different reference class?

I agree that lots of movements have changed target over time. But they haven't necessarily changed in the direction of dilution. Some have turned more extreme (cf. Stalinism, IS), others have become altogether different.

The EA movement is a highly intellectual and idealistic movement. My hunch is that such movements have normally run a higher risk of turning too extreme than of becoming too watered down. (I haven't conducted any detailed historical studies of these issues, but think such studies should be carried out.)

Hmm, from what I can tell from my analysis of movements so far (looking in particular at communism, the Enlightenment, the free-market economists, the women's rights movement and the climate change movement), I see dilution and value drift as among the main risks that reduced the effectiveness of these movements (according to the values of the founders).

Importantly, those do not tend to be "watered down" in a naive fashion. Instead, the core ideas slowly get replaced by ideas that are easier to understand and easier to spread (which can often also mean that they are more radical), and the intellectual integrity of the movement deteriorates as more and more members join who have not yet understood the arguments, or who have simply gone through a weaker filtering process.

I agree with xccf here that the direction in which I expect EA ideas to morph to become more self-propagating is towards existing charity. Though I can also imagine more extreme ideas being more adaptive in our current context (though I would assign a lower probability to that).

As a disclaimer: I am not yet satisfied with my understanding of past social movements, and this is very much a gut judgement. I hope that I will be able to make better and more rigorous arguments in the future (for any side).

Importantly, those do not tend to be "watered down" in a naive fashion. Instead, the core ideas slowly get replaced by ideas that are easier to understand and easier to spread (which can often also mean that they are more radical)

We seem to agree on this - that value drift is common, but that it could just as well lead to more radicalism as to less.

I agree with xccf here that the direction in which I expect EA ideas to morph to become more self-propagating is towards existing charity.

Why is that? I can't see that it follows from your preceding paragraph, where you implied that your historical studies indicate that value drift could just as well lead to more radicalism as to less.

Again, I think that the EA movement is a highly idealistic and intellectual movement, and that it should be compared with historical examples of such movements, rather than with social movements in general. It seems to me that such movements often face the risk of turning extreme and sect-like, whereas the risk of dilution is lower.

A salient example is that of late 19th- and early 20th-century socialism, with its split between pragmatic social democrats (frequently accused of dilution) and communists. The communists either failed to influence politics (in Western Europe, with some exceptions), because they were seen as too extreme, or became totally corrupted when they acquired power (in the Soviet Union and China). The moderate social democrats, on the other hand, managed to contribute to highly successful welfare states in Western Europe.

I also think that within any movement - but perhaps especially in strongly idealistic and intellectual movements - "extreme" ideas and behaviour that are rejected by the out-group can earn you credit within the in-group, even though they don't necessarily help the movement. I think this is what has happened to many socialist groups, and we have to look out for that.

This said, I also find this question very hard, and I don't have a complete understanding of past social movements or indeed of the EA movement's current state. I would like to see more research, made publicly available so that it can be critically discussed. This question is very important, so we need to get a better grasp of it as soon as possible. It would probably be good to have non-EA historians or sociologists look into it, since as outsiders they would have fewer axes to grind and be less prone to bias.

(Reviving a bit of an old thread, but just noticed this response in my inbox)

I think you make a good point here, and I think I might have underestimated the risk of EA becoming too radicalized. I will think about this more, and maybe try to do some concrete scenario planning on specific ways in which I can imagine the EA movement becoming too radical.

It's a really important thing to look out for and understand well, so I am very happy about your contribution. Thanks!

Thank you in turn. It's a really hard topic, and one we'll have to keep discussing.

The post arguing that EA should be "elitist" got lots of upvotes, even though it presumably belongs in the latter category.

I'd say it belongs in the former because it strongly "flatters a large share of readers". Namely, by saying they are better than most other people =P Of course, that's a controversial form of flattery, which is why the 79% upvote rate makes sense.

This does somewhat conflict with my theory, though 12 points and 83% positive is small relative to the most popular posts I'm referring to.

Another datapoint going against the theory is this post encouraging running fundraisers for weird charities.

Your own post about CS majors goes in the "weird" category for me and got plenty of upvotes.

I wouldn't expect my "running fundraisers for weird charities" post to be seen as weird by the standards of most EAs that I know, make me look like a jerk, or make any readers feel bad about themselves.

I think your post fits the point of "We should be fine with being weird because that's the only way to find the most unreasonably neglected projects."

I may be influenced by the Less Wrong side of things, but in my experience "they make the reader feel awful cognitive dissonance and make them question their views" is a factor that leads to a lot of upvotes. But maybe there are different types of updating: one that makes the person feel virtuous about updating, and one that makes them feel bad about having been wrong. If so, I haven't found any pattern behind them.

Btw, jerk opinions 1–3 seem defensible to me, and 4 with more modest phrasing, but I'd be really curious about any arguments someone could come up with in defense of 5 and 6. But Rob and other people who, I suspect, know a bunch of things I don't, seem to be interestingly careful about what gets published in this forum, so this is probably something we should rather chitchat about at our local meetup. I just read about how newspaper reports on suicides seem to actually cause suicides, so this virtue-of-silence thing is not to be taken lightly.

This is, I take it, an ad hominem argument:

Two reasons these articles get written and become popular are:

  • They make the author look virtuous (e.g. modest, kind, reliable).
  • The arguments flatter a large share of readers.

It says that there are non-rational reasons why these articles get written, and implies that this is a reason to adjust downwards the probability that the content of those articles is true.

Now, ad hominem arguments do have a place in debating, although they should be used cautiously. I want to emphasize, though, that similar ad hominem arguments can also be made against those who write the latter sort of posts. E.g., one might argue that they want to be contrarian, or that they want to be part of an exclusive club, or that they want to feel better than everyone else, etc.

Now, given the demographic make-up of the EA movement, it isn't obvious to me that the latter kind of ad hominem argument is less plausible than the former. The situation seems quite symmetric to me.

But I also want to caution against over-use of ad hominem arguments - not only in public, but also when you're thinking about this yourself. It is very easy to invent a straw-man caricature of your opponent: "they only have that view because (insert self-interested motivation)". This is a good example of that, from the "elitist" post:

This all said, the accusation of elitism, even if it's accurate, can feel hurtful. Nevertheless there is an important thought experiment to run: In the hypothetical world where elitism is in fact the best strategy for saving and improving the most lives (even after accounting for reputational risk), how many happy lives am I willing to sacrifice in order to not be accused of elitism?

This implies that those who oppose elitism do so because of their self-interest in not being hurt by accusations of elitism. I definitely don't believe that is universally true. Neither do I believe that all of those who advocate that EA should focus on recruiting elite members do so because they want to feel like they're part of an elite group. Instead, I believe that both groups have good reasons for their views, and that we should try to engage with them. (This is not to say that we don't fall prey to biases; we all do.) This debate needs more steel-manning.

People's stated views are often socially strategic. There's nothing wrong with noticing such biases as long as you apply the same lens to yourself as to others, which I do.

I think these are exactly the incentives that drive people to say things they would otherwise regard as harmful or wrong:

"One might argue that they want to be contrarian, or that they want to be part of an exclusive club, that they want to feel better than everyone else."

Nothing wrong with noticing such biases as long as you are aware they apply to you as much as to others (which I do in the post).

In practice, people are much better at spotting others' biases than their own.

Then get other people to tell you what they are. My point is that, in the above, I am not claiming other people are doing anything different from me.

Yes, I get that, but my point is more general. A general disadvantage of this way of discussing, via ad hominem arguments, is that people are unlikely to be able to use them in a fair and rational way: they will be too lenient towards themselves and too strict towards their opponents. Hence they should be used with caution (though they can and must be used to some extent).