CRS

Charlie Rogers-Smith

Comments

Nice. What you wrote accords with my experience. In my own case, my relationship to EA changed quite substantially--and in the way you describe--when I went from being very online to being part of a community.

Ah. In one sense, a core part of rationality is indeed rejecting beliefs you can't justify. Similarly, a core part of EA is thinking carefully about your impact. However, I think one claim you could make here is that naively and intensely optimising for these things will not actually win (e.g. lead to the formation of accurate beliefs; save the world). Specifically:

  • Rationality: forming accurate beliefs often requires a deep integration with your feelings--paying attention to a note of confusion, or to something you can't yet explain in rational terms. Indeed, imposing "rationality" constraints too early is sometimes harmful, because you tend to lose the information that can't immediately comply with those constraints. Another example is defying social norms because you cannot justify them, only to realise later that they served some important function.
  • EA: naive, intense optimisation for impact can lead to burnout and depression.

I also talk about my experiences with this here, in response to Howie's comment on my self-love post. 

> Here, I should probably stop and define toxic norms. I think a toxic norm is any rule where following it makes you feel like large parts of you are bad.

I talk about this a bit in a post I wrote recently on self-love. I think self-love is a much bigger deal than most people expect, and it can solve a lot of the problems discussed in this piece (on an individual level). 

> Most of it is just toxic social norms. These groups develop toxic social norms.

I don't know how much I buy that these are "norms". I think most people who have been around for a while would strongly disagree with the notion that one should never stop thinking about their impact, for example. Yet it seems that enough (young) people have some relationship to these "norms" that they're worth talking about.

I don't know that I agree with the mechanisms Sasha proposes, but I buy a lot of the observations they're meant to explain.

In particular, I don't think conferring status was a big part of the problem for me; it was more internal. The more dangerous thing was taking my beliefs really seriously: only allowing myself to care about saving the world, and ruthlessly (and very naively) optimising for that, even to the detriment of things like my mental health.

> and your words motivate me to do this more again.

Yay!

> but I think I would do well to internalize that spending time like this is well spent.

Maybe you can test it! It might not be better, but it might be really beneficial. Sometimes people need to escape into distractions, and sometimes it's nice to be with our pain, especially when we have the tools to comfort ourselves. Good luck!

Hi Richenda. Thanks for posting this; a discussion on the value of direct work is long overdue!

Two main things come to mind: one is a consideration about retaining people, and the other concerns the choice of comparison class.

Retaining people - I agree with you that losing people is bad. A key consideration is which people you want to retain most. In A Model of an EA Group, I claim that:

> Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.

Since groups are time-constrained, they can only put on a certain number of activities. All else equal, it seems we should favour retaining those who engage most with the key ideas of Effective Altruism. By prioritising direct work, we run the risk of losing people who would benefit greatly from, say, career planning sessions or 1-1 meetings. Even with the best people, actively moving them through the funnel is essential; if you trade that off against retaining people earlier in the funnel, it's very plausible that they will stagnate. Supporting those who are willing to do indirect and high-impact work is in fact supporting those who are willing to do the most good--the people we should most want in our community.

I think this is a particularly important consideration because all your conclusions about retaining people can be 'flipped' (in a quasi-crucial way (lol)) if you agree that retaining people far down the funnel is more important.

Choice of comparison class - Throughout the post, direct work is compared with some other activities. I'm not sure these other activities belong to the right comparison class. Quoting the post, some properties of these activities and groups:

  • activities that are "not directly beneficial to your life"
  • activities that are "mostly academic"
  • groups that are "largely focused around meeting weekly and discussing philosophical issues"
  • groups where "opportunities to actively apply EA are fairly limited"

I'm pretty much in agreement that if a group is doing these things, then direct work is probably an improvement. However, I don't think groups should be doing these things. The relevant comparison is with the best known community building activities that groups are able to do. Career planning sessions combat all of the above, and can (as an example) successfully act as a first line of defence against losing people who want to be more active.

Last thing - You mention opportunities that seek to

> empower talented and ambitious altruists to upskill and make strides towards the impact they are best suited to deliver in the long term

I really like this, and I'm fully on board with this type of direct work. A small concern is that opportunities like this might 'lock people into' careers that are disproportionately available to people in (maybe just student) groups. As an example, fundraising seems to be particularly easy to do, whereas getting experience in AI Safety as an undergrad is a fair bit harder, and maybe not even desirable.

Thanks again for the post!