Comment author: DavidMoss 05 May 2018 10:42:40PM 2 points

Comment mostly copied from Facebook:

I think most will agree that it's not advisable to simply try to persuade as many people as possible. That said, given the widespread recognition that poor or inept messaging can put people off EA ideas, the question of persuasion doesn't seem to be one that we can entirely put aside.

A couple of questions (among others) are relevant to how far we should merely offer and not try to persuade: how many people we think will be initially (genuinely) interested in EA, and how many we think would be potentially (genuinely) interested were it suitably presented.

A very pessimistic view across these questions is that very few people are inclined to be interested in EA initially and very few would be interested after persuasion (e.g. because EA is a weird idea compelling only to a minority who are weird on a number of dimensions, and most people are highly averse to its core demands). On this view, offering and not trying to persuade seems appealing, because few will be interested, persuasion won't help, and all you can do is hope some of the well-inclined minority will hear your message.

If you think very few will be initially inclined but (relatively) many more would be inclined with suitable persuasion (e.g. because EA ideas are very counter-intuitive, inclined to sound very off-putting, but can be appealing if framed adroitly), then the opposite conclusion follows: it seems like persuasion is high value (indeed a necessity).

Conversely, if you take a more optimistic view (many people intuitively like EA: it's just "doing the most good you can do + good evidence!"), then persuasion looks less important (unless you also think that persuasion can bring many additional gains even above that already-high baseline of EA-acceptance).

-

Another big distinction, which I assume is perhaps motivating the "offer, don't persuade" prescription, is whether people think that persuasion tends to influence the quality of those counterfactual recruits negatively, neutrally or positively. The negative view might be motivated by thinking that persuading people who wouldn't otherwise have liked EA's offer (especially via dubious representations of EA) will disproportionately bring in people who don't really accept EA. The neutral view might be motivated by positing that many people are turned off (or attracted to) EA by considerations orthogonal to actual EA content (e.g. nuances of framing, or whether they instinctively, non-rationally like/dislike things EA happens to be associated with, such as sci-fi). The positive view might be motivated by thinking that certain groups are disproportionately turned off by unpersuasive messages (e.g. women and minorities do not find EA attractive, but would with more carefully crafted, symbolically non-off-putting outreach), and thinking that attracting more of these groups would be epistemically salutary for some reason.

-

Another major consideration is simply how many EAs we presently have relative to desired numbers. If we think we have plenty (or even more than we can train/onboard), then working to persuade/attract more people seems unappealing; if we highly value having more people, the converse holds. I think it's very reasonable for us to switch priorities between trying to attract more people and not doing so, depending on present needs. I'm somewhat concerned, though, that perceived present needs get reflected in a kind of 'folk EA wisdom': when we lack(ed) people, the general idea that movement building is many times more effective than most direct work was popularised, whereas now that we have more people (for certain needs), the general idea that 'quality trumps quantity' gets popularised. I'm worried that these very general memes aren't especially sensitive to actual supply/demand/needs and would be hard/slow to update if needs were different. This also becomes very tricky when different groups have different needs/shortages.

Comment author: Evan_Gaensbauer 24 April 2018 06:57:06PM 1 point

What's your impression of how positively correlated close social ties are with staying altruistic among those individuals you surveyed?

Comment author: DavidMoss 03 May 2018 08:12:59PM 7 points

My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.

Cutting the other way, however, people who are brought into EA by such social effects (e.g. because they were hanging around friends who were EA, so they became involved too, rather than in virtue of (always having had) intrinsic EA belief and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of the cases of value drift I've observed.

When I was systematically interviewing EAs for a research project, this distinction between social-network EAs and always-intrinsic EAs was one of the clearest and most important that arose. One might imagine that social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but actually the tendency was roughly the reverse. The social-network EAs were often very centrally positioned in senior staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.

Comment author: Yannick_Muehlhaeuser 02 May 2018 12:33:27PM 6 points

If I could only recommend one book to someone, should I recommend this or Doing Good Better? I'm not really sure. What do you think?

Comment author: DavidMoss 03 May 2018 01:53:50AM 0 points

Or Singer's The Most Good You Can Do.

Comment author: DavidMoss 23 April 2018 03:06:39AM * 13 points

This also fits my experience.

A few other implications if value drift is more of a concern:

  • Movement growth looks a lot less impactful in absolute terms.
  • It's an open question whether this means we should focus our movement-building efforts on a minority of people particularly likely to stay engaged, or expand our numbers more to offset attrition (depending on the details of your model; see the toy sketch below).
  • Selecting primarily for high skill/potential-impact levels may be a mistake, as a person who is among the very highest skill level, but who decides to do something else, may well produce zero impact.

There are, no doubt, more implications.
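To make the second point concrete, here is a minimal sketch (in Python, with purely illustrative numbers of my own invention, not anything from the data discussed here) of how the answer depends on your model: a fixed cohort is recruited each year, and each member independently stays for another year with some retention probability.

```python
# Toy model with illustrative numbers: compare a narrow, high-retention
# recruitment strategy against a broad, high-attrition one.

def members_after(years, recruits_per_year, retention_rate):
    """Expected members remaining after `years`, recruiting a fixed cohort
    each year, with each member retained at `retention_rate` per year."""
    return sum(recruits_per_year * retention_rate ** age for age in range(years))

# Strategy A: recruit 100/year with 90% annual retention (low value drift).
# Strategy B: recruit 300/year with 60% annual retention (high value drift).
print(members_after(10, recruits_per_year=100, retention_rate=0.9))  # ~651
print(members_after(10, recruits_per_year=300, retention_rate=0.6))  # ~745
```

With these particular numbers the broad strategy narrowly wins on headcount, but dropping its retention from 0.6 to 0.5 flips the conclusion, which is why the question above is genuinely open rather than settled either way.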
Comment author: Denkenberger 03 April 2018 01:38:13AM 0 points

Maybe "full time EAs"?

Comment author: DavidMoss 03 April 2018 03:17:20AM 0 points

I think someone suggested this in previous discussions about what euphemism we could use for extreme/hardcore EAs. The problem here is that one can be a full-time EA without being an insider, and one can be an insider without being full time.

Comment author: cassidynelson 16 March 2018 01:05:55AM 0 points

I agree; I found it surprising as well that he has taken this view. It seems he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than other existential threats, but I wonder if this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.

Great suggestion about Sam Harris. I think he and Steven Pinker had a live chat just the other day (March 14), so this opportunity may have been missed. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions, I wonder if he questions Pinker on this as well.

Comment author: DavidMoss 16 March 2018 01:57:08AM 3 points

I think part of the problem is that he publicly expressed a very dismissive stance towards AI/x-risk positions, seemingly before he'd read anything about them. Now that people have pushed back and pointed out his obvious errors, he's had to at least somewhat read up on what the positions are, but he doesn't want to backtrack at all from his previous statement of extreme dismissiveness.

Comment author: Risto_Uuk 12 March 2018 11:00:58AM 1 point

Do you offer any recommendations for communicating utilitarian ideas based on Everett's research or someone else's?

For example, in Everett's 2016 paper the following is said:

"When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist."

Comment author: DavidMoss 12 March 2018 09:13:30PM * 1 point

I imagine more or less anything which expresses conflictedness about taking the 'utilitarian' decision and/or expresses feeling the pull of the contrary deontological norm would fit the bill for what Everett is saying here. That said, I'm not convinced that Everett (2016) is really getting at reactions to "consequentialism" (see here: 1, 2).

I think this paper by Uhlmann et al. does show, though, that people judge negatively those who take utilitarian decisions, even when they judge that the utilitarian act was the right one to take. Expressing conflictedness about the utilitarian decision may therefore be a double-edged sword. It may well offset negative character evaluations of the person taking the utilitarian decision, but it may plausibly also reduce whatever credence people attached to the utilitarian act being the right one to take.

My collaborators and I did some work relevant to this, on the negative evaluation of people who make their donation decisions in a deliberative rather than explicitly empathic way. The most relevant of our experiments looked at the evaluation of people who both deliberated about the cost-effectiveness of the donation and simultaneously expressed empathy towards its recipient. The empathy+deliberation condition was close to the empathy condition in moral evaluation (see Figure 2: https://osf.io/d9t4n/) and closer to the deliberation condition in evaluations of reasonableness.

Comment author: Risto_Uuk 11 March 2018 03:45:43PM 2 points

I think this depends on how we define mass outreach. I would consider a lot of the activities organized in the EA community to be mass outreach: for example, EAG, books, articles in popular media outlets, FB posts in the EA group, the 80,000 Hours podcast, etc. They are mass outreach because they reach a lot of people and very often don't enable in-depth engagement. Exceptions would be career coaching sessions at EAG events and discussing books/articles in discussion groups.

Comment author: DavidMoss 11 March 2018 04:45:09PM 3 points

I agree re. books and articles in the mass media, and these are the kinds of things it seems people are stepping back from.

I think of the EA FB group, EAG, etc. as more insider-facing than outreach (EAG used to seem more about general outreach, but not anymore).

The 80K podcast is an interesting middle case: it's clearly broadcast generally, but I imagine the actual audience is pretty niche, and it's a lot more in-depth than any media article or even Doing Good Better (in my view). I have to wonder how far, in a couple of years, people will be saying of the podcast the same things now being said of DGB, i.e. that the content is sub-optimal or out of date and so we wouldn't want to spread it around. The same considerations seem, in principle, to apply in both cases: even if people within the English-speaking world are already locked into some bad ideas, we don't want to continue locking them into new ideas which we will judge in 2020 to have been premature/mistaken/sub-optimal.

Comment author: Risto_Uuk 11 March 2018 12:37:50PM 3 points

Thank you for the post!

I agree that from the point of view of translation Doing Good Better might be too focused on donating to charity and on global health, but this doesn't seem to be an issue at all when it comes to small in-depth discussion groups. I guess this is another argument in favor of focusing on these types of activities rather than large-scale outreach.

Comment author: DavidMoss 11 March 2018 03:01:11PM 2 points

I'm curious how much mass outreach there actually is in EA and/or what people have in mind when they talk about mass outreach.

Aside from Doing Good Better and Will/CEA's other public intellectual work, which they seem to be retreating from, it's not clear to me what mass outreach has actually been done.

Comment author: Ben_Todd 07 March 2018 10:39:30PM 3 points

Some concerning data in this recent post about local groups: http://effective-altruism.com/ea/1l7/2017_lean_impact_assessment_evaluation_strategic/

One other striking feature of this category is that all of the top groups [in terms of new event attendees] were from non-Anglo-American countries. While this is purely speculative, an explanation for this pattern might be that these groups are aggressively reaching out to people unfamiliar with EA in their areas, getting them to attend events, but largely not seeing success in transferring this into increased group membership.

Comment author: DavidMoss 08 March 2018 07:33:42PM * 4 points

Thanks for the citation!

We agree this doesn't look good for (non-Anglo-American) groups running large events as outreach, in that doing so doesn't appear to bear fruit in terms of increased membership or other outputs. But note the rest of the paragraph you quote, where we say:

it seems plausible that EA groups outside of the traditional geographical areas may face distinct challenges and require more tailored support (such as translation of materials).

One possible explanation for the observation above is that these groups' large events in non-Anglo-American countries don't bear much fruit because they lack the supporting background and infrastructure (like translated materials). So, for example, someone who attends a large event in London can easily and immediately check out lots of EA websites and materials, find places to follow up, and so on; someone who attends a large event in a country without these touch-points and critical mass cannot. Of course, it may also just be that these areas were less fertile ground for the EA message in some other way.

It's also important to note that it's not clear from the data we provided that non-Anglo-American groups distinctively receive low payoffs from large events. If you look at the specific graph, you'll see that these groups are pretty clear outliers, reporting events with many more people who are unfamiliar with effective altruism. But it's not that Anglo-American groups are running large events with lots of people unfamiliar with EA and receiving comparatively larger payoffs; rather, most other groups are simply not running such large events with so many unfamiliar attendees. So what is distinctive about these groups is that they are running large events with lots of unfamiliar attendees at all, not that they are running these large events and failing to receive a payoff.
