Comment author: lukefreeman  (EA Profile) 05 July 2018 10:48:51PM 2 points [-]

Thanks for sharing!

The consistent item on my budget that I find we struggle to keep low is eating out, largely because that's the only way we see a lot of people (close friends and family). I wrote about this in my blog post when I limited my food spending to <$2 a day for a month: food is social, and that's a tough thing.

Also, I read your other post about setting your salary based on the world's average GDP and was shocked to see rent that low – rent is at minimum 5x that much in Sydney 😳

Comment author: DavidMoss 06 July 2018 06:09:08PM 2 points [-]

Fwiw, rent does seem a bit more expensive in Sydney than in Vancouver (https://www.numbeo.com/cost-of-living/compare_cities.jsp?country1=Canada&city1=Vancouver&country2=Australia&city2=Sydney), though only by about 27%, so still in the same ballpark. It looks pretty similar comparing listings on Craigslist too (I'm also a Vancouverite). I acknowledge you'd need to pay substantially more than 220 USD per person if you weren't splitting costs with a partner or room-mate.

Comment author: Jamie_Harris 03 July 2018 07:52:48AM *  0 points [-]

Finding this list very useful personally, so thanks for that!

I am also planning to use it to help create a discussion group on the topic. There is a shorter version from EAF, but here are the questions I'm thinking of discussing:

  1. Suffering:
     a. Are we convinced that K-selecting species have net negative lives?
     b. Are we convinced that r-selecting species have net negative lives?
     c. Are we convinced that wild animals, in total, have net negative lives?

  2. Prioritisation: How far should we prioritise work on reducing wild animal suffering (WAS) compared to:
     a. farm animal advocacy?
     b. other EA cause areas?

  3. Potential crucial considerations:
     a. Should we still focus on WAS if we believed that animals had net positive lives?
        i. Would this change how far we prioritise the cause over other cause areas?
        ii. What effect would this have on the actual interventions that we favoured?
     b. Should we still focus on WAS if a scientific consensus emerged that most insects were not sentient (but other r-selecting species, like many fish, small mammals, birds etc., still were)?
        i. Would this change how far we prioritise the cause over other cause areas?
        ii. What effect would this have on the actual interventions that we favoured?

  4. How far should we prioritise each of the following steps:
     a. Spreading anti-speciesism and concern for all sentient beings, including those living in the wild.
     b. Raising awareness of the very bad situation wild animals are in, and spreading the view that we should be prepared to intervene to aid them.
     c. Doing research on the situation these animals are in and the ways in which the harms they suffer can be reduced, rather than increased.
     d. Supporting those interventions in nature that are feasible today and presenting them as examples of what could be done for the good of animals in the wild at a bigger scale.

  5. What are the different actions and interventions we can take for each of the above steps:
     a. As a community?
     b. As individuals?

  6. Can we come up with a single personal goal to do more in this space? Can we create a timeframe, feedback loop or commitment device for it?

Comment author: DavidMoss 03 July 2018 03:14:05PM 0 points [-]

Is this from EAF, not LEAN?

Comment author: DavidMoss 05 May 2018 10:42:40PM 2 points [-]

Comment mostly copied from Facebook:

I think most will agree that it's not advisable to simply try to persuade as many people as possible. That said, given the widespread recognition that poor or inept messaging can put people off EA ideas, the question of persuasion doesn't seem to be one that we can entirely put aside.

A couple of questions (among others) will be relevant to how far we should merely offer and not try to persuade: how many people we think will be initially (genuinely) interested in EA and how many people we think would be potentially (genuinely) interested in EA were it suitably presented.

A very pessimistic view across these questions is that very few people are inclined to be interested in EA initially and very few would be interested after persuasion (e.g. because EA is a weird idea compelling only to a minority who are weird on a number of dimensions, and most people are highly averse to its core demands). On this view, offering and not trying to persuade seems appealing, because few will be interested, persuasion won't help, and all you can do is hope some of the well-inclined minority will hear your message.

If you think very few will be initially inclined but (relatively) many more would be inclined with suitable persuasion (e.g. because EA ideas are very counter-intuitive and tend to sound very off-putting, but can be appealing if framed adroitly), then the opposite conclusion follows: persuasion seems high value (indeed a necessity).

Conversely, if you are more optimistic (many people intuitively like EA: it's just "doing the most good you can do + good evidence!"), then persuasion looks less important (unless you also think that persuasion can bring many additional gains even above the already high baseline of EA acceptance).

-

Another big distinction, which I assume is perhaps motivating the "offer, don't persuade" prescription, is whether people think that persuasion tends to influence the quality of those counterfactual recruits negatively, neutrally or positively. The negative view might be motivated by thinking that persuading people (especially via dubious representations of EA) who wouldn't otherwise have liked EA's offer will disproportionately bring in people who don't really accept EA. The neutral view might be motivated by positing that many people are turned off (or attracted to) EA by considerations orthogonal to actual EA content (e.g. nuances of framing, or whether they instinctively, non-rationally like/dislike things EA happens to be associated with (e.g. sci-fi)). The positive view might be motivated by thinking that certain groups are turned off, disproportionately, by unpersuasive messages (e.g. women and minorities do not find EA attractive, but would with more carefully crafted, symbolically not off-putting outreach), and thinking that getting more of these groups would be epistemically salutary for some reason.

-

Another major consideration would simply be how many EAs we presently have relative to desired numbers. If we think we have plenty (or even more than we can train/onboard), then working to persuade/attract more people seems unappealing, and if we highly value having more people, then the converse. I think it's very reasonable that we switch our priorities between trying to attract more people and not doing so, depending on present needs. I'm somewhat concerned that perceived present needs get reflected in a kind of 'folk EA wisdom': when we lack(ed) people, the general idea that movement building is many times more effective than most direct work was popularised, whereas now that we have more people (for certain needs), the general idea that 'quality trumps quantity' gets popularised. But I'm worried these very general memes aren't especially sensitive to actual supply/demand/needs and would be hard/slow to update if needs were different. This also becomes very tricky when different groups have different needs/shortages.

Comment author: Evan_Gaensbauer 24 April 2018 06:57:06PM 1 point [-]

What's your impression of how positively correlated close social ties are with staying altruistic among those individuals you surveyed?

Comment author: DavidMoss 03 May 2018 08:12:59PM 8 points [-]

My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.

Cutting the other way, however, people who are brought into EA by such social effects (e.g. because they were hanging around friends who were EA, and so became involved in EA too, rather than in virtue of having always had intrinsic EA beliefs and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of the cases of value drift I've observed.

When I was systematically interviewing EAs for a research project, this distinction between social-network EAs and always-intrinsic EAs was one of the clearest and most important distinctions that arose. I think one might imagine that social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but actually the tendency was roughly the reverse. The social-network EAs were often very centrally positioned in higher staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.

Comment author: Yannick_Muehlhaeuser 02 May 2018 12:33:27PM 6 points [-]

If I could only recommend one book to someone, should I recommend this or Doing Good Better? I'm not really sure. What do you think?

Comment author: DavidMoss 03 May 2018 01:53:50AM 0 points [-]

Or Singer's The Most Good You Can Do.

Comment author: DavidMoss 23 April 2018 03:06:39AM *  13 points [-]

This also fits my experience.

A few other implications if value drift is more of a concern:

  • Movement growth looks a lot less impactful in absolute terms.
  • It's an open question whether this means we should focus our movement-building efforts on a minority of people particularly likely to stay engaged, or expand our numbers more to offset attrition (depending on the details of your model).
  • Selecting primarily for high skill/potential-impact levels may be a mistake, as a person at the very highest skill level who decides to do something else may well produce zero impact.

There are, no doubt, more implications.

Comment author: Denkenberger 03 April 2018 01:38:13AM 0 points [-]

Maybe "full time EAs?"

Comment author: DavidMoss 03 April 2018 03:17:20AM 0 points [-]

I think someone suggested this in previous discussions about what euphemism we could use for extreme/hardcore EAs. The problem here is that one can be a full-time EA without being an insider, and one can be an insider without being full-time.

Comment author: cassidynelson 16 March 2018 01:05:55AM 1 point [-]

I agree, I found it surprising as well that he has taken this view. It seems like he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence and has become familiar with the general arguments and prominent examples, but then has gone on to dismiss existential threats for reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than other existential threats, but I wonder if this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.

Great suggestion about Sam Harris – I think he and Steven Pinker had a live chat just the other day (March 14), so this opportunity may have been missed. I'm still waiting for the audio to be uploaded to Sam's podcast, but I wonder, given Sam's positions, whether he questions Pinker on this as well.

Comment author: DavidMoss 16 March 2018 01:57:08AM 3 points [-]

I think part of the problem is that he publicly expressed a very dismissive stance towards AI/x-risk positions, seemingly before he'd read anything about them. Now that people have pushed back and pointed out his obvious errors, he's had to at least somewhat read up on what the positions are, but he doesn't want to backtrack at all from his previous statement of extreme dismissiveness.

Comment author: Risto_Uuk 12 March 2018 11:00:58AM 1 point [-]

Do you offer any recommendations for communicating utilitarian ideas based on Everett's research or someone else's?

For example, in Everett's 2016 paper the following is said:

"When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist."

Comment author: DavidMoss 12 March 2018 09:13:30PM *  1 point [-]

I imagine more or less anything which expresses conflictedness about taking the 'utilitarian' decision and/or expresses feeling the pull of the contrary deontological norm would fit the bill for what Everett is saying here. That said, I'm not convinced that Everett (2016) is really getting at reactions to "consequentialism" (see 1, 2).

I think that this paper by Uhlmann et al. does show that people judge negatively those who make utilitarian decisions, though, even when they judge that the utilitarian act was the right one to take. Expressing conflictedness about the utilitarian decision may therefore be a double-edged sword. I think it may well offset negative character evaluations of the person taking the utilitarian decision, but plausibly it may also reduce any credence people attach to the utilitarian act being the right one to take.

My collaborators and I did some work relevant to this, on the negative evaluation of people who make their donation decisions in a deliberative rather than explicitly empathic way. The most relevant of our experiments looked at the evaluation of people who both deliberated about the cost-effectiveness of the donation and expressed empathy towards the recipient of the donation simultaneously. The empathy+deliberation condition was close to the empathy condition in moral evaluation (see figure 2: https://osf.io/d9t4n/) and closer to the deliberation condition in evaluation of reasonableness.

Comment author: Risto_Uuk 11 March 2018 03:45:43PM 2 points [-]

I think this depends on how we define mass outreach. I would consider a lot of activities organized in the EA community to be mass outreach: for example, EAG, books, articles in popular media outlets, FB posts in the EA group, the 80,000 Hours podcast, etc. They are mass outreach because they reach a lot of people and very often don't enable in-depth work. Exceptions would be career coaching sessions at an EAG event and discussing books/articles in discussion groups.

Comment author: DavidMoss 11 March 2018 04:45:09PM 3 points [-]

I agree re. books and articles in the mass media – and these are the kinds of things it seems people are stepping back from.

I think of the EA FB group, EAG, etc. as more insider-facing than outreach (EAG used to seem to be more about general outreach, but not anymore).

The 80K podcast is an interesting middle case, since it's clearly broadcast generally, but I imagine the actual audience is pretty niche and it's a lot more in-depth than any media article or even Doing Good Better (in my view). I have to wonder how far, in a couple of years, people will be saying of the podcast the same things now being said of DGB, i.e. that the content is sub-optimal or out of date and so we wouldn't want to spread it around. The same considerations seem, in principle, to apply in both cases: even if people within the English-speaking world are already locked into some bad ideas, we don't want to continue locking them into new ideas which we will judge in 2020 to have been premature/mistaken/sub-optimal.
