Comment author: cscanlon 24 August 2018 01:36:19PM 2 points

What do you see as a better way of gathering data going forward?

Comment author: DavidMoss 24 August 2018 08:10:03PM 3 points

In the future, SHIC is going to place more weight on our impact on the understanding and trajectory changes of the smaller number of students who progress from the primary workshops (of the kind we described in this report) to our advanced workshops and individual coaching. Because we'll be working more closely with these students and discussing concrete actions (e.g. education and career decisions, pursuing volunteering opportunities with effective charities, and attending EA meetups and conferences), we hope to have much more reliable insight into whether we're actually producing valuable changes in their understanding and plans.

Comment author: DavidMoss 08 August 2018 02:14:47PM 2 points

Very interesting project! Have you engaged with the work of Rupert Read at all? He's a philosopher in the UK, a Green Party politician, a think tank chair, and a sometime critic of EA. He's written a paper (2012), building on the Hungarian ombudsman example, arguing for a body that would aim to act as a proxy for future generations, reviewing and potentially vetoing legislation deemed harmful to their interests.

Comment author: lukefreeman 05 July 2018 10:48:51PM 2 points

Thanks for sharing!

The one item in our budget that I find we consistently struggle to keep low is eating out, largely because that's the only way we see a lot of people (close friends and family). I wrote about this in my blog post when I limited my food spending to <$2 a day for a month: food is social, and that's a tough thing.

Also, I read your other post about setting your salary based on the world's average GDP and was shocked to see rent that low – rent is at minimum 5x that in Sydney 😳

Comment author: DavidMoss 06 July 2018 06:09:08PM 2 points

Fwiw, rent does seem a bit more expensive in Sydney than in Vancouver (https://www.numbeo.com/cost-of-living/compare_cities.jsp?country1=Canada&city1=Vancouver&country2=Australia&city2=Sydney), though only by about 27%, so still in the same ballpark. It looks pretty similar comparing on Craigslist too (I'm also a Vancouverite). I acknowledge you'd need to pay substantially more than USD 220 per person if you weren't splitting costs with a partner or roommate.

Comment author: Jamie_Harris 03 July 2018 07:52:48AM 0 points

Finding this list very useful personally, so thanks for that!

I am also planning to use it to help create a discussion group on the topic. There is a shorter version from EAF, but these are the questions I'm thinking of discussing:

  1. Suffering:
     a. Are we convinced that K-selected species have net negative lives?
     b. Are we convinced that r-selected species have net negative lives?
     c. Are we convinced that wild animals, in total, have net negative lives?

  2. Prioritisation: How far should we prioritise work on reducing WAS compared to:
     a. farm animal advocacy?
     b. other EA cause areas?

  3. Potential crucial considerations:
     a. Should we still focus on WAS if we believed that animals had net positive lives?
        i. Would this change how far we prioritise the cause over other cause areas?
        ii. What effect would this have on the actual interventions we favoured?
     b. Should we still focus on WAS if a scientific consensus emerged that most insects were not sentient (but other r-selected species, such as many fish, small mammals, and birds, still were)?
        i. Would this change how far we prioritise the cause over other cause areas?
        ii. What effect would this have on the actual interventions we favoured?

  4. How far should we prioritise each of the following steps:
     a. Spreading anti-speciesism and concern for all sentient beings, including those living in the wild.
     b. Raising awareness of the very bad situation wild animals are in, and spreading the view that we should be prepared to intervene to aid them.
     c. Doing research on the situation these animals are in and the ways in which the harms they suffer can be reduced, rather than increased.
     d. Supporting those interventions in nature that are feasible today, and presenting them as examples of what could be done for the good of animals in the wild at a bigger scale.

  5. What are the different actions and interventions we can take for each of the above steps:
     a. As a community?
     b. As individuals?

  6. Can we each come up with a single personal goal to do more in this space? Can we create a timeframe and a feedback loop or commitment device for it?

Comment author: DavidMoss 03 July 2018 03:14:05PM 0 points

Is this from EAF, not LEAN?

Comment author: DavidMoss 05 May 2018 10:42:40PM 2 points

Comment mostly copied from Facebook:

I think most will agree that it's not advisable to simply try to persuade as many people as possible. That said, given the widespread recognition that poor or inept messaging can put people off EA ideas, the question of persuasion doesn't seem to be one that we can entirely put aside.

A couple of questions (among others) are relevant to how far we should merely offer and not try to persuade: how many people we think will be initially (genuinely) interested in EA, and how many would be potentially (genuinely) interested were it suitably presented.

A very pessimistic view across these questions is that very few people are inclined to be interested in EA initially and very few would be interested after persuasion (e.g. because EA is a weird idea, compelling only to a minority who are weird on a number of dimensions, and most people are highly averse to its core demands). On this view, offering and not trying to persuade seems appealing, because few will be interested, persuasion won't help, and all you can do is hope some of the well-inclined minority will hear your message.

If you think very few will be initially inclined but (relatively) many more would be inclined with suitable persuasion (e.g. because EA ideas are very counter-intuitive and inclined to sound very off-putting, but can be appealing if framed adroitly), then the opposite conclusion follows: persuasion looks high value (indeed, a necessity).

Conversely, if you are more optimistic (many people intuitively like EA: it's just "doing the most good you can do, plus good evidence!"), then persuasion looks less important (unless you also think persuasion can bring many additional gains even above the already high baseline of EA-acceptance).

-

Another big distinction, which I assume is perhaps motivating the "offer, don't persuade" prescription, is whether people think persuasion tends to influence the quality of counterfactual recruits negatively, neutrally, or positively. The negative view might be motivated by thinking that persuading people who wouldn't otherwise have liked EA's offer (especially via dubious representations of EA) will disproportionately bring in people who don't really accept EA. The neutral view might be motivated by positing that many people are turned off by (or attracted to) EA for reasons orthogonal to actual EA content (e.g. nuances of framing, or whether they instinctively, non-rationally like or dislike things EA happens to be associated with, such as sci-fi). The positive view might be motivated by thinking that certain groups are disproportionately turned off by unpersuasive messages (e.g. women and minorities do not find EA attractive, but would with more carefully crafted, symbolically less off-putting outreach), and that getting more of these groups would be epistemically salutary for some reason.

-

Another major consideration is simply how many EAs we presently have relative to desired numbers. If we think we have plenty (or even more than we can train/onboard), then working to persuade/attract more people seems unappealing; if we highly value having more people, the converse holds. I think it's very reasonable for us to switch priorities between trying to attract more people and not, depending on present needs. I'm somewhat concerned, though, that perceived present needs get reflected in a kind of 'folk EA wisdom': when we lack(ed) people, the general idea that movement building is many times more effective than most direct work was popularised, whereas now that we have more people (for certain needs), the general idea that 'quality trumps quantity' gets popularised. I worry that these very general memes aren't especially sensitive to actual supply/demand/needs and would be hard or slow to update if needs were different. This also becomes very tricky when different groups have different needs/shortages.

Comment author: Evan_Gaensbauer 24 April 2018 06:57:06PM 1 point

What's your impression of how positively correlated close social ties are with staying altruistic among those individuals you surveyed?

Comment author: DavidMoss 03 May 2018 08:12:59PM 8 points

My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.

Cutting the other way, however, people who are brought into EA by such social effects (e.g. they were hanging around friends who were EA and so became involved too, rather than having always had intrinsic EA belief and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of the cases of value drift I've observed.

When I was systematically interviewing EAs for a research project, this distinction between social-network EAs and always-intrinsic EAs was one of the clearest and most important distinctions that arose. One might imagine that the social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but the tendency was actually roughly the reverse. The social-network EAs were often very centrally positioned in higher staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.

Comment author: Yannick_Muehlhaeuser 02 May 2018 12:33:27PM 6 points

If I could only recommend one book to someone, should I recommend this or Doing Good Better? I'm not really sure. What do you think?

Comment author: DavidMoss 03 May 2018 01:53:50AM 0 points

Or Singer's The Most Good You Can Do.

Comment author: DavidMoss 23 April 2018 03:06:39AM 13 points

This also fits my experience.

A few other implications if value drift is more of a concern:

  • Movement growth looks a lot less impactful in absolute terms
  • It's an open question whether this means we should focus our movement-building efforts on the minority of people particularly likely to be engaged, or expand our numbers further to offset attrition (depending on the details of your model)
  • Selecting primarily for high skill/potential-impact may be a mistake, as a person who is among the very highest in skill level, but who decides to do something else, will likely produce zero impact

There are, no doubt, more implications.

Comment author: Denkenberger 03 April 2018 01:38:13AM 0 points

Maybe "full time EAs?"

Comment author: DavidMoss 03 April 2018 03:17:20AM 0 points

I think someone suggested this in previous discussions about what euphemism we could use for extreme/hardcore EAs. The problem is that one can be a full-time EA without being an insider, and an insider without being full-time.

Comment author: cassidynelson 16 March 2018 01:05:55AM 1 point

I agree – I found it surprising as well that he has taken this view. It seems like he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than about other existential threats, but I wonder if this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.

Great suggestion about Sam Harris – I think he and Steven Pinker had a live chat just the other day (March 14), so we may have missed this opportunity. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions, I wonder whether he questions Pinker on this as well.

Comment author: DavidMoss 16 March 2018 01:57:08AM 3 points

I think part of the problem is that he publicly expressed a very dismissive stance towards AI/x-risk positions, seemingly before he'd read anything about them. Now that people have pushed back and pointed out his obvious errors, he's had to at least somewhat read up on what the positions actually are, but he doesn't want to backtrack at all from his previous extreme dismissiveness.
