Comment author: Ben_Todd 22 May 2018 03:28:59AM 5 points

Yes, each cause has different relative needs.

It's also more precise and often clearer to talk about particular types of talent rather than "talent" as a whole, e.g. the AI safety space is highly constrained by the shortage of people with deep expertise in machine learning, while global poverty isn't.

However, when we say "the landscape seems more talent constrained than funding constrained", what we typically mean is that, given our view of cause priorities, EA-aligned people can generally have a greater impact through direct work than through earning to give, and I still think that's the case.

Comment author: Ben_Todd 08 May 2018 06:42:34AM 4 points

More reasons why sharing the mission of EA (which includes dedication as a component) is important in most roles at EA non-profits:

https://80000hours.org/articles/operations-management/#why-is-it-important-for-operations-staff-to-share-the-mission-of-effective-altruism

Comment author: MichaelPlant 13 April 2018 11:18:07PM 0 points

I agree it's really complicated, but it merits some thought. The one practical implication I take away is: "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it".

Comment author: Ben_Todd 20 April 2018 06:24:03AM 4 points

That seems very strong - you're saying all our recommendations are wrong, even though we're already trying to take account of this effect.

Comment author: MichaelPlant 12 April 2018 10:16:26AM 16 points

"However, we can also err by thinking about a too narrow reference class"

Just to pick up on this, a worry I've had for a while - which I don't think I'm going to do a very good job of explaining here - is that the reference class people use is "current EAs", not "current and future EAs". To explain: when I started to get involved in EA back in 2015, 80k's advice, in caricature, was that EAs should become software developers or management consultants and earn to give, whereas research roles, such as becoming a philosopher or historian, were low priority. Now the advice has, again in caricature, swung the other way: management consultancy looks very unpromising, and people are being recommended to do research. There's even occasional discussion (see MacAskill's 80k podcast) that, on the margin, philosophers might be useful. If you'd taken 80k's advice seriously and gone into consultancy, it seems you would have done the wrong thing. (Objection, imagining Wiblin's voice: but what about personal fit? We talked about that. Reply: if personal fit does all the work - i.e. "just do the thing that has the greatest personal fit" - then there's no point making more substantive recommendations.)

I'm concerned that people will funnel themselves into jobs that are high-priority now, in which they have only a small comparative advantage over other EAs, rather than jobs in which they will later have a much bigger comparative advantage over other EAs. At the present time, the conversation is about EA needing more operations roles. Suppose two EAs, C and D, are thinking about what to do. C realises he's 50% better than D at ops and 75% better at research, so C goes into ops because that's higher priority, and D goes into research. Time passes and the movement grows. E now joins, and E is better than C at ops. The problem is that C has taken an ops role and it's much harder for C to transition to research. C only has a comparative advantage at ops in the first time period; thereafter he doesn't. Overall, it looks like C should just have gone into research, not ops.

In short, our comparative advantage is not fixed, but will change over time simply based on who else shows up. Hence we should think about comparative advantage over our lifetimes rather than the shorter term. This likely changes things.

Comment author: Ben_Todd 20 April 2018 06:22:00AM 2 points

I agree with the "in short" section. I'm less sure about exactly how it changes things. It seems reasonable to think more about your comparative advantage relative to the world as a whole (taking that as a proxy for the future composition of the community), or maybe just to try to think more about which types of talent will be hardest to attract in the long term. I don't think much of the change in advice about earning to give and consulting was due to this exact mistake.

One small thing we'll do to help with this is ask people to project the biggest talent shortages at longer time horizons in our next talent survey.

Comment author: Alex_Barry 13 April 2018 11:09:17AM 15 points

Thanks for writing this up! This does seem to be an important argument not made often enough.

To my knowledge this has been covered a couple of times before, although not as thoroughly.

Once by the Oxford Prioritization Project, though they approached it from the other end, instead asking "what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost-effective as AMF?" and finding an answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion that it is close to, but not quite as good as, global poverty.

Note that they use 5% before 2100 as their risk, and do not consider QALYs, instead only looking at "lives saved", which likely biases the comparison against AMF, since it mostly saves children.
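
As a rough sanity check, their number can be approximately reconstructed. This is my own back-of-envelope sketch, not their calculation: the AMF cost-per-life (~£2,500) and the population figure are assumptions I'm supplying, which happen to land near their 4 x 10^-8% answer:

```python
# Hedged reconstruction of the Oxford Prioritization Project's break-even
# figure. The AMF cost-per-life and the population at stake are my own
# assumed ballparks, not numbers taken from their model.

donation = 10_000                # GBP
amf_cost_per_life = 2_500        # GBP per life saved (assumption)
lives_matched = donation / amf_cost_per_life      # 4.0 lives

population = 9.8e9               # lives lost in an extinction (assumption)
break_even = lives_matched / population           # absolute probability

print(f"{break_even:.1e}")           # 4.1e-10 as a probability
print(f"{break_even * 100:.1e}%")    # ~4e-08%, close to their quoted figure
```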

We also calculated this as part of the Causal Networks Model I worked on with Denise Melchin at CEA over the summer. The conclusion is mentioned briefly here under 'existential effectiveness'.

I think our model was basically the same as yours, although we were explicitly interested in the chance of existential risk before 2050 and did not include probabilistic elements. We also tried to work in QALYs, although most of our figures were more bullish than yours. By default we used:

  • 7% chance of existential risk by 2050, which in retrospect seems extremely high, but I think was based on a survey from a conference.
  • The world population in 2050 will be 9.8 billion, and each death will be worth -25 QALYs (so 245 billion QALYs at stake, very similar to yours)
  • For the effectiveness of research, we assumed that 10,000 researchers working for 10 years would reduce x-risk by 1 percentage point (i.e. from 7% to 6%). We also (unreasonably) assumed each researcher-year cost £50,000 (where I think the true number should be at least double that, if not much more).
  • Our model then had various other complicated effects, modelling both 'theoretical' and 'practical' x-risk based on government/industry willingness to use the advances, but these were second order and can mostly be ignored.

Ignoring these second-order effects, then, our model suggested it would cost £5 billion to reduce x-risk by 1 percentage point, which corresponds to a cost of about £2 per QALY. In retrospect this should be at least 1 or 2 orders of magnitude higher (increasing researcher cost and decreasing the achievable x-risk reduction by an order of magnitude each).
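
For concreteness, here is the arithmetic behind that £2-per-QALY figure as a minimal sketch, using only the default values listed above:

```python
# Back-of-envelope check of the Causal Networks Model defaults above;
# every number here appears in the comment, nothing new is introduced.

population_2050 = 9.8e9                # projected world population
qalys_per_death = 25                   # QALYs lost per death
qalys_at_stake = population_2050 * qalys_per_death    # 245 billion QALYs

researchers = 10_000
years = 10
cost_per_researcher_year = 50_000      # GBP (likely an underestimate)
total_cost = researchers * years * cost_per_researcher_year   # £5 billion

risk_reduction = 0.01                  # 1 percentage point (7% -> 6%)
expected_qalys = risk_reduction * qalys_at_stake      # 2.45 billion QALYs

print(f"Cost per QALY: £{total_cost / expected_qalys:.2f}")  # ~£2.04
```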

I find your x-risk chance somewhat low; I think 5% before 2100 seems more likely. Your cost per percentage point of x-risk reduction also works out much higher than the one we used, but seems better justified (ours was just pulled from the air as "reasonable-sounding").

Comment author: Ben_Todd 20 April 2018 06:07:35AM 5 points

I also made a very rough estimate in this article: https://80000hours.org/articles/extinction-risk/#in-total-how-effective-is-it-to-reduce-these-risks Though this estimate is much better and I've added a link to it.

I also think x-risk over the century is over 1%, and we can reduce it much more cheaply than your guess, though it's nice to show it's plausible even with conservative figures.

Comment author: Nick_Beckstead 26 March 2018 06:33:12PM 2 points

I am a Program Officer at Open Philanthropy who joined as a Research Analyst about 3 years ago.

The prior two places I lived were New Brunswick, NJ and Oxford, UK. I live in a house with a few friends. It is a 25-30 minute commute door-to-door via BART. My rent and monthly expenses are comparable to what I had in Oxford but noticeably larger than what I had in New Brunswick. I got pay increases when I moved to Open Phil, and additional raises over time. I'm comfortable on my current salary and could afford to get a single-bedroom apartment if I wanted, but I'm happy where I am.

Overall, I would say that it was an easy adjustment.

Comment author: Ben_Todd 27 March 2018 04:35:31AM 2 points

Surely rent is much higher than in Oxford on average? It's possible to get a great place in Oxford for under £700 per month, while a comparable place in SF would be $1,300+. Food also seems about 30% more expensive, and in Oxford you don't have to pay for a commute. My overall guess is that $80k p.a. in SF is equivalent to about £40k p.a. in Oxford.

Comment author: Jan_Kulveit 16 March 2018 12:39:43AM 2 points

Hi Ben,

1) I understand your concerns.

On the other hand, I'm not sure if you take into account the difficulties:

  • e.g. going to EAG could require something like "heroic effort". If my academic job were my only source of income, going to EAGxOxford would have meant spending more than my monthly disposable income on it (even though the EAGxOxford organizers were great in offering me a big reduction in fees).

  • I'm not sure if you are correctly modelling the barriers to building connections to the core of the community (*). Here is my guess at some problems people from different countries or cultures may run into when trying to make connections, e.g. by applying for internships, help, conferences, etc.

1) People unconsciously take language sophistication as a proxy for intelligence. By not being proficient in English, you lose points.

2) People are evaluated on proxies like "attended a prestigious university". Universities outside the US or UK are generally discounted as regional.

3) People are often in fact evaluated based on social network distance; this leads to "rich get richer" dynamics.

4) People are evaluated based on previous "EA achievements", which are easier to earn in places where EA is already present.

(*) You may object that e.g. Ales Flidr is a good counterexample, but Harvard alumni are typically relatively "delocalized": in demand everywhere, and may perceive greater value in working in the core than in spreading ideas from the core to emerging local groups. (Prestige and other incentives also point that way.)

One risk I see in your article is that it may influence the people who would be best at mitigating the risk of "wrong local EA movements" being founded not to work on this at all.

I don't think "the barriers" should be zero, as such barriers in a way help select for motivated people. It's just that, in my impression, they may be higher than they appear from the inside. Asking people to first build connections, while the building of such connections is not systematically supported, may make the barriers higher than optimal.

Btw, yes, the core of the group can read English materials. It could also do research in machine learning, found a startup, work in quantitative finance, in part get elected to parliament, move out of the country, and more. What I want to point out is this: if you imagine the members of the kind of group you would like, people working on the "translation" of EA into a new culture, they are likely paying huge opportunity costs in doing so. It may be hard to keep them in some state of waiting and building connections.

In our case, it seems plausible that counterfactually "waiting even more" could also have led to the group not working, or to a worse organization being created, as the core people would lose motivation or pursue other opportunities.

2) In counting long-term impact and the lock-in effect, you should consider the chance that movements in new languages and cultures develop in some respects better versions of effective altruism, and that beneficial mutations can then spread, even back to the core. More countries may mean more experiments running and faster evolution of better versions of EA. It's unclear to me how these terms (lock-in, more optimization power) add up, but both should be counted. One possible resolution may be to prioritize founding movements in smaller countries, where you gain experience but the potential damage is limited.

To your questions:

i] With one exception, reasonably well, from the viewpoint of strategic communication (communicating easily communicable concepts, e.g. impact and effectiveness). I don't think the damage is great, and some misconceptions are unavoidable given the name "effective altruism".

ii] It has a distribution... IMO understanding at the moment is highly correlated with engagement in the group, which may be more important than criteria like "average understanding" or "public understanding".

iii] Yes, it's complicated, depends on some misalignments, and I don't want to discuss it publicly.

Comment author: Ben_Todd 17 March 2018 11:11:54PM 0 points

Quick addition: I realise the lack of support for local groups is not ideal, but this capacity constraint is another reason to go slow. I'd favour a more "all or nothing" approach, where we select a small number of countries / languages / locations and then make a major attempt to get them going (e.g. ideally supplying enough money so that 1-2 people can go full-time, paying for trips to visit other groups etc., plus providing in-depth mentoring from CEA), while in other locations we minimise outward-facing activities. The middle ground of lots of small groups with few resources doesn't seem ideal. I'm optimistic we're moving in this direction with things like the EA Grants and http://effective-altruism.com/ea/1l3/announcing_effective_altruism_community_building/

Comment author: Jan_Kulveit 13 March 2018 08:15:29AM 7 points

I just published a short history of creating the effective altruism movement in the Czech Republic (http://effective-altruism.com/ea/1ls/introducing_czech_association_for_effective/) and I think it is highly relevant to this discussion.

Compared to Ben's conclusions, I would use it as a data point showing:

  • it can be done

  • it may not be worth delaying

  • there are intermediate forms of communication in between "mass outreach" and "person-to-person outreach"

  • you should consider a more complex model of communication than just "personal vs. mass media": specifically, a viable model in a new country could be something like a very short message in mass media, a few articles translated into the national language to lower the barrier and point in the right direction, and a much larger amount transmitted via conferences and similar channels

Putting too much weight on "person-to-person" interaction runs into the problem that you are less likely to find the right people (consider how such connections may be created).

Btw, it seems to me the ways e.g. 80,000 Hours and CEA work are inadequate for creating the required personal connections in new countries, so it's questionable whether it makes sense to focus on that.

(I completely agree China is extremely difficult, but I don't think China should be considered a typical example - in terms of mentality it's possibly one of the most distant countries from a European POV.)

Comment author: Ben_Todd 14 March 2018 09:07:09PM 2 points

Hi Jan,

It's a useful case study; however, two quick responses:

1) To some extent you were following the suggested approach, because you only pushed ahead having already built a core of native speakers who had been involved in the past with English-language materials (e.g. Ales Flidr was head of EA Harvard; a core of LW people helped to start it).

You also mention how doing things like meeting CFAR and attending EAGxOxford were very useful in building the group. This suggests to me that doing even more to build expertise and connections with the core English-speaking EA community before pushing ahead with Czech outreach might have led to even better results.

I also guess that most of the group can read English language materials? If so, that makes the situation much easier. As I say, the less the distance, the weaker the arguments for waiting.

2) You don't directly address my main concern. I'm suggesting that trying to spread EA in new languages and cultures without laying the groundwork could lead to a suboptimal version of EA being locked in with the new audience, and your report doesn't directly respond to this.

You do give some evidence of short-term impact, which is evidence that the benefits outweighed the opportunity costs. But I'd also want to look at questions like: (i) how accurately was EA portrayed in your press coverage? (ii) how well do the people getting involved in the group understand EA? (iii) might you have put off important groups in ways that could have been avoided?

Comment author: Ben_Todd 07 March 2018 10:40:45PM 0 points

Another consideration I'm not sure of: a mainly English-speaking community will be easier to coordinate than one of the same size split across many languages and cultures, so this might be a reason to focus initially on one language (to the extent that efforts across different languages funge with each other).

Comment author: Ben_Todd 07 March 2018 10:39:30PM 3 points

Some concerning data in this recent post about local groups: http://effective-altruism.com/ea/1l7/2017_lean_impact_assessment_evaluation_strategic/

"One other striking feature of this category is that all of the top groups [in terms of new event attendees] were from non-Anglo-American countries. While this is purely speculative, an explanation for this pattern might be that these groups are aggressively reaching out to people unfamiliar with EA in their areas, getting them to attend events, but largely not seeing success in transferring this into increased group membership."
