Comment author: RomeoStevens 29 January 2018 08:55:44PM *  5 points [-]

This is a big part of why I find the 'EA is talent constrained not funding constrained' meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

There is a lot of outside view research on this that could be collected and analyzed.

Comment author: Ben_Todd 10 February 2018 08:29:16AM 1 point [-]

The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

This is what many of the core organisations are focused on :) You could see it as 80k's whole purpose. It's also why CEA is doing things like EA Grants, and Open Phil is doing the AI Fellowship.

It's also a central internal challenge for any org that has funding and is trying to scale. But it's not easy to solve: https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/

Comment author: Ben_Todd 10 February 2018 03:48:09AM 1 point [-]

Nice! Just note that I don't think you mention medium-term indirect effects. Arguably these should cause a convergence between US-directed resources and resources directed at the global poor, e.g. because making the US richer will spill over (a little bit) to other countries.

Read more: http://reflectivedisequilibrium.blogspot.ae/2014/01/what-portion-of-boost-to-global-gdp.html

There may also be other general reasons for convergence between interventions (e.g. regression to the mean): http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/

For these reasons, I think it's better to deflate a "direct only" effects estimate by 3-10x, so arguably your dollar only goes 3-50x further overseas.
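To make the arithmetic behind that range explicit, here is a rough sketch of my own (the symbols m and d, and the illustrative ~100x figure, are not from the post above):

```latex
% Rough sketch of the deflation arithmetic (illustrative only).
% Let m be the "direct effects only" multiplier and d the convergence deflator.
\[
  \text{adjusted multiplier} = \frac{m}{d}, \qquad d \in [3, 10].
\]
% For example, a direct-only estimate of m = 100x deflated by 3-10x gives roughly
% 10-33x; the quoted 3-50x range corresponds to direct-only estimates of roughly
% 30-150x, i.e. it also folds in uncertainty about m itself.
```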

If you also try to factor in the long-term effects on existential risk etc. then the comparison becomes even less clear. It's plausible that many US-directed interventions do more to reduce existential risk than global poverty focused ones. See more here: https://80000hours.org/articles/extinction-risk/#2-broad-efforts-to-reduce-risks

Comment author: MichaelPlant 30 January 2018 07:31:49PM 1 point [-]

You've scooped me! I've got a post on the SNT framework in the works. On the scale bit:

The relevant consideration here seems to be systemic vs. atomic changes. The former affects all of the cause, or has a chance of doing so; the latter affects just a small part with no further impacts, hence 'atomic'. An example of the former would be curing cancer; an example of the latter would be treating one case of it.

Assessing the total scale of a cause is only relevant if you're calculating the expected value of systemic interventions. I generally agree it's a mistake to force people to size up the entire cause - as 80k do, for instance - because it's not necessary if you're just looking at atomic interventions.

Comment author: Ben_Todd 10 February 2018 03:37:47AM 0 points [-]

I generally agree it's a mistake to force people to size up the entire cause - as 80k do

We don't - see my comment above.

Comment author: Peter_Hurford  (EA Profile) 31 January 2018 01:14:07AM 0 points [-]

Do people really think of scale as a bottleneck? I take this article to mean "maybe scale isn't really important to think about if you're unlikely to ever reach that scale".

Perhaps scale could be thought of as the inverse of the diminishing returns rate (e.g., more scale = less diminishing returns = more ability to take funding). This seems like a useful way to think about it.

Maybe the argument should be that when thinking about scale, neglectedness, and tractability, we should put more emphasis on tractability and also think about the tractability of attracting funding / resources needed to meet the scale?

Comment author: Ben_Todd 10 February 2018 03:37:19AM 0 points [-]

Perhaps scale could be thought of as the inverse of the diminishing returns rate (e.g., more scale = less diminishing returns = more ability to take funding). This seems like a useful way to think about it.

Yes, this is why you need to consider the ratio of scale and neglectedness (for a fixed definition of the problem).
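As a sketch of why that ratio is the relevant quantity, here is one simple model, assuming logarithmic returns (an illustrative modelling assumption, not the only possible one):

```latex
% Sketch under an assumed logarithmic-returns model.
% S = scale of the problem, R = resources already invested (low R = neglected).
\[
  U(R) = S \ln R
  \quad\Longrightarrow\quad
  \frac{dU}{dR} = \frac{S}{R}.
\]
% Marginal returns are proportional to scale divided by the resources already
% invested, so the scale-to-crowdedness ratio is what matters at the margin, and
% scale acts like the inverse of the diminishing returns rate, as suggested above.
```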

Comment author: Ben_Todd 10 February 2018 03:36:15AM 0 points [-]

Quick comment: note that you can apply INT to any fraction of the problem (1% / 10% / 100%). The key is just that you use the same fraction for N and T as well. That's why we define the framework using "% of problem solved" rather than "solve the whole problem". https://80000hours.org/articles/problem-framework/
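To make the units explicit, the factors in the linked framework article multiply out as a product of ratios, roughly like this:

```latex
% The INT factors written as ratios (following the linked framework article), so the
% "% of problem solved" and "% increase in resources" terms cancel whatever fraction
% of the problem you choose:
\[
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra person or dollar}}}_{\text{Neglectedness}}
  =
  \frac{\text{good done}}{\text{extra person or dollar}}
\]
% The cancellation only goes through if the same fraction of the problem is used in
% every factor, which is the point made above.
```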

If you run into heavily diminishing returns at the 10% mark, then applying INT to 10% of the problem should yield better results.

This can mean that very narrowly defined problems will often be more effective than broad ones, so it's important to compare problems of roughly the same scale. Also note that narrowly defined problem areas are less useful - the whole point of having relatively broad areas is to build career capital that's relevant to more than just one project.

Finally, our overall process is (i) problems, (ii) methods, (iii) personal fit. Within methods you should think about the key bottlenecks within the problem area, so it partly gets captured there. Expected impact is roughly the product of the three. So, I agree people shouldn't use problem selection as an absolute filter, since it could be better to work on a medium-ranked problem with a great method and personal fit.

Comment author: Ben_Todd 24 January 2018 04:04:05PM 9 points [-]

Quick comment - I broadly agree. I think if you want to maximise impact within global poverty, then you should first look for potential large-scale solutions, such as policy change, even if they have weak evidence behind them. We might not find any, but we should try hard first. It's basically hits-based giving. https://www.openphilanthropy.org/blog/hits-based-giving

In practice, however, the community members who agree with this reasoning have moved on to other problem areas. This leaves an odd gap for "high-risk global poverty" interventions, though GiveWell has looked into some options here, and I hope they'll do more.

Comment author: Jan_Kulveit 01 January 2018 10:43:37PM *  2 points [-]

After thinking about it for a while, I'm still a bit puzzled by the rated-100 or rated-1000 plan changes and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, it seems to be based just on comparing against "the person not changing their career path". However, with some of the examples of the most valued changes, where people land in EA organizations, it seems the counterfactual state "of the world" would be "someone else doing similar work in a central EA organization". Since, as far as I know, the recruitment process for positions at places like central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80,000 Hours-influenced candidate over the next best candidate?

Another question: how do you estimate your uncertainty when valuing something as rated-n?

Comment author: Ben_Todd 07 January 2018 12:47:37PM 2 points [-]

Hi Jan,

We basically just do our best to think about what the counterfactual would have been without 80k, and then subtract that from our impact. We tend to break this into two components: (i) the value of the new option compared to what they would have done otherwise, and (ii) the influence of others in the community, who might have brought about similar changes soon afterwards.

The value of their next best alternative matters a little less than it might first seem, because we think the impact of different options is fat-tailed, i.e. someone switching to a higher-impact option might well 2x or even 10x their impact. That means you only need to reduce the estimate by 10-50%, which is a comparatively small adjustment given the other huge uncertainties.
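To spell out that adjustment:

```latex
% The implied arithmetic: if the new option is k times as impactful as the person's
% next best alternative, then the counterfactually adjusted value is
\[
  V_{\text{adjusted}} = V_{\text{new}} - V_{\text{old}}
                      = V_{\text{new}}\left(1 - \tfrac{1}{k}\right),
\]
% so k = 2 implies a 50% reduction and k = 10 implies a 10% reduction, the 10-50%
% range mentioned above.
```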

With the value of working at EA organisations: because they're talent-constrained, additional staff can have a big impact, even taking account of the fact that someone else could have been hired anyway. For more on this, see our recent talent survey, which showed that EA orgs highly value marginal staff even after taking account of replaceability: https://80000hours.org/2017/11/talent-gaps-survey-2017/

Comment author: MichaelPlant 29 December 2017 12:16:17PM 1 point [-]

Thanks for this Ben. Two comments.

  1. Could you explain your Impact-Adjusted Significant Plan Changes to those of us who don't understand the system? E.g. what does a "rated-1000" plan change look like, and how does that compare to a "rated-1"? I imagine the former is something like a top maths bod going from working on nothing to working on AI safety, but that's just my assumption. I really don't know what these mean in practice, so some illustrative examples would be nice.

  2. Following comments made by others about CEA's somewhat self-flagellatory review, it seems a bit odd and unnecessarily self-critical to describe something as a challenge if you've consciously chosen to de-prioritise it. In this case:

(iii) we had to abandon our target to triple IASPC (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.

By analogy, it's curious if I tell you 1) a challenge for me this year was that I didn't run a marathon and 2) I decided running marathons wasn't that important to me (full disclosure humblebrag: I did run(/walk) a marathon this year).

Comment author: Ben_Todd 30 December 2017 12:00:19PM 5 points [-]

Hey Michael,

A typical rated-1 is someone who says they took the GWWC pledge due to us and is at the median in terms of how much we expect them to donate.

Rated-10 means we'd trade that plan change for 10 rated-1s.

You can see more explanation of typical rated 10 and higher plan changes from 2017 here: https://80000hours.org/2017/12/annual-review/#what-did-the-plan-changes-consist-of

Some case studies of top plan changes here: https://80000hours.org/2017/12/annual-review/#value-of-top-plan-changes

Unfortunately, many of the details are sensitive, so we don't publicly release most of our case studies.

We also intend for our ratings to roughly line up with how many "donor dollars" each plan change is worth. Our latest estimates were that a rated-1 plan change is worth $7,000 donor dollars on average, whereas a rated-100 is worth over $1m, i.e. it's equal in value to an additional $1m donated to where our donors would have given otherwise.

With the IASPC target, I listed it as a mistake rather than merely a reprioritisation because:

We could have anticipated some of these problems earlier if we had spent more time thinking about our plans and metrics, which would have made us more effective for several months.

Comment author: Ben_Todd 21 December 2017 10:49:01AM 1 point [-]

Thanks!

Nvidia (who make GPUs used for ML) saw their share price approximately double, after quadrupling last year.

Do you have an impression of whether this is due to crypto mining or ML progress?

Comment author: Henry_Stanley 14 December 2017 11:50:05PM *  1 point [-]

genuflects

Yes, that's the idea. Have chatted to Richard at 80K; we'll see what happens in terms of "official" adoption. But with a bit of automated scraping (and manual work) I don't see why this shouldn't end up being a superset of the 80K jobs board listings and those in the Facebook group.

Comment author: Ben_Todd 15 December 2017 02:55:17PM 1 point [-]

Note as well that there's this much wider list, which automatically scrapes from the organisations' job boards and has filters, though it needs to be neatened up:

https://www.joinmonday.com/collections/organisations-80000-hours-sometimes-recommends-people-apply

We know the founders and they're willing to add more features on request (e.g. different cause filters). But they would eventually want some payment for providing it.
