Comment author: Peter_Hurford  (EA Profile) 31 January 2018 01:14:07AM 1 point [-]

Do people really think of scale as a bottleneck? I take this article to mean "maybe scale isn't really important to think about if you're unlikely to ever reach that scale".

Perhaps scale could be thought of as the inverse of the diminishing returns rate (e.g., more scale = less diminishing returns = more ability to absorb funding). This seems like a useful way to think about it.

Maybe the argument should be that, when thinking about scale, neglectedness, and tractability, we should put more emphasis on tractability - and also think about the tractability of attracting the funding and resources needed to meet the scale?

Comment author: Ben_Todd 10 February 2018 03:37:19AM 1 point [-]

Perhaps scale could be thought of as the inverse of the diminishing returns rate (e.g., more scale = less diminishing returns = more ability to absorb funding). This seems like a useful way to think about it.

Yes, this is why you need to consider the ratio of scale and neglectedness (for a fixed definition of the problem).
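To make that concrete, here's a minimal sketch assuming value grows logarithmically with resources - a common modelling choice, not something stated in the comments, and all numbers are invented:

```python
# Under value(r) = scale * log(r), the marginal value of an extra unit
# of funding is scale / r - i.e. the ratio of scale to resources already
# invested (the inverse of neglectedness).

def marginal_value(scale, resources_invested):
    return scale / resources_invested

# A huge but crowded problem vs. a smaller but neglected one:
print(marginal_value(scale=1000, resources_invested=100_000))  # 0.01
print(marginal_value(scale=10, resources_invested=10))         # 1.0
```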

Comment author: Ben_Todd 10 February 2018 03:36:15AM 1 point [-]

Quick comment: note that you can apply INT to any fraction of the problem (1% / 10% / 100%). The key is just that you use the same fraction for N and T as well. That's why we define the framework using "% of problem solved" rather than "solve the whole problem". https://80000hours.org/articles/problem-framework/

If you run into heavily diminishing returns at the 10% mark, then applying INT to 10% of the problem should yield better results.
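As a toy illustration of that point (this is not 80,000 Hours' actual scoring; all numbers are invented):

```python
def cost_effectiveness(scale, tractability, resources):
    # scale:        value of solving the chosen slice of the problem
    # tractability: fraction of that slice solved per extra unit of work
    # resources:    effort already devoted to that slice
    #               (neglectedness is its inverse)
    return scale * tractability / resources

# The whole problem, with heavily diminishing returns past the 10% mark:
whole = cost_effectiveness(scale=100, tractability=0.001, resources=50)

# Just the first 10%: a tenth of the scale, but far more tractable:
first_tenth = cost_effectiveness(scale=10, tractability=0.05, resources=50)

print(whole, first_tenth)  # 0.002 vs 0.01 - the narrow slice scores higher
```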

This can mean that very narrowly defined problems will often look more effective than broad ones, so it's important to compare problems of roughly the same scale. Also note that narrowly defined problem areas are less useful for career planning - the whole point of having relatively broad areas is to build career capital that's relevant to more than just one project.

Finally, our overall process is (i) problems, (ii) methods, (iii) personal fit. Within methods you should think about the key bottlenecks within the problem area, so that consideration partly gets captured there. Expected impact is roughly the product of the three. So I agree people shouldn't use problem selection as an absolute filter, since it could be better to work on a medium-ranked problem with a great method and great personal fit.
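A minimal sketch of that multiplication (the scores are made up, on a rough 1-10 scale):

```python
def expected_impact(problem, method, personal_fit):
    # Each factor is a relative multiplier, not an absolute measure.
    return problem * method * personal_fit

# A medium-ranked problem with a great method and great personal fit...
medium = expected_impact(problem=4, method=9, personal_fit=9)  # 324

# ...can beat a top-ranked problem tackled with a poor method and fit.
top = expected_impact(problem=10, method=3, personal_fit=3)    # 90

print(medium > top)  # True
```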

Comment author: Ben_Todd 24 January 2018 04:04:05PM 11 points [-]

Quick comment - I broadly agree. I think if you want to maximise impact within global poverty, then you should first look for potential large-scale solutions, such as policy change, even if they have weak evidence behind them. We might not find any, but we should try hard first. It's basically hits-based giving. https://www.openphilanthropy.org/blog/hits-based-giving

In practice, however, the community members who agree with this reasoning have moved on to other problem areas, which leaves an odd gap for "high-risk global poverty" interventions. GiveWell has looked into some options here, though, and I hope they'll do more.

Comment author: Jan_Kulveit 01 January 2018 10:43:37PM *  2 points [-]

After thinking about it for a while, I'm still a bit puzzled by the rated-100 or rated-1000 plan changes and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, the comparison seems to be just with "the person not changing their career path". However, for some of the most valued changes - those that led to people landing in EA organizations - the counterfactual state of the world would be "someone else doing similar work in a central EA organization". Since, AFAIK, the recruitment process for positions at central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80,000 Hours-influenced candidate over the next best candidate?

Another question: how do you estimate your uncertainty when valuing something rated-n?

Comment author: Ben_Todd 07 January 2018 12:47:37PM 2 points [-]

Hi Jan,

We basically just do our best to think about what the counterfactual would have been without 80k, and then subtract that from our impact. We tend to break this into two components: (i) the value of the new option compared to what they would have done otherwise, and (ii) the influence of others in the community, who might have brought about similar changes soon afterwards.

The value of their next best alternative matters less than it might first seem, because we think the impact of different options is fat-tailed: someone switching to a higher-impact option might well 2x or even 10x their impact, which means you only need to reduce the estimate by 10-50% - a comparatively small adjustment given the other huge uncertainties.
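To spell out that arithmetic (a simplified sketch of the adjustment, not our full model):

```python
# If the new path is k times as impactful as the next-best alternative,
# counterfactual impact = new - alternative = new * (1 - 1/k),
# so the estimate only shrinks by the fraction 1/k.

for k in (2, 10):
    new_path = 1.0                      # normalise the new path's impact
    alternative = new_path / k          # next-best option is k-fold smaller
    reduction = alternative / new_path  # fraction subtracted from estimate
    print(f"{k}x uplift -> reduce estimate by {reduction:.0%}")

# 2x uplift -> reduce estimate by 50%
# 10x uplift -> reduce estimate by 10%
```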

With the value of working at EA organisations: because they're talent-constrained, additional staff can have a big impact, even taking account of the fact that someone else could have been hired anyway. For more on this, see our recent talent survey, which showed that EA orgs highly value marginal staff even after accounting for replaceability: https://80000hours.org/2017/11/talent-gaps-survey-2017/

Comment author: MichaelPlant 29 December 2017 12:16:17PM 2 points [-]

Thanks for this, Ben. Two comments.

  1. Could you explain your Impact-Adjusted Significant Plan Changes to those of us who don't understand the system? E.g. what does a "rated-1000" plan change look like, and how does that compare to a "rated-1"? I imagine the former is something like a top maths bod going from working on nothing to working on AI safety, but that's just my assumption. I really don't know what these mean in practice, so some illustrative examples would be nice.

  2. Following comments made by others about CEA's somewhat self-flagellatory review, it seems a bit odd and unnecessarily self-critical to describe something as a challenge if you've consciously chosen to de-prioritise it. In this case:

(iii) we had to abandon our target to triple IASPC (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.

By analogy, it would be curious if I told you 1) a challenge for me this year was that I didn't run a marathon, and 2) I decided running marathons wasn't that important to me (full disclosure humblebrag: I did run(/walk) a marathon this year).

Comment author: Ben_Todd 30 December 2017 12:00:19PM 5 points [-]

Hey Michael,

A typical rated-1 is someone saying they took the GWWC pledge due to us, and is at the median in terms of how much we expect them to donate.

Rated-10 means we'd trade that plan change for 10 rated-1s.

You can see more explanation of typical rated-10 and higher plan changes from 2017 here: https://80000hours.org/2017/12/annual-review/#what-did-the-plan-changes-consist-of

Some case studies of top plan changes here: https://80000hours.org/2017/12/annual-review/#value-of-top-plan-changes

Unfortunately, many of the details are sensitive, so we don't publicly release most of our case studies.

We also intend for our ratings to line up roughly with how many "donor dollars" each plan change is worth. Our latest estimates were that a rated-1 plan change is worth $7,000 donor dollars on average, whereas a rated-100 is worth over $1m - i.e. it's equal in value to an additional $1m donated to wherever our donors would have given otherwise.
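(A quick arithmetic note on those figures: only the $7,000 and $1m+ values are stated above; the linear comparison is purely illustrative.)

```python
RATED_1 = 7_000        # stated average donor-dollar value of a rated-1
RATED_100 = 1_000_000  # stated lower bound for a rated-100

# Ratings are defined by trade-offs (a rated-100 trades for 100 rated-1s),
# so a naive linear extrapolation from the rated-1 average...
linear_guess = 100 * RATED_1  # 700,000

# ...sits noticeably below the stated rated-100 value:
print(RATED_100 / linear_guess)  # ~1.43, i.e. over 40% above linear
```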

With the IASPC target, I listed it as a mistake rather than merely a reprioritisation because:

We could have anticipated some of these problems earlier if we had spent more time thinking about our plans and metrics, which would have made us more effective for several months.

80,000 Hours annual review released

Hi everyone, the full review is here: https://80000hours.org/2017/12/annual-review/ Below is the summary: This year, we focused on "upgrading" – getting engaged readers into our top priority career paths. We do this by writing articles on why and how to enter the priority paths, providing one-on-one advice to help...
Comment author: Ben_Todd 21 December 2017 10:49:01AM 1 point [-]

Thanks!

Nvidia (who make GPUs used for ML) saw their share price approximately double, after quadrupling last year.

Do you have an impression of whether this is due to crypto mining or ML progress?

Comment author: Henry_Stanley 14 December 2017 11:50:05PM *  1 point [-]

genuflects

Yes, that's the idea. Have chatted to Richard at 80K; we'll see what happens in terms of "official" adoption. But with a bit of automated scraping (and manual work) I don't see why this shouldn't end up being a superset of the 80K jobs board listings and those in the Facebook group.

Comment author: Ben_Todd 15 December 2017 02:55:17PM 1 point [-]

Note as well that there's this much wider list, which automatically scrapes the organisations' job boards and has filters, though it needs to be neatened up:

https://www.joinmonday.com/collections/organisations-80000-hours-sometimes-recommends-people-apply

We know the founders, and they're willing to add more features on request (e.g. different cause filters). But they would eventually want some payment for providing it.

Comment author: Michael_PJ 27 November 2017 08:05:17PM 1 point [-]

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it's been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to take, we would have expected someone to have taken them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to make them soon, and they are not so big that even a small speedup matters greatly, it can be better to find things that are more "persistently" neglected. (I used to call these "persistent neglectedness" and "temporary neglectedness", but I thought that was confusing.)
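The argument can be checked with a toy simulation (each opportunity yields 1 unit of value per time step once taken; the horizon and timings are invented):

```python
HORIZON = 10
NEVER = float("inf")

def total_value(taken_at, horizon=HORIZON):
    """Value an opportunity produces if taken at time taken_at."""
    return max(0, horizon - taken_at)

# Base timeline: A is never taken; someone else takes B at time 2.
base = total_value(NEVER) + total_value(2)

# You take A at time 1; B still gets taken at time 2 regardless.
take_a = total_value(1) + total_value(2)

# You take B at time 1; A still never gets taken, and you've only
# moved B forward by one step.
take_b = total_value(NEVER) + total_value(1)

print(take_a - base)  # 9: counterfactual value of taking A
print(take_b - base)  # 1: counterfactual value of taking B
```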

Comment author: Ben_Todd 28 November 2017 04:31:31AM 0 points [-]

OK, I agree that makes sense as well - it now seems unclear which way it goes.

However, if you're thinking from a career capital or more long-term future perspective (where transformative technologies are often the key lever), my guess is that EAs should still focus on learning about cutting-edge technologies.

Comment author: MichaelPlant 26 November 2017 10:53:12PM 1 point [-]

Thanks very much for this. I just want to add a twist:

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

EAs don't have to stay away from new tech. You could plan to have impact by getting rich by being the first to build cutting-edge tech and then giving your money away - basically doing a variant of 'earn to give'. In this case your company wouldn't have done much good directly - because what you call the 'time advantage' would be so tiny - and the value would come from your donations. This presumes the owners of the company you beat wouldn't have given their money away.

Comment author: Ben_Todd 27 November 2017 01:14:08AM 1 point [-]

Yes, there are other instrumental reasons to be involved in new tech. It's not only the money, but it also means you'll learn about the tech, which might help you spot new opportunities for impact, or new risks.

I also think I disagree with the reasoning. If you consider neglectedness over all time, then new tech is far more neglected, since people have only just started using it. With tech that has been around for decades, people have already had a chance to find all its best applications. For example, when we interviewed biomedical researchers, several mentioned that breakthroughs often come when people apply new tech to a research question.

My guess is that there are good reasons for EAs to aim to be on the cutting edge of technology.
