Hi everyone,
The full review is here.
Below is the summary:
----
This year, we focused on “upgrading” – getting engaged readers into our top priority career paths.
We do this by writing articles on why and how to enter these paths, providing one-on-one advice to help the most engaged readers narrow down their options, and making introductions to help them enter.
Some of our main successes this year include:
- We developed and refined this upgrading process, having been focused on introductory content last year. We made lots of improvements to coaching, and released 48 pieces of content.
- We used the process to grow the number of rated-10 plan changes 2.6-fold compared to 2016, from 19 to 50. We primarily placed people in AI technical safety, other AI roles, effective altruism non-profits, earning to give and biorisk.
- We started tracking rated-100 and rated-1000 plan changes. We recorded 10 rated-100 and one rated-1000 plan change, meaning that with the new metric, total new impact-adjusted significant plan changes (IASPC v2) doubled compared to 2016, from roughly 1200 to 2400. That means we’ve grown the annual rate of plan changes 23-fold since 2013. (If we ignore the rated-100+ category, then IASPCv1 grew 31% from 2016 to 2017, and 12-fold since 2013.)
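To make the metric above concrete, here is a minimal sketch of how a rating-weighted total like IASPC could be computed, assuming each plan change simply counts at face value of its rating. This is an illustration of the arithmetic, not 80,000 Hours' actual methodology: the published totals are rounded, include rated-1 changes, and may apply further adjustments, so this will not exactly reproduce them.

```python
def iaspc_total(counts_by_rating):
    """Rating-weighted sum of plan changes.

    counts_by_rating maps a rating (e.g. 1, 10, 100, 1000) to the
    number of plan changes recorded at that rating.
    """
    return sum(rating * count for rating, count in counts_by_rating.items())

# Counts for the rated-10+ categories mentioned in the summary
# (rated-1 changes are omitted, so this is only a partial total).
counts_2017 = {10: 50, 100: 10, 1000: 1}
print(iaspc_total(counts_2017))  # → 2500
```

The point of the weighting is that a single rated-1000 change contributes as much to the total as a hundred rated-10 changes, which is why tracking the top categories can double the headline metric even when the raw number of changes barely moves.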
- This meant that despite rising costs, cost per IASPC was flat. We updated our historical and marginal cost-effectiveness estimates, and think we’ve likely been highly cost-effective, though we have a lot of uncertainty.
- We maintained a good financial position, hired three great full-time core staff (Brenton Mayer as co-head of coaching; Peter Hartree came back as technical lead; and Niel Bowerman started on AI policy), and started training several managers.
Some challenges include: (i) people misunderstand our views on career capital, so they are picking options we don’t always agree with; (ii) we haven’t made progress on team diversity since 2014; (iii) we had to abandon our target to triple IASPC; (iv) rated-1 plan changes from introductory content didn’t grow once we stopped focusing on them.
Over the next year, we intend to keep improving this upgrading process, with the aim of recording at least another 2200 IASPC. We think we can continue to grow our audience by releasing more content (it has grown 80% p.a. over the last two years), getting better at spotting who from our audience to coach, and offering more value to each person we coach (e.g. doing more headhunting, adding a fellowship). By doing all of this, we can likely grow the impact of our upgrading process at least several-fold, and then we could scale it further by hiring more coaches.
We’ll continue to make AI technical safety and EA non-profits a key focus, but we also want to expand more into other AI roles, other policy roles relevant to extinction risk, and biorisk.
Looking forward, we think 80,000 Hours can become at least another 10-times bigger, and make a major contribution to getting more great people working on the world’s most pressing problems.
We’d like to raise $1.02m this year. We expect 33-50% to be covered by the Open Philanthropy Project, and are looking for others to match the remainder. If you’re interested in donating, the easiest way is through the EA Funds.
If you’re interested in making a large donation and have questions, please contact ben@80000hours.org.
If you’d like to follow our progress during the year, subscribe to 80,000 Hours updates.
After thinking about it for a while, I'm still a bit puzzled by the rated-100 and rated-1000 plan changes, and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, it seems to be based just on comparing against "the person not changing their career path". However, for some of the most highly valued changes, which involve people landing in EA organizations, it seems the counterfactual state of the world would be "someone else doing similar work at a central EA organization". Since, AFAIK, the recruitment process for positions at places like central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80,000 Hours-influenced candidate over the next best candidate?
Another question: how do you estimate your uncertainty when rating something rated-n?
Hi Jan,
We basically just do our best to think about what the counterfactual would have been without 80k, and then subtract that from our impact. We tend to break this into two components: (i) the value of the new option compared to what they would have done otherwise (ii) the influence of others in the community, who might have brought about similar changes soon afterwards.
The value of their next best alternative matters a little less than it might first seem, because we think the impact of different options is fat-tailed, i.e. someone switching to a higher-...
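The two-component adjustment described above can be sketched as follows. The numbers and function are hypothetical, purely to show the structure of the reasoning, not 80,000 Hours' actual model: the gain over the next-best alternative is discounted by the chance the community would have brought about a similar change anyway.

```python
def counterfactual_impact(new_option_value, alternative_value,
                          p_change_would_happen_anyway):
    """Illustrative counterfactual adjustment (hypothetical model).

    (i) value of the new option over what the person would have done
        otherwise, then
    (ii) a discount for the chance others in the community would have
        brought about a similar change soon afterwards.
    """
    gain = new_option_value - alternative_value
    return gain * (1 - p_change_would_happen_anyway)

# Hypothetical figures: a switch worth 100 units vs an alternative
# worth 10, with a 50% chance the change would have happened anyway.
print(counterfactual_impact(100, 10, 0.5))  # → 45.0
```

If impact is fat-tailed, `alternative_value` is often small relative to `new_option_value`, which is why the next-best alternative matters less than it might first seem.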