Comment author: PeterMcCluskey 22 May 2017 02:41:16PM 5 points [-]

Can you explain your expected far future population size? It looks like your upper bound is something like 10 orders of magnitude lower than Bostrom's most conservative estimates.

That disagreement makes all the other uncertainty look trivial in comparison.

Comment author: ThomasSittler 23 May 2017 10:55:02AM *  1 point [-]

Do you mean Bostrom's estimate that "the Virgo Supercluster could contain 10^23 biological humans"? This did come up in our conversations. One objection that was raised is that humanity could go extinct, or for some other reason colonisation of the Supercluster could have a very low probability. There was significant disagreement among us, and if I recall correctly we chose the median of our estimates.

Do you think Bostrom is correct here? What probability distribution would you have chosen for the expected far future population size? :)
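One way to make the disagreement concrete is to put an explicit distribution on far-future population size and see how the expectation responds to the probability of successful colonisation. A minimal Monte Carlo sketch, where every number (the colonisation probability, the Earth-bound population) is an illustrative assumption rather than a figure from this thread:

```python
import random

random.seed(0)

N = 100_000             # Monte Carlo samples
P_COLONISE = 0.01       # assumed probability humanity ever colonises the Supercluster
BOSTROM_POP = 1e23      # Bostrom's figure for biological humans in the Virgo Supercluster
EARTH_BOUND_POP = 1e10  # assumed population if humanity stays Earth-bound

samples = [
    BOSTROM_POP if random.random() < P_COLONISE else EARTH_BOUND_POP
    for _ in range(N)
]
mean_pop = sum(samples) / N
median_pop = sorted(samples)[N // 2]
print(f"mean: {mean_pop:.2e}, median: {median_pop:.2e}")
```

Even with only a 1% chance of colonisation, the mean lands around 10^21 while the median stays at 10^10. With distributions this heavy-tailed, taking the median of the group's estimates rather than the mean can change the answer by many orders of magnitude, which is exactly where the disagreement lies.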

Comment author: Peter_Hurford  (EA Profile) 21 May 2017 11:17:47PM 1 point [-]

I think you should add more uncertainty to your model around the value of an 80K career change (in both directions). While 1 impact-adjusted plan change is approximately equal in value to a GWWC pledge, that doesn't mean the two are equal in both mean and standard deviation, as your model suggests, since plan changes span a wide variety of different possibilities.

It might be good to work with 80K to get more detail about the kinds of career changes being made, and to model the different types separately. Certainly, some people do take the GWWC pledge, a change that is straightforwardly comparable with the value of the GWWC pledge (minus concerns about 80K's counterfactual share), but other people make much higher-risk, higher-reward career changes, especially in the 10x category.

Speaking only for myself, having looked at a few examples in 80K's 10x category, I've found them to be highly variable (including some changes I'd personally judge as less valuable than the GWWC pledge). While this is certainly not a systematic analysis, it suggests your model should include more uncertainty than it currently does.
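One way to capture that extra uncertainty is a mixture model: first sample the type of plan change, then draw its value from a type-specific distribution. A sketch with entirely made-up type shares and distribution parameters, not 80,000 Hours' actual data:

```python
import random

random.seed(1)

GWWC_VALUE = 1.0  # value of one GWWC pledge, in donation-equivalent units

def sample_plan_change_value():
    """Draw the value of one impact-adjusted plan change.

    The type shares and lognormal parameters below are
    illustrative guesses, not real 80K figures.
    """
    u = random.random()
    if u < 0.5:
        # Pledge-like change: tightly clustered around one pledge.
        return random.lognormvariate(0.0, 0.3) * GWWC_VALUE
    elif u < 0.9:
        # Moderate career shift: similar centre, wider spread.
        return random.lognormvariate(0.0, 1.0) * GWWC_VALUE
    else:
        # High-risk, high-reward "10x" change: heavy right tail.
        return random.lognormvariate(1.0, 2.0) * GWWC_VALUE

values = [sample_plan_change_value() for _ in range(100_000)]
mean = sum(values) / len(values)
sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
print(f"mean {mean:.2f}, sd {sd:.2f}")
```

The point of the exercise: even if the mean stays near one pledge-equivalent per typical change, the heavy tail of the 10x category makes the standard deviation far larger than a single lumped distribution would suggest.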

Lastly, I think your model right now assumes 80K has 100% responsibility for all their career changes. Maybe this is completely fine because 80K already weights their reported career change numbers for counterfactuality? Or maybe there's some other good reason to not take this into account? I admit there's a good chance I'm missing something here, but it would be nice to see it addressed more specifically.

Comment author: ThomasSittler 23 May 2017 10:41:51AM 1 point [-]

One clarification is that our current model incorporates uncertainty at the stage where GWWC-donation-equivalents are converted to HEWALYs. We do not additionally have uncertainty on the value of a plan change scored "10" in terms of GWWC-donation-equivalents. We do have uncertainty on the 0.1s and 1s.

Comment author: rohinmshah  (EA Profile) 14 May 2017 12:56:54AM 5 points [-]

"Attracting more experienced staff with higher salary and nicer office: more experienced staff are more productive which would increase the average cost-effectiveness above the current level, so the marginal must be greater than the current average."

Wait, what? The costs are also increasing, so it's definitely possible for marginal cost-effectiveness to be lower than the current average. In fact, I would strongly predict it's lower -- if there were an opportunity to get better marginal cost-effectiveness than average cost-effectiveness, that raises the question of why you don't just cut funding from some of your less effective activities and repurpose it for this opportunity.
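The argument can be made concrete with a toy example (all numbers hypothetical): if an organisation funds its opportunities in decreasing order of cost-effectiveness, the marginal opportunity is by construction the least effective one funded, so the marginal is below the average.

```python
# Hypothetical opportunities, ranked by cost-effectiveness (impact per £).
# Funding the best first means average cost-effectiveness exceeds marginal.
opportunities = [10.0, 8.0, 5.0, 3.0, 1.0]  # impact per £, made up

funded = opportunities[:4]            # budget covers the top four
average = sum(funded) / len(funded)   # average of what's funded
marginal = funded[-1]                 # the last opportunity funded
print(average, marginal)              # 6.5 3.0
```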

"Given the importance of such considerations and the difficulty of modelling them quantitatively, to holistically evaluate an organization, especially a young one, there is an argument for using a qualitative approach and “cluster thinking”, in addition to a quantitative approach and “sequential thinking.”"

Please do, I think an analysis of the potential for growth (qualitative or quantitative) would significantly improve this post, since that consideration could easily swamp all others.

Comment author: ThomasSittler 23 May 2017 10:37:49AM 1 point [-]

Hi Rohin, thanks for the comment! :) My hunch is also that 80,000 Hours and most organisations have diminishing marginal cost-effectiveness. As far as I know from our conversations, on balance this is Sindy's view too.

The problem with qualitative considerations is that while they are in some sense useful standing on their own, they are very difficult to aggregate into a final decision in a principled way.

Modelling the potential for growth quantitatively would be good. Do you have a suggestion for doing so? The counterfactuals are hard.
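One simple quantitative approach, offered as a sketch rather than a recommendation: model marginal cost-effectiveness as a declining function of budget and numerically integrate it over a projected budget range. The functional form and parameters below are illustrative assumptions:

```python
import math

def marginal_cost_effectiveness(budget, scale=1.0, decay=1e-6):
    """Hypothetical diminishing-returns curve: impact per £ falls
    exponentially as the budget grows. Parameters are illustrative."""
    return scale * math.exp(-decay * budget)

def total_impact(budget, step=10_000):
    """Numerically integrate marginal cost-effectiveness from 0 to budget."""
    return sum(
        marginal_cost_effectiveness(b) * step
        for b in range(0, int(budget), step)
    )

current = total_impact(1_000_000)
grown = total_impact(2_000_000)
print(f"impact at £1m: {current:.0f}, at £2m: {grown:.0f}")
```

Under diminishing returns, doubling the budget less than doubles total impact, so an analysis of growth has to separate "more money to the same organisation" from genuine expansion of the opportunity set. The hard counterfactual questions, of course, live in choosing the decay parameter.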

Comment author: Peter_Hurford  (EA Profile) 21 May 2017 11:19:34PM 7 points [-]

I just wanted to say, as a wannabe cause prioritizer: congratulations! I've been very impressed with your team's productivity and process. I really admire how you worked under time pressure to create actionable and practical results, even though the task is hard, highly uncertain, and the results probably wrong! :D

I've left critical comments, but I don't want that to distract from how impressive you guys have been (and I don't use that term lightly). I really hope you continue this work and improve upon your results.

Comment author: ThomasSittler 23 May 2017 10:30:59AM 2 points [-]

Thanks! And I'll just add that specific criticisms, with suggestions for improvement, are very welcome. The most useful comments for us have been those targeting a specific modelling decision or a model input, and proposing a superior alternative.

Comment author: kokotajlod 22 May 2017 04:40:54PM 5 points [-]

That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course--it's hard to give out a prize on the basis of raw hunches--but nevertheless we should work towards finding ways to avoid it.

Kudos to OPP for being so transparent in their thought process though!

Comment author: ThomasSittler 23 May 2017 10:27:31AM 1 point [-]

Daniel - You are correct that our decision not to donate to GFI was an example of measurability bias. However, I would say the problematic decision was the shortlist as a whole, rather than replacing GFI with ACE once the shortlist was chosen. When we abandoned GFI, we had already chosen the shortlist partly based on what would be interesting to model. After gaining new information, i.e. discovering that ACE could be more sensibly (and thus more interestingly) modelled, switching felt like the right call.

Our shortlisting decision was not as principled as we'd hoped, and I agree it is biased in various ways.

I think our ranking of the shortlisted organisations, while very uncertain, is probably less biased. Producing quantitative models of a biased shortlist of organisations still has some value, especially when they are in different cause areas.
