
Peter_Hurford comments on Expected value estimates we (cautiously) took literally - Oxford Prioritisation Project - Effective Altruism Forum


Comment author: MichaelDickens, 29 May 2017 11:34:34PM, 1 point

I'm still undecided on the question of whether quantitative models can actually work better than qualitative analysis. (Indeed, how can you ever know which works better?) But very few people actually use serious quantitative models to make decisions. Even if quantitative models ultimately don't work as well as well-organized qualitative analysis, they're still underrepresented, so I'm happy to see more work in this area.

Some suggestions on ways to improve the model:

Account for missing components

Quantitative models are hard, and it's impossible to construct a model that accounts for everything you care about. Consider which parts of reality you expect to matter most for the impact of a particular thing, and try to model those. For whatever your model leaves out, figure out which omissions matter most. Some things may be too hard to model; for those, consider how they would likely affect the outcome and adjust your decision accordingly.

Examples of major things left out:

  • The 80K model only considers impact in terms of new donations to GWWC, based on 80K's own numbers. It would be better to use your own estimates of the effectiveness of different cause areas and account for how many people 80K moves into or away from those areas.
  • The ACE model only looks at the value of moving money among top charities. My own model includes money moved among top charities, plus new money moved to top charities, plus the value of new research that ACE funds.
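To make the "missing components" point concrete, here is a minimal sketch of how a model might combine the three ACE components mentioned above. Every number and parameter name here is a made-up placeholder for illustration, not an actual estimate from either model.

```python
def ace_expected_value(
    money_shifted_among_top,            # $ moved between top charities
    new_money_to_top,                   # $ newly directed to top charities
    research_value,                     # $-equivalent value of ACE-funded research
    value_gain_per_dollar_shifted=0.3,  # assumed marginal gain per shifted dollar
    value_per_new_dollar=1.0,           # assumed value per new top-charity dollar
):
    """Toy expected value = shifting gains + new money + research value.

    All parameter values are hypothetical placeholders.
    """
    return (money_shifted_among_top * value_gain_per_dollar_shifted
            + new_money_to_top * value_per_new_dollar
            + research_value)

# Placeholder inputs, purely for illustration:
print(ace_expected_value(1_000_000, 500_000, 200_000))
```

The point of writing the model this way is that each omitted component (here, research value) becomes an explicit term you can estimate or flag as too hard to model, rather than something silently dropped.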

Sensitivity analysis

The particular ordering you found (80K > MIRI > ACE > StrongMinds) depends heavily on certain input parameters. For example, for your MIRI model, "expected value of the far future" is doing tons of work. It assumes that the far future contains about 10^17 person-years; I don't see any justification given. What if it's actually 10^11? Or 10^50? This hugely changes the outcome. You should do some sensitivity analysis to see which inputs matter the most. If any one input matters too much, break it down into less sensitive inputs.
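A one-at-a-time sensitivity sweep of the kind suggested above can be sketched in a few lines. This is a hypothetical toy model, not the actual MIRI model: the parameter names and probabilities are invented placeholders, and the only point is to show how much the output swings as "expected person-years in the far future" ranges over the orders of magnitude mentioned.

```python
def toy_miri_value(far_future_person_years,
                   p_ai_risk=0.1,            # assumed P(AI risk materializes)
                   p_miri_averts=1e-6,       # assumed P(MIRI's work averts it)
                   value_per_person_year=1.0):
    """Toy model: value scales linearly with far-future person-years."""
    return (far_future_person_years * p_ai_risk
            * p_miri_averts * value_per_person_year)

# Sweep the most sensitive input across the orders of magnitude above:
for person_years in (1e11, 1e17, 1e50):
    print(f"{person_years:.0e} person-years -> value {toy_miri_value(person_years):.2e}")
```

Because the model is linear in this input, the output spans 39 orders of magnitude across the sweep, which is exactly the kind of dominance that sensitivity analysis is meant to surface and that motivates breaking the input into less sensitive pieces.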

Comment author: Peter_Hurford, 30 May 2017 12:12:53AM, 0 points

"Indeed, how can you ever know which works better?"

Retrospective analysis of track record? Looking into Tetlock-style research?

Comment author: MichaelDickens, 30 May 2017 02:42:42PM, 0 points

Suppose it's 10 years in the future, and we can look back at what ACE and MIRI have been doing for the past 10 years. We now know some new useful information, such as:

  • Has ACE produced research that influenced our understanding of effective charities?
  • Has MIRI published new research that moved us closer to making AI safe?
  • Has ACE moved more money to top animal charities?

But even then, we still don't know nearly as much as we'd like. We don't know if ACE really moved money, or if that money would have been donated to animal charities anyway. Maybe MIRI took funding away from other research avenues that would have been more fruitful. We still have no idea how (dis)valuable the far future will be.