The last Open Thread was in October 2017, so I thought we were overdue for a new one.
Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.
Consider giving your post a brief title to improve readability.
I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?
We do have a fair amount of documentation regarding successful forecasters; see, e.g., the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (foxes rather than hedgehogs, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
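To make the ensemble point concrete, here's a toy sketch (not from Tetlock's data; all numbers are made up for illustration). It averages the probability forecasts of three hypothetical "models" and scores everything with the Brier score (mean squared error between probability forecasts and 0/1 outcomes); with these particular numbers, the unweighted average beats every individual model:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    Lower is better; a perfect forecaster scores 0.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probability forecasts from three "models" on five
# yes/no questions, plus the actual outcomes (1 = happened).
outcomes = [1, 0, 1, 1, 0]
model_a = [0.9, 0.4, 0.6, 0.7, 0.3]
model_b = [0.6, 0.1, 0.9, 0.5, 0.2]
model_c = [0.8, 0.3, 0.5, 0.9, 0.5]

# Simplest possible ensemble: the unweighted average of the three models.
ensemble = [sum(ps) / 3 for ps in zip(model_a, model_b, model_c)]

scores = {name: brier_score(f, outcomes)
          for name, f in [("a", model_a), ("b", model_b),
                          ("c", model_c), ("ensemble", ensemble)]}
```

The intuition is that the models' errors partly cancel when averaged, so the ensemble's Brier score here (about 0.088) comes out below the best individual model's (about 0.094). That cancellation isn't guaranteed on any particular set of questions, but it's why ensembling is a standard move in both forecasting and ML competitions.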
I'm trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome) and theoretical reasoning (often qualitative; it doesn't arrive at a numerical cost-per-outcome, but instead at something like "...and so this thing is probably best").
Perhaps all theoretical reasoning is just a kind of imprecise cost-effectiveness analysis, but I think they're actually using pretty different...