Comment author: Robert_Wiblin 20 April 2018 06:39:13PM 2 points [-]

I made a similar observation about AI risk reduction work last year:

"Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk * reduced by 1% * 8 billion lives) = $125 per life saved. Compares with $3,000-7,000+ for AMF.

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime)."

http://effective-altruism.com/ea/18u/intuition_jousting_what_it_is_and_why_it_should/amj
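The quoted back-of-the-envelope figure can be reproduced in a few lines. All inputs are the comment's own stated assumptions, not independent estimates:

```python
risk = 0.10                # assumed chance AI kills everyone within 50 years
relative_reduction = 0.01  # $1bn of safety research shrinks that risk by 1%
lives = 8e9                # people alive now
cost = 1e9                 # dollars of extra safety spending

# Expected lives saved: 0.10 * 0.01 * 8 billion = 8 million.
expected_lives_saved = risk * relative_reduction * lives

cost_per_life = cost / expected_lives_saved
print(cost_per_life)  # 125.0 dollars per life saved
```

The conclusion is linear in each assumption, so, for example, a 1-in-100 risk instead of 1-in-10 multiplies the cost per life by ten.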

Comment author: Robert_Wiblin 08 April 2018 01:12:17AM *  5 points [-]

Now that I've had a chance to read this properly I have one key follow-up question.

People often talk about the possibility of solving societal coordination problems with cryptocurrency, but I have yet to see a concrete example of this.

Is it possible to walk through a coordination failure that today could be best tackled using blockchain technology, and explain step by step how that would work?

This would be most persuasive if the coordination failure was in one of the priority problems mentioned above, but I'd find any specific illustration very helpful.

Comment author: rhys_lindmark 04 April 2018 05:10:59PM *  3 points [-]
Comment author: Robert_Wiblin 04 April 2018 08:39:54PM *  0 points [-]
Comment author: Robert_Wiblin 04 April 2018 12:16:28AM *  7 points [-]

Thanks for writing this up, you cover a lot of ground. I don't have time to respond to it now, but I wanted to link to a popular counterpoint to the practical value of blockchain technology: Ten years in, nobody has come up with a use for blockchain.

Comment author: Robert_Wiblin 04 December 2017 12:44:10AM *  8 points [-]

"Within countries, per capita GDP growth does not appear to lead to corresponding increases in well-being."

I spent a bunch of time looking into this 'Easterlin paradox' and concluded it's more likely than not that it doesn't exist. If you look across all the countries we have data on up to the present day, increased income is indeed correlated with increased subjective well-being (SWB). Well-being isn't entirely positional or entirely absolute; it's a mix.

My impression is that people who study this topic are divided on the correct interpretation of the data, so you should take everyone's views (including mine) with a pinch of salt.

Comment author: Halffull 18 November 2017 12:11:17AM -1 points [-]

"Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They both see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person's estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, taking into account that each of them has independent information pulling them away from the prior."

This is what I'm talking about when I say "just so stories" about the data from the GJP. One explanation is that superforecasters are going through this thought process; another is that they discard non-superforecasters' knowledge, and therefore end up more extreme without explicitly running the extremizing algorithm on their own forecasts.

Similarly, the existence of superforecasters itself argues for a non-modest epistemology, while the fact that the extremized aggregation beats the superforecasters may argue for a somewhat more modest epistemology. To my mind, saying that the data here points one way or the other is cherry-picking.

Comment author: Robert_Wiblin 18 November 2017 01:00:17AM *  1 point [-]

"...the existence of super-forecasters themselves argues for a non-modest epistemology..."

I don't see how. No theory on offer argues that everyone is an epistemic peer. All theories predict some people have better judgement and will be reliably able to produce better guesses.

As a result I think superforecasters should usually pay little attention to the predictions of non-superforecasters (unless it's a question on which expertise pays few dividends).

Comment author: vaniver 17 November 2017 01:24:51AM *  3 points [-]

I think with Eliezer's approach, superforecasters should exist, and it should be possible to be aware that you are a superforecaster. Those both seem like they would be lower probability under the modest view. Whether Eliezer personally is a superforecaster seems about as relevant as whether Tetlock is one; you don't need to be a superforecaster to study them.

I expect Eliezer to agree that a careful aggregation of superforecasters will outperform any individual superforecaster; similarly, I expect Eliezer to think that a careful aggregation of anti-modest reasoners will outperform any individual anti-modest reasoner.

It's worth considering what careful aggregations look like when not dealing with binary predictions. The function of a careful aggregation is to disproportionately silence error while maintaining signal. With many short-term binary predictions, we can use methods that focus on the outcomes, without any reference to how those predictors are estimating those outcomes. With more complicated questions, we can't compare outcomes directly, and so need to use the reasoning processes themselves as data.

That suggests a potential disagreement to focus on: the anti-modest view suspects that one can do a careful aggregation based on reasoner methodology (say, weighing more highly forecasters who adjust their estimates more frequently, or who report using Bayes, or so on), whereas I think the modest view suspects that this won't outperform uniform aggregation.

(The modest view has two components: approving of weighting past performance, and disapproving of other weightings. Since other approaches can agree on the importance of past performance, and the typical issues where the two viewpoints differ are those where we have little data on past performance, it seems more relevant to focus on whether the disapproval is correct than whether the approval is correct.)
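The disagreement described here can be made concrete with a toy aggregation. All the numbers below are invented for illustration: four hypothetical forecasts of the same event, and made-up weights standing in for a methodology-based weighting (e.g. upweighting forecasters who update frequently).

```python
# Four hypothetical probability forecasts of the same event.
forecasts = [0.60, 0.70, 0.80, 0.55]

# Anti-modest view: weight forecasters by features of their methodology
# (weights here are made up purely for illustration).
method_weights = [0.1, 0.4, 0.4, 0.1]

# Modest view: uniform aggregation, every forecaster counts equally.
uniform_aggregate = sum(forecasts) / len(forecasts)

# Anti-modest view: methodology-weighted aggregation.
weighted_aggregate = sum(w * f for w, f in zip(method_weights, forecasts))

print(uniform_aggregate)   # ≈ 0.6625
print(weighted_aggregate)  # ≈ 0.715
```

The empirical question dividing the two views is whether any such methodology-based weighting reliably beats the uniform average on questions with little track-record data.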

Comment author: Robert_Wiblin 17 November 2017 11:10:49AM *  1 point [-]

OK so it seems like the potential areas of disagreement are:

  • How much external confirmation do you need to know that you're a superforecaster (or have good judgement in general), or even the best forecaster?
  • How narrowly should you define the 'expert' group?
  • How often should you define who is a relevant expert based on whether you agree with them in that specific case?
  • How much should you value 'wisdom of the crowd (of experts)' against the views of the one best person?
  • How much to follow a preregistered process to whatever conclusion it leads to, versus change the algorithm as you go to get an answer that seems right?

We'll probably have to go through a lot of specific cases to see how much disagreement there actually is. It's possible to talk in generalities and feel you disagree, but actually be pretty close on concrete cases.

Note that it's entirely possible for non-modest contributors to do more to enhance the accuracy of a forecasting tournament, because they try harder to find errors, while still being less accurate than others' all-things-considered views, owing to insufficient deference to the answer the tournament as a whole spits out. Active traders enhance market efficiency, but still lose money as a group.

As for Eliezer knowing how to make good predictions, but not being able to do it himself, that's possible (though it would raise the question of how he has gotten strong evidence that these methods work). But as I understand it, Eliezer regards himself as being able to do unusually well using the techniques he has described, and so would predict his own success in forecasting tournaments.

Comment author: Halffull 17 November 2017 01:20:31AM 0 points [-]

"How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts."

Doesn't this clearly demonstrate that the superforecasters are not using modest epistemology? At best, this shows that you can improve upon a "non-modest" epistemology by aggregating them together, but does not argue against the original post.

Comment author: Robert_Wiblin 17 November 2017 09:54:43AM 1 point [-]

Hi Halffull - now I see what you're saying, but actually the reverse is true. That superforecasters have already extremised shows their higher levels of modesty. Extremising is about updating based on other people's views: because they have independent information to add, after hearing their estimate you can be more confident about how far to shift from your prior.

Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They both see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person's estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, taking into account that each of them has independent information pulling them away from the prior.

Once they've done that, there won't be gains from further extremising. But a non-humble participant would fail to properly extremise based on the information in the other person's view, leaving accuracy to be gained if this is done at a later stage by someone running the forecasting tournament.
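A minimal Beta-Binomial sketch of this coin example, with the prior strength and flip counts invented so that each peer's individual posterior mean comes out at exactly 0.7:

```python
# Shared prior bunched around fair: Beta(50, 50).
PRIOR_HEADS, PRIOR_TAILS = 50, 50

# Each peer independently sees 90 heads in 100 flips (hypothetical numbers).
heads, flips = 90, 100

# Each peer's own posterior mean: (50 + 90) / (50 + 50 + 100) = 0.7.
individual = (PRIOR_HEADS + heads) / (PRIOR_HEADS + PRIOR_TAILS + flips)

# A modest peer pools both independent samples against the single shared
# prior, landing noticeably above 0.7: the 'extremised' answer.
pooled = (PRIOR_HEADS + 2 * heads) / (PRIOR_HEADS + PRIOR_TAILS + 2 * flips)

print(individual)  # 0.7
print(pooled)      # ≈ 0.767
```

The gain comes entirely from counting the other peer's independent evidence once against the shared prior, rather than letting the prior discount it twice.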

Comment author: Robert_Wiblin 17 November 2017 12:16:33AM *  8 points [-]

It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.

But no matter - they probably won't suffer much, because the meek do not inherit the Earth, at least not in this life.

People follow confidence in leaders, generating the pathological start-up founder who is sure they're 100x more likely to succeed than the base rate; someone who portrays themselves as especially competent in a job interview is more likely to be hired than someone who accurately appraises their merits; and I don't imagine deferring to a boring consensus brings more romantic success than elaborating on one's exciting contrarian opinions.

Given all this, it's unsurprising evolution has programmed us to place an astonishingly high weight on our own judgement.

While there are some social downsides to seeming arrogant, people who preach modesty here advocate going well beyond what's required to avoid triggering an anti-dominance reaction in others.

Indeed, even though I think strong modesty is epistemically the correct approach on the basis of reasoned argument, I do not and cannot consistently live and speak that way, because all my personal incentives are lined up in favour of portraying myself as very confident in my inside view.

In my experience it requires a monastic discipline to do otherwise, a discipline almost none possess.

Comment author: Halffull 16 November 2017 10:26:19PM -3 points [-]

It's an interesting just so story about what IARPA has to say about epistemology, but the actual story is much more complicated. For instance, "extremizing" works to better calibrate general forecasts, but extremizing superforecasters' predictions makes them worse.

Furthermore, contrary to what you seem to be claiming about people not being able to outperform others, there are in fact "superforecasters" who outperform the average participant year after year, even if they can't outperform the aggregate once their forecasts are factored in.
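For reference, the extremizing step discussed throughout this thread is typically a transform of this shape, which pushes an aggregate probability away from 0.5 by exponentiating the odds (the exponent is a tunable parameter; a = 2.5 below is just an illustrative value, not the one GJP used):

```python
def extremize(p: float, a: float = 2.5) -> float:
    """Push a probability away from 0.5 by raising the odds to the power a."""
    return p**a / (p**a + (1 - p) ** a)

print(extremize(0.7))  # ≈ 0.89: a 70% aggregate becomes a much bolder call
print(extremize(0.5))  # 0.5 is a fixed point: nothing to extremize
```

The intuition matches the coin example above: if forecasters' evidence is partly independent, their shared prior has damped each individual report, and the transform undoes some of that damping.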

Comment author: Robert_Wiblin 16 November 2017 10:36:49PM *  3 points [-]

Not sure how this is a 'just so story' in the sense that I understand the term.

""Extremizing" works to better calibrate general forecasts, but extremizing superforecasters' predictions makes them worse."

How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts.

I know very well about superforecasters (I've read all of Tetlock's books and interviewed him last week), but I am pretty sure an aggregation of superforecasters beats almost all of them individually, which speaks to the benefits of averaging a range of people's views in most cases. Though in many cases you should not give much weight to those who are clearly in a worse epistemic position (non-superforecasters, whose predictions Tetlock told me were about 10-30x less useful).
