RobinHanson comments on Against prediction markets - Effective Altruism Forum

Comment author: RobinHanson 12 May 2018 05:15:13PM 6 points

Without some concrete estimate of how highly prediction markets are currently rated, it's hard to say whether they are over- or underrated. They are almost never used, however, so it is hard to believe they are overused.

The office prediction markets you outline might well be useful. They aren't obviously bad.

I see huge potential for creating larger markets to estimate altruism effectiveness. We don't have any such markets at the moment, or even much effort to create them, so I find it hard to see that there's too much effort there.

For example, it would be great to create markets estimating advertised outcomes from proposed World Bank projects. That might well pressure the Bank into adopting projects more likely to achieve those outcomes.

Comment author: Denise_Melchin 13 May 2018 08:33:06AM 4 points

I don't think prediction markets are overused by EAs; I think they are advocated for too much (both for internal, lower-stakes situations and for solving problems in the world) when they are not the best alternative for a given problem.

One problem with prediction markets is that they are a hassle to implement, which is why people don't actually want to implement them. But since they are often the first alternative to the status quo suggested within EA, better solutions for lower-stakes situations, like office forecasts, which might actually have a chance of getting implemented, don't even get discussed.

I don't think an office prediction market would be bad or useless once you ignore opportunity costs, just worse than the alternatives. To be fair, I'm somewhat more optimistic about implementing office prediction markets in large workplaces like Google, but not in the small EA orgs we have. In those, they would more likely take up a bunch of work without actually improving the situation much.

How large do you think a market needs to be to be efficient enough to beat, say, asking Tetlock for the names of the top 30 superforecasters and hiring them to assess the problem? Given that political betting, despite being pretty large, had such big trouble as described in the post, I'm afraid an efficient-enough prediction market would take a lot of work to implement. I agree with you that the added incentive structure would be nice, which might well make up for a lack of efficiency.

But again, I'm still optimistic about sufficiently large, stock-market-like prediction markets.

Comment author: RobinHanson 13 May 2018 12:09:32PM 2 points

Political betting had a problem relative to perfection, not relative to the other alternatives actually used; according to accuracy studies, it did better than those alternatives.

Yes, there are overheads to using prediction markets, but those are mainly the cost of having any system at all. Once you have a system, the marginal overhead of adding a new question is much lower. Since you don't have EA prediction markets now, you face those initial costs.
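To make the system-versus-question overhead concrete for readers: one standard automated market maker for such systems is Hanson's logarithmic market scoring rule (LMSR). Below is a minimal sketch of an LMSR binary market; the class name, liquidity parameter b, and trade sizes are illustrative choices of mine, not from any deployed system. The point is that once this plumbing exists, listing another question is just creating another small object.

```python
import math

class LMSRMarket:
    """Toy LMSR market maker for one binary question (illustrative only)."""

    def __init__(self, b=100.0):
        self.b = b           # liquidity parameter: larger b = prices move more slowly
        self.q = [0.0, 0.0]  # outstanding shares for [NO, YES]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current price of one share of `outcome`; also the implied probability."""
        denom = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Charge a trader the cost difference of buying `shares` of `outcome`."""
        old = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old

market = LMSRMarket(b=100.0)
print(market.price(1))            # 0.5 before any trading
cost = market.buy(1, 50.0)        # a trader buys 50 YES shares
print(round(market.price(1), 3))  # 0.622: price (implied probability) rises
```

The market maker's subsidy is bounded by b * log(2) for a binary question, which is the fixed cost per question once the system itself exists.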

For forecasting in most organizations, hiring the top 30 superforecasters would go badly, as they don't know enough about the organization to be useful. Far better to have just a handful of participants from within the organization.

Comment author: Denise_Melchin 13 May 2018 03:37:09PM 2 points

I assumed you didn't mean an internal World Bank prediction market; sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets, though I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size at which a workplace prediction market becomes efficient enough to beat, e.g., extremized team forecasting?
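For readers unfamiliar with the term: "extremized team forecasting" (the GJP-style aggregation) pools individual probabilities, typically by averaging in log-odds space, and then pushes the pooled forecast away from 0.5. A minimal sketch, with made-up team forecasts and an illustrative extremizing exponent a = 2.5:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def extremized_forecast(probs, a=2.5):
    """Mean of log-odds, scaled by extremizing factor a, mapped back to a probability."""
    mean_logodds = sum(logit(p) for p in probs) / len(probs)
    return sigmoid(a * mean_logodds)

team = [0.70, 0.65, 0.80, 0.60]  # hypothetical individual forecasts
print(round(extremized_forecast(team), 3))  # pooled forecast, pushed above the raw mean
```

With a = 1 this reduces to a plain log-odds average; a > 1 compensates for the fact that averaging independent, partially-informed forecasts pulls the pool too close to 0.5.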

Could you link to the accuracy studies you cite showing that prediction markets do better than polling at predicting election results? I don't see any obvious big differences from a quick Google search. The next obvious alternative is to ask whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but team superforecasters did consistently better. Putting Nate Silver and his kin in a room therefore seems to have a good chance of outperforming prediction markets.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously a lot better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Comment author: Pablo_Stafforini 13 May 2018 09:02:20PM 2 points

I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Robin's position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.
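The intuition can be shown with a toy simulation (my own construction for illustration, not Robin's model; every parameter here is made up): a manipulator shoves the price away from the true probability, informed traders with noisy signals trade it back, and their profits, the reward for correcting the price, are larger than they would have been without the manipulation.

```python
import random

random.seed(0)
TRUTH = 0.8  # true probability, known only to the simulator

def run_market(manipulate, n_traders=200, step=0.1):
    """Sequential toy market: each informed trader nudges the price toward a noisy signal."""
    price = 0.2 if manipulate else 0.5  # manipulator shoves the price far from the truth
    profit = 0.0
    for _ in range(n_traders):
        # each trader observes truth plus noise, clamped to a valid probability
        signal = min(max(TRUTH + random.gauss(0, 0.1), 0.01), 0.99)
        trade = step * (signal - price)   # move price a fraction toward the estimate
        profit += trade * (TRUTH - price) # expected profit of this trade
        price += trade
    return price, profit

p_plain, prof_plain = run_market(False)
p_manip, prof_manip = run_market(True)
print(round(p_plain, 2), round(p_manip, 2))  # both prices end near the truth, 0.8
print(prof_manip > prof_plain)               # manipulation raised informed traders' profits
```

The manipulated price still converges, and the manipulator's losses are the informed traders' extra gains, which is the subsidy-to-informed-trading effect Robin describes.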

Comment author: Denise_Melchin 14 May 2018 08:32:20PM 0 points

Interesting! In that argument I am trading off accuracy against outside-world manipulation, since accuracy isn't actually the main end goal I care about; that goal is 'good done in the world', for which better forecasts of the future would be pretty useful.

Comment author: Pablo_Stafforini 15 May 2018 05:10:08PM 2 points

Feel free to ignore if you don't think this is sufficiently important, but I don't understand the contrast you draw between accuracy and outside world manipulation. I thought manipulation of prediction markets was concerning precisely because it reduces their accuracy. Assuming you accept Robin's point that manipulation increases accuracy on balance, what's your residual concern?

Comment author: PeterMcCluskey 13 May 2018 04:36:47PM 1 point

I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.

Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives those people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets come from replacing bad incentives with incentives that are closely tied to accurate predictions.

There are multiple ways to produce good incentives, and for internal office predictions, there's usually something simpler than prediction markets that works well enough.