Comment author: Denise_Melchin 29 May 2018 12:27:00PM 2 points [-]

I think it would have been better for you to post this as a comment on your own or Joey’s post. Having a discussion in three different places makes the discussion hard to follow. Two are more than enough.

Comment author: Denise_Melchin 20 May 2018 11:42:00PM *  25 points [-]

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is the focus on EA orgs. Effective Altruism is or should be about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.

Whether 'doing the most good' in the world is more talent than funding constrained is much harder to prove but is the actually important question.

If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren't branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell-recommended charities won't be enough to fix it either. You're not alone in using EA-branded framing - but it can make us lose track of the bigger picture of all the problems that still need to be solved, and all the funding that is still needed for that.

If you want to focus on fixing global poverty, just because EA focuses on GW-recommended charities doesn't mean EtG is the best approach - how about training to be a development economist instead? The world could still use far more than ten additional ones of those. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained - you'd need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

Comment author: Pablo_Stafforini 13 May 2018 09:02:20PM *  2 points [-]

I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Robin's position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.

Comment author: Denise_Melchin 14 May 2018 08:32:20PM 0 points [-]

Interesting! I am trading off accuracy with outside world manipulation in that argument, since accuracy isn't actually the main end goal I care about (but 'good done in the world' for which better forecasts of the future would be pretty useful).

Comment author: RobinHanson 13 May 2018 12:09:32PM 2 points [-]

Political betting had problems relative to perfection, not relative to the actual alternatives in use; according to accuracy studies, it did better than them.

Yes there are overheads to using prediction markets, but those are mainly for having any system at all. Once you have a system, the overhead to adding a new question is much lower. Since you don't have EA prediction markets now, you face those initial costs.

For forecasting in most organizations, hiring the top 30 superforecasters would go badly, as they don't know enough about that organization to be useful. Far better to have just a handful of participants from that organization.

Comment author: Denise_Melchin 13 May 2018 03:37:09PM 2 points [-]

I assumed you didn't mean an internal World Bank prediction market, sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets. I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size of a workplace for a prediction market to be efficient enough to be better than e.g. extremized team forecasting?

Could you link to the accuracy studies you cite that show that prediction markets do better than polling at predicting election results? I don't see any obvious big differences on a quick Google search. The next obvious alternative is asking whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but team superforecasters did consistently better. Putting Nate Silver and his kin in a room then seems to have a good chance of outperforming prediction markets.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously a lot better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Comment author: Buck 12 May 2018 06:40:44PM 3 points [-]

Two points about prediction markets:

  • I think it's interesting that in the limit, prediction markets don't have prices that converge to probabilities--they converge to risk-adjusted prices.
  • I think the strongest case for prediction markets is that they're unbiased and hard to manipulate in the limit. See this cached old blog post. Your post doesn't take that into account.
Comment author: Denise_Melchin 13 May 2018 08:41:39AM 2 points [-]

I'm arguing that the limit is hard to reach and when it isn't being reached, prediction markets are usually worse than alternatives. I'd be excited about a prediction market like Scott is describing in his post, but we are quite far away from implementing anything like that.

I also find it ironic that Scott's example discusses how hard election prediction markets are to corrupt, which is precisely what happened in the Intrade example above.

Comment author: PeterMcCluskey 12 May 2018 04:59:59PM 7 points [-]

Who are you arguing against? The three links in your first paragraph go to articles that don't clearly disagree with you.

I’d also be curious about a prediction market in which only superforecasters trade.

I'd guess that there would be fewer trades than otherwise, and this would often offset any benefits that come from the high quality of the participants.

Comment author: Denise_Melchin 13 May 2018 08:36:41AM 8 points [-]

I'm arguing against prediction markets being the best alternative in many situations contemplated by EAs, which is something I have heard said or implied by a lot of EAs in conversations I've had with them. Most notably, I think a lot of EAs are unaware of the arguments I make in the post and I wanted to have them written up for future reference.

Comment author: RobinHanson 12 May 2018 05:15:13PM *  6 points [-]

Without some concrete estimate of how highly prediction markets are currently rated, it's hard to say whether they are over- or underrated. They are almost never used, however, so it is hard to believe they are overused.

The office prediction markets you outline might well be useful. They aren't obviously bad.

I see huge potential for creating larger markets to estimate altruism effectiveness. We don't have any such markets at the moment, nor even much effort to create them, so I find it hard to see that there's too much effort there.

For example, it would be great to create markets estimating advertised outcomes from proposed World Bank projects. That might well pressure the Bank into adopting projects more likely to achieve those outcomes.

Comment author: Denise_Melchin 13 May 2018 08:33:06AM *  4 points [-]

I don't think prediction markets are overused by EAs, I think they are advocated for too much (both for internal lower stakes situations as well as for solving problems in the world) when they are not the best alternative for a given problem.

One problem with prediction markets is that they are a hassle to implement, which is why people don't actually want to implement them. But since they are often the first alternative suggestion to the status quo within EA, better solutions in lower-stakes situations like office forecasts, which might have a chance of actually getting implemented, don't even get discussed.

I don't think an office prediction market would be bad or useless once you ignore opportunity costs, just worse than the alternatives. To be fair, I'm somewhat more optimistic about office prediction markets in large workplaces like Google, but not for the small EA orgs we have. In those, they would more likely take up a bunch of work without actually improving the situation much.

How large do you think a market needs to be to be efficient enough to be better than, say, asking Tetlock for the names of the top 30 superforecasters and hiring them to assess the problem? Given that political betting, despite being pretty large, had such big trouble as described in the post, I'm afraid an efficient enough prediction market would take a lot of work to implement. I agree with you the added incentive structure would be nice, which might well make up for a lack of efficiency.

But again, I'm still optimistic about sufficiently large stock market like prediction markets.

Comment author: RobinHanson 12 May 2018 01:24:19PM *  8 points [-]

You seem to be comparing prediction markets to perfection, not to the real mechanisms that we now use today instead. People proposing prediction markets are suggesting they'd work better than the status quo. They are usually not comparing them to something like GJP.

Comment author: Denise_Melchin 12 May 2018 05:02:46PM *  7 points [-]

I agree with you prediction markets are in many cases better than the status quo. I'm not comparing prediction markets to perfection but to their alternatives (like extremizing team forecasts). I'm also only arguing that prediction markets are overrated within EA, not in the wider world. I'd assume they're underrated outside of libertarian-friendly circles.
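For readers unfamiliar with the alternative mentioned above, here is a minimal sketch of extremizing a team forecast. The exponent `a` is an illustrative tuning parameter chosen for this example, not a value taken from the GJP:

```python
def extremize(p: float, a: float = 2.5) -> float:
    """Push an aggregate probability away from 0.5.

    Averaging many noisy individual forecasts tends to produce
    underconfident probabilities; extremizing corrects for this
    by sharpening the pooled estimate.
    """
    return p ** a / (p ** a + (1 - p) ** a)


def team_forecast(probs: list[float], a: float = 2.5) -> float:
    """Average the team's probabilities, then extremize the mean."""
    mean = sum(probs) / len(probs)
    return extremize(mean, a)
```

With these illustrative settings, a team whose forecasts average 0.7 would be pushed up toward roughly 0.89, while an average of exactly 0.5 is left unchanged.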

All in all, for which problems prediction markets do better than which alternatives is an empirical question, which I state in the post:

How stringently the conditions for market efficiency need to be met for a market to actually be efficient is an empirical question. How efficient a prediction market needs to be to give better forecasts than the alternatives is another one.

Do you disagree that in the specific examples I have given (an office prediction market about the timeline of a project, an election prediction market) having a prediction market is worse than the alternatives?

It would be good if you could give concrete examples where you expect prediction markets to be the best alternative.

Prediction markets are a neat concept, and are often regarded highly in the EA sphere. I think they are often not the best alternative for a given problem and are insufficiently compared to those alternatives within EA. Perhaps that is because they are such a neat concept - "let's just do a prediction market!" sounds a lot more exciting than discussing a problem in a team and extremizing the team's forecast, even though a prediction market would be a lot more work.


Against prediction markets

Within the EA sphere, prediction markets have often been championed as a good solution for forecasting the future. Improved forecasting has been discussed many times as a cause area for humanity to make better judgements and generally improve institutional decision making. In this post, I will argue that prediction...
Comment author: KarolinaSarek 06 May 2018 07:03:47PM 4 points [-]

Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with the suggestions for reducing this risk. I agree that further research on causes of value drift and how to avoid it is needed. If the phenomenon is explained correctly, that could be a great asset to the EA community building. But regardless of this explanation, your suggestions are valuable.

It seems to be a generally complex problem because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA in their recent model of community building framed it as resources, dedication, and realization.

Decreasing retention is also observed in many social movements. Some insights about how it happens can be gleaned from the sociological literature. It is still underexplored, and the sociological analysis might be of mediocre quality, but it might still be useful to have a look at it. For example, this analysis implies that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested."

Additional aspects of value drift to consider on an individual level that might not be relevant to other social movements: mental health and well-being, pathological altruism, purchasing fuzzies and utilons separately.

The reasons for value drift away from EA seem as important for understanding the process as the value drift that led people to EA in the first place. E.g. in Joey's post, he gave an illustrative story of Alice. What could explain her value drift is that people during their first year of college are more prone to social pressure and the need for belonging. That could make her become an EA, and then drift away when she left college and her EA peers. So "surround yourself with value-aligned people" for the whole course of your life. That also stresses the importance of the untapped potential of local groups outside the main EA hubs. For this reason, they are worth considering even if, when it comes to outreach, we shouldn't rush to translate effective altruism.

About the data itself: we might be making wrong inferences in trying to explain those data, because they show only a fraction of the process. Maybe if we observed the curve of engagement over a longer period it would fluctuate, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3, and then coming back to 10%, 50%, etc. We might hypothesize that life situation influences baseline engagement for a short period (1 month to 3 years). Analogous to changes in the baseline of happiness after life events, as explained by hedonic adaptation, maybe we have something like altruistic adaptation, which changes after a significant life event (changing city, marriage, etc.) and then comes back to baseline.

Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean. If some of the EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and those from 10% to 1%. Anyhow, it seems more likely than not that the value drift is real.
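A toy simulation of the regression-to-the-mean point above (all distributions and numbers are illustrative assumptions, not estimates from Joey's data): each person has a stable "true" engagement level, each survey adds independent noise, and the people who look most engaged in the first survey score lower on average in the second, even though nobody's true engagement changed.

```python
import random


def simulate_regression(n: int = 10_000, seed: int = 0) -> tuple[float, float]:
    """Return the top decile's mean score in survey 1 and survey 2.

    True engagement is fixed per person; each survey adds noise.
    The top decile selected on survey 1 is partly just lucky, so
    its survey-2 mean regresses toward the population mean.
    """
    rng = random.Random(seed)
    true_levels = [rng.gauss(0.3, 0.1) for _ in range(n)]
    survey1 = [t + rng.gauss(0, 0.1) for t in true_levels]
    survey2 = [t + rng.gauss(0, 0.1) for t in true_levels]
    # Select the top decile as measured in survey 1 ...
    top = sorted(range(n), key=lambda i: survey1[i], reverse=True)[: n // 10]
    # ... and compare that same group's averages across both surveys.
    mean1 = sum(survey1[i] for i in top) / len(top)
    mean2 = sum(survey2[i] for i in top) / len(top)
    return mean1, mean2
```

The second mean comes out noticeably lower than the first while still sitting above the population mean, which is the pattern regression to the mean alone would produce in repeated engagement measurements.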

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the mid-tier trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> drop out).

Because the mechanism of value drift would determine the strategies to minimize its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80/20 of long-lasting commitment.

Comment author: Denise_Melchin 11 May 2018 06:29:53PM *  6 points [-]

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the mid-tier trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> drop out).

Doing effective altruistic things ≠ Doing Effective Altruism™ things

All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can't all work at the main EA orgs.

There are lots of highly impactful opportunities out there that aren't branded as EA - check out the career profiles on 80,000 Hours for reference. Academia, politics, tech startups, doing EtG in random places, etc.

We should be interested in having as high an impact as possible and not in 'performing EA-ness'.

I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an 'EA approved' workplace like DeepMind or Jane Street) - or nothing. That's counterproductive and sad.

A potential explanation: it's difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and 'status'.

As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction is working at EA orgs.
