Comment author: janklenha 18 March 2018 01:43:33PM 1 point

Thanks for the feedback, this is very helpful!

EA vs. CCC values: I think of prioritization as a three-step process: choosing a cause, then an intervention, then a specific organization. EA, 80,000 Hours, and Global Priorities are focused especially on choosing causes (the most "meta" activity), while GiveWell and other charity evaluators focus on the third step, recommending organizations. The Copenhagen Consensus approach can be seen as a compatible middle step in this process: prioritizing among possible interventions and solutions (hopefully more and more in the highest-impact areas, making it increasingly compatible with EA).

Discount rates: Yes, the Copenhagen Consensus uses discount rates (3% and 5% in previous projects), I would argue mainly because of uncertainty about the future and our comparative advantage in solving current issues. We are always open to discussing this with EA, especially for projects in more developed countries.

X-risks: Our projects are done in three steps that we value roughly equally: 1) stakeholder research (gathering 1,000+ policy ideas and choosing the top 60-80 interventions to analyse), 2) cost-benefit analyses of those interventions, and 3) media dissemination and public advocacy for the top priorities. I would expect x-risks to be considered in research done in a developed country rather than in Bangladesh, Haiti, or India. Interventions reducing x-risks will definitely be among the 1,000+ policy ideas, and even if they don't make it into the 60-80 interventions analysed, thinking about low-probability, extreme-impact effects is certainly something that should be included in all relevant cost-benefit analyses.

Meta: It is substantially difficult to reasonably calculate very broad long-term benefits. The cost-benefit ratio of "improving institutional decision-making" would be almost impossible to calculate, but we will assess our own impact, and since this is exactly our goal, some interesting data might come up. It would also be helpful to analyse partial interventions, such as anti-corruption or transparency measures, which should lead to better institutional decision-making. There are other interventions with long-term effects that might make it to the final round and that EA would probably agree on, such as programs in mental health, giving homes, consumption of animal products (e.g. removing subsidies for factory farms), antibiotic resistance, etc.

Advocacy challenges: The project, of course, intends to say the most true things. If some of the top priorities are difficult to implement, politicians will simply choose not to pay attention to them, but at least public awareness will be created. I don't think there will be any extremely controversial interventions (in AI safety, for example) that would make us consider not publishing them to protect the whole project from being ridiculed and discredited.

Public sentiment and preventing authoritarianism: Yes, we expect public sentiment to be a key driving factor for change (along with roundtables and presentations to politicians, political parties, and members of the budget committee), more so than in third-world countries. We are in touch with local media that are influential in shaping public opinion. Implementing the best interventions would have great effects, but even if they are not implemented, we hope to move public discussion (which is always a conflict between different world-views) to a somewhat more rational level, and to open as many eyes and educate as many minds as possible to think about the bigger picture. That seems to be a good way to fight irrational populism, which has all sorts of bad impacts on society.

Importance of robust policies vs. acceptable policies: This possible trade-off should be weighed in each analysis: the researcher should consider whether the specific intervention would have the most impact by making the connected policies more robust, or whether the most impact can be achieved by, for example, increasing the funding for the intervention slowly, so that it is acceptable to all sides and works best in the long run. This should ideally be encompassed in each cost-benefit ratio.

Preventing bad policies vs. improving good ones: We will look for policies that can have either of these effects, but we are not specifically looking for existing bad policies. Improving good ones is not the goal either; we want to find policies that have great effects per koruna but occupy an unfairly bad position in the current equilibrium: they might be unpopular, underfunded, not yet well understood, or not known to the public.

Sure, you can follow our website or hit me up via email at

Comment author: Denkenberger 20 March 2018 01:43:16AM 0 points

I assume this is a 3% real (inflation-adjusted) discount rate. That is actually similar to global per capita income growth. Since a dollar is worth less to a rich person (logarithmic utility), EAs are generally okay with discounting at the economic growth rate. This means we are valuing the utility of future generations the same as that of present generations.
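A minimal sketch of why log utility plus growth-rate discounting weights generations equally, in Ramsey-rule terms (the notation here is mine, not from the comment):

```latex
% Log utility: the marginal utility of consumption falls as 1/c.
u(c) = \ln c \quad\Rightarrow\quad u'(c) = \frac{1}{c}
% If per capita consumption grows at rate g, then c_t = c_0 e^{g t},
% so a marginal dollar at time t is worth u'(c_t) = e^{-g t}/c_0 in utility terms.
% Ramsey rule: \rho = \delta + \eta g. With log utility (\eta = 1) and zero pure
% time preference (\delta = 0), the monetary discount rate is \rho = g \approx 3\%,
% i.e. future generations' utility is not discounted at all.
```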

Comment author: Denkenberger 17 March 2018 03:45:09PM 0 points

I'm glad to see you are investigating present-generation causes with significant uncertainty, which seem to receive less attention. Any interest in investigating preparation for agricultural catastrophes?

Comment author: Denkenberger 25 February 2018 04:57:42PM 1 point

Impressive article; I especially liked the biases section. I would recommend building a quantitative cost-effectiveness model comparing it to AIA, as I have done for global agricultural catastrophes, especially because neglectedness is hard to define in your case.

Comment author: Denkenberger 25 February 2018 02:14:34PM 2 points

This sounds like a good idea; I think in-person contact could counteract attrition. Could you clarify how this interacts with the next round of the more general EA Grants (e.g. allocated amounts of money and timing)?

Comment author: Denkenberger 15 February 2018 12:30:24AM 0 points

I again donated half my income to the Alliance to Feed the Earth in Disasters (ALLFED).

Comment author: Denkenberger 12 February 2018 01:01:10PM 2 points

These are good points. There are a number of "global systemic risk" institutes around the world, and many of them do focus on financial risk. But I would guess they would be more concerned with two and four, and would not have one or three on their radar. I'm not aware of any EAs working on this.

Comment author: Denkenberger 09 February 2018 12:46:36AM 4 points

Wow, impressive piece of work. This is longer than most journal articles; maybe a record for the EA Forum? You had good links to other people discussing the long-term impacts of global catastrophic risk mitigation. I think a major impact of alternate foods is to make the collapse of civilization less likely, meaning less of the nastiness that could get locked into AI. But of course preventing the risk entirely would mean none of that nastiness. It sounds like you're talking about a different effect: civilization could still be lost, but alternate foods would mean that the minimum population would be higher, meaning a faster recovery. Some of the alternate foods require civilization, but some could be done on a small scale, so this is possible. In this case, alternate foods would still reduce the nastiness, but because recovery would be quicker, I guess it is more likely that the nastiness would not decay out before we get AI.

Comment author: HaukeHillebrandt 29 January 2018 11:48:47AM 0 points

I take your point. I'm inclined to agree with you that ALLFED should be prioritized over this, given that you're the expert. But let's say you're fully funded and we gave you more money to regrant in this cause area: would you give it to these people for more research or for outreach? If not, where?

Comment author: Denkenberger 01 February 2018 02:42:15PM 2 points

Good question. Regranting from ALLFED of up to around $100 million would go to existing research labs to research and develop alternate foods, as well as to do planning. I mentioned elsewhere on this page that there are catastrophes that could disrupt the global electricity grid, meaning we could not pull fossil fuels out of the ground, and thus the loss of industrial civilization. These catastrophes include an extreme solar storm, multiple high-altitude detonations of nuclear weapons causing electromagnetic pulse, and a coordinated cyber attack. My preliminary estimate is that $100 million could dramatically increase our resilience to these catastrophes. Beyond that, I think there are a number of very neglected failure modes of AI that fall between mass unemployment and AGI/superintelligence, something I would call global catastrophic AI. An example of this is that the coordinated cyber attack mentioned above could take the form of a narrow-AI computer virus. But there are a number of other risks, and Alexey Turchin and I are outlining them in a paper we hope to publish soon. Work on prevention of these types of risks could be a high priority not just because they are neglected, but also because they could happen sooner than AGI. I also think a lot of meta-EA work is high-leverage.

Comment author: HaukeHillebrandt 31 January 2018 12:56:10PM 0 points

I take your point that because AI safety is somewhat more neglected at the cause level, it scores better on the ITN framework (I actually think all of military spending is kind of tangled up in the nuclear security scale/tractability, and so maybe it would actually score worse than climate change).

In any case, given that this research has a net present value of $10 trillion and would also liberate funding and talent to go to other causes, I think it is still worth considering and might be better on the margin than a mediocre AI safety grant.

Also, note that I have written this list explicitly so that there is some flexibility in what one can pitch to different donors, who might care particularly about climate change as a cause. Within climate change, I believe this might be a particularly good research area to fund, even before geoengineering projects.

Comment author: Denkenberger 01 February 2018 02:08:02PM 1 point

Actually, the mitigation strategies for nuclear war risk I was referring to were alternate foods, which fare much better on the ITN framework than climate change.

But it is very interesting to think about your point of freeing up money for other causes. I have shown that the return on investment of alternate foods in terms of saving lives (using a value of statistical life) is something like 100% to 40,000,000%. I have done some unpublished calculations on what the actual monetary return on investment might be. The origin of the monetary return is that if we do not have alternate foods, the price of stored food would become extremely high, and I estimate a total expenditure of $90 trillion. With alternate foods, even though many more people would be fed, the total expenditure on food would be much lower. So I was thinking it might be possible for EAs to fund alternate foods in exchange for a huge sum of money if a global agricultural catastrophe did occur and alternate foods saved governments a lot of money (because they would likely be footing much of the bill to allow some people to afford food). Of course, it is not guaranteed that a global agricultural catastrophe will occur before artificial intelligence becomes dominant. But if it does, we could potentially turn tens of millions of dollars now into hundreds of billions of dollars that could be used for AI or other causes (and even if governments did not give us some fraction of the value of our services, we would still be saving many lives and reducing the chance of loss of civilization). It looks like the return on investment of alternate foods from the monetary perspective would be even greater than from the perspective of saving lives. A rough sketch of that expected-value arithmetic is below.
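A back-of-the-envelope sketch of the kind of expected-value calculation implied here; every number below is a hypothetical placeholder, not a figure from the unpublished calculations:

```python
# Hypothetical placeholders, not figures from the unpublished calculations.
investment = 50e6            # upfront alternate-foods funding, USD ("tens of millions")
conditional_payout = 300e9   # what governments might repay if a catastrophe occurs
                             # and alternate foods save them far more ("hundreds of billions")
p_catastrophe = 0.1          # assumed chance the catastrophe happens before AGI dominance

# Expected payout and the multiple on the original investment.
expected_payout = p_catastrophe * conditional_payout
multiple = expected_payout / investment
print(f"Expected payout: ${expected_payout:,.0f} (~{multiple:,.0f}x the investment)")
```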

I agree about being able to pitch to multiple donors, which is one reason I point out that alternate foods are a cost-effective way of addressing abrupt regional climate change and extreme global climate change (slow increase of more than 5°C).

Comment author: Denkenberger 30 January 2018 10:14:41PM 1 point

"Climate change: Fund the authors of this paper on the $10 trillion value of better information about the transient climate response. More on Value of information."

Interesting paper. The reason the information is so valuable is that they are talking about spending ~$100 trillion on emissions reductions. Since we are only talking about spending a few billion dollars on AI, or $100 million on mitigation strategies for nuclear war, and because these risks are significantly bigger than climate change, this shows you how much lower a priority climate change is. Solar radiation management (a type of geoengineering), which you refer to, can be much cheaper, but it still cannot compete (and it potentially poses its own risks).
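The scale comparison driving this argument, sketched with the dollar figures quoted in this thread (the AI figure is an illustrative midpoint of "a few billion"):

```python
# Dollar figures quoted in this thread; the share is just their ratio.
climate_spend = 100e12   # ~$100 trillion contemplated for emissions reductions
climate_voi = 10e12      # ~$10 trillion value of better climate information
ai_spend = 3e9           # "a few billion dollars" on AI (illustrative midpoint)
nuclear_spend = 100e6    # ~$100 million on nuclear war mitigation strategies

# The information is worth ~10% of the spending it could redirect; budgets for
# the (arguably larger) AI and nuclear risks are orders of magnitude smaller.
print(f"VOI share of climate spending: {climate_voi / climate_spend:.0%}")
print(f"Climate spending is {climate_spend / ai_spend:,.0f}x the AI spending")
print(f"Climate spending is {climate_spend / nuclear_spend:,.0f}x the nuclear spending")
```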
