[Paper] Preventing Supervolcanic Eruptions

Supervolcanic eruptions are approximately 10 times as powerful as the eruption that caused the "year without a summer" in 1816. A supervolcanic eruption would cause local devastation, but the main problem is that it would block the sun for years, causing starvation (with possible loss of civilization without recovery and other far-future…
Comment author: Denkenberger 15 January 2018 03:37:04AM 0 points

Thanks - they make sense. Do you think I followed them here?

Comment author: RyanCarey 14 January 2018 12:01:42PM 2 points

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.

Comment author: Denkenberger 14 January 2018 02:16:40PM 4 points

It is hard to encapsulate this all into a simple scale, but we wanted to recognize that false vacuum decay that would destroy the Universe at light speed would be worse than bad AI, at least if you think the future will be net positive. Bad AI could be constrained by a more powerful civilization.

Comment author: Denkenberger 04 January 2018 03:41:31PM 1 point

This piece combines AI and a cause that has attracted some EA interest: reversing aging. When Turchin says that medical AI will be safer, he means that an AGI that grows out of medical narrow AI is likely to be safer than an AGI that grows out of a military, other government agency, private company, or open effort. Note that I am a coauthor.

Comment author: JacobTref 30 December 2017 11:57:02PM 0 points

This is a useful write-up, thank you for this and your previous posts on alternate foods.

Do you know which US government departments are working on food in catastrophes, if any? DHS? FEMA? USDA (e.g. https://www.usda.gov/topics/disaster)? I assume that in catastrophes government coordination on food is key - the closest parallel that comes to mind is rationing in the UK after WW2. I'd be interested if there's anything public to read about departments' budgets for things like this, or how they're thinking about it.

Comment author: Denkenberger 31 December 2017 02:30:25AM 0 points

Good question. We have spent some time trying to find the most appropriate parts of the US and UK governments. USDA looks at smaller disasters, as do FEMA and DHS. The Cold War-era civil defense agency evolved into FEMA, but FEMA no longer considers conflict scenarios. DOD is a possibility. The UK Foreign Office put out a report estimating roughly an 80% likelihood of a 10% global agricultural shortfall this century. We have found very few people thinking about what to do if there were a 10% or 100% agricultural shortfall.

Comment author: Denkenberger 30 December 2017 07:18:29PM 2 points

Thanks! Did you include Roman Yampolskiy in GCRI? He has been publishing a lot (Kaj mentions just one example).

Comment author: Ben_West 21 December 2017 10:54:17PM 1 point

Thanks for writing this! This is a very interesting idea.

Do you have thoughts on "learning" goals for the next year? E.g., might you find that a certain valuable food source takes significantly more or less effort than expected? Or could you learn of a non-EA funding source (e.g. government grants) that would make you significantly more impactful? I'm mostly interested in the $10,000 order of magnitude, if that's relevant.

Also: do you think your research could negatively impact animal welfare in the event that a global catastrophe does not occur? E.g., could you recommend a change to fishing practices that is implemented before a catastrophe and that increases the number of farmed fish or changes their quality of life?

Comment author: Denkenberger 30 December 2017 01:07:02AM 0 points

Thanks! Yes, it is possible we will find new food sources. Some I have not yet been able to analyze include bacteria that eat plastic, bacteria that run on electricity, and direct chemical synthesis of food. We are actively pursuing non-EA grants and foundations. Generally, we would not recommend changing much about how things are done today, because that could run into billions of dollars of cost. So I doubt we would negatively impact animal welfare if the catastrophe did not occur.

Comment author: Denkenberger 19 December 2017 06:13:17PM 2 points

Thanks for letting us all know! I put Alliance to Feed the Earth in Disasters (ALLFED) into the running. It's great to see so much support for EA in the discussion.

Comment author: Denkenberger 08 December 2017 01:51:50AM 1 point

One problem is that, with current technology, preventing extreme climate change is quite expensive. Emissions reductions would cost trillions of dollars. Even solar radiation management (a type of geoengineering) would cost tens of billions of dollars; depending on the type, it could also result in rapid warming if turned off by another catastrophe, causing a double catastrophe. But there are adaptation techniques that are far cheaper (~$100 million). And since these techniques protect against many other catastrophes, I'm pretty sure they are far more cost effective than preventing extreme climate change. It would be interesting to compare different interventions quantitatively in your model.
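The cost gap described above spans roughly four orders of magnitude. A minimal sketch of the comparison, using only the approximate figures from the comment (these are illustrative round numbers, not precise estimates, and cost ratios alone do not settle cost effectiveness, which also depends on how much protection each intervention buys):

```python
# Rough order-of-magnitude cost comparison of the interventions
# mentioned above. Figures are the approximate ones from the comment:
# trillions for emissions reductions, tens of billions for solar
# radiation management, ~$100 million for adaptation techniques.
costs = {
    "emissions reductions": 1e12,
    "solar radiation management": 1e10,
    "adaptation techniques": 1e8,
}

baseline = costs["adaptation techniques"]
for name, cost in costs.items():
    # Express each intervention's cost as a multiple of adaptation.
    print(f"{name}: ~${cost:,.0f} ({cost / baseline:,.0f}x adaptation)")
```

On these assumed figures, adaptation is roughly 100x cheaper than solar radiation management and roughly 10,000x cheaper than emissions reductions, which is why even a modest protective benefit could make it competitive.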


How you can save expected lives for $0.20-$400 each and reduce X risk

    Summary: The Alliance to Feed the Earth in Disasters (ALLFED) is a new EA-aligned charity with potential for high cost effectiveness in the global poverty and existential risk spaces. I have posted on the EA forum before about getting prepared for alternate foods (roughly those not…
