Comment author: Denkenberger 30 October 2016 12:30:59PM 2 points [-]

We have tried a little to engage with preppers. We wrote about them in our book. We know that all but the most extreme only have about one year of food storage, which would not last through a five- or ten-year nuclear winter. They tend to be focused on their families and local communities. Some of them are concerned about non-science-based risks. I did give a webinar to the American Preppers Network emphasizing the alternate foods that could be produced on a household scale (and more cheaply than food storage). They could help by testing alternate foods, but I have never heard of any of them doing so.

Comment author: turchin 31 October 2016 10:33:04PM 1 point [-]

How long would your home supply last? Mine is something like 5-10 days, but I carry a month's supply in my belly )))

There is also the flutrackers.com community, which discusses stockpiling in case of a flu pandemic, but you probably know about them.

Eleven of my relatives were starving in Saint Petersburg (Leningrad) during WW2, so I grew up on the legends about it.

Comment author: casebash 24 October 2016 07:32:58PM 2 points [-]

This is excellent work. I'm curious, how long did this take you? Are you just doing this by yourself or are you a researcher?

Comment author: turchin 24 October 2016 08:03:11PM 3 points [-]

I wrote a couple of books in Russian about x-risks several years ago, and it took several years to arrive at most of the ideas in this and other maps. But I decided to completely rewrite what I did in the books and turn it into maps, since that is a more contemporary form of presenting information.

Each map took around one week to make, working 6 hours a day. I have made around 30 maps and plan more.

I work at the Foundation for Life Extension, based in Moscow, and I think of myself as a researcher. I have several academic publications on x-risk topics in Russian journals.

My other maps are here: http://immortality-roadmap.com/sample-page/

Comment author: casebash 24 October 2016 07:29:12PM 2 points [-]

I wonder whether it would be worth the effort for EA to do outreach to preppers/survivalists to spread ideas about existential risk.

Comment author: turchin 24 October 2016 07:33:47PM 1 point [-]

Maybe they are well aware of the risks and are already doing what they can? But it seems they are preparing for the wrong catastrophes. If they collectively crowdfunded MIRI or something like it, that would be a substantial help.

Comment author: Denkenberger 24 October 2016 01:34:17PM 2 points [-]

Nice! Though preppers are a less extreme version of survivalists, there are supposedly about 3 million preppers in the US alone. In independent shelters, food should be produced by chemical synthesis, not photosynthesis: http://sethbaum.com/ac/2015_Refuges.html Also, if underground, the heat could be dissipated into the surrounding rock for decades, or into flowing groundwater.
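As a back-of-the-envelope sketch of the rock-conduction point (the numbers here are illustrative assumptions, not taken from the linked paper): for steady conduction from a roughly spherical cavity of radius R into rock of thermal conductivity k, the dissipated power is about

    Q ≈ 4·π·k·R·ΔT

With k ≈ 3 W/(m·K) for granite, R = 5 m, and a tolerable temperature rise ΔT = 20 K, this gives Q ≈ 3.8 kW, on the order of the waste heat of a small crew plus their equipment, so decades of passive dissipation look at least plausible.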

Comment author: turchin 24 October 2016 02:00:47PM 1 point [-]

Thanks for the ideas. I will add preppers into the map.

What do you think of the negative side of refuges? They could be used by terrorists who want to exterminate everyone else on Earth, or as military command centers and places to preserve WMDs.


The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention)

This map is part of the map “Plan of action to prevent human extinction risks”. It zooms in on Plan B of x-risk prevention. The main idea of the map: there are many ways to create an x-risk shelter, but they have only marginal utility...
Comment author: So8res 12 October 2016 08:56:02PM 6 points [-]

As Tsvi mentioned, and as Luke has talked about before, we’re not really researching “provable AI”. (I’m not even quite sure what that term would mean.) We are trying to push towards AI systems where the way they reason is principled and understandable. We suspect that that will involve having a good understanding ourselves of how the system performs its reasoning, and when we study different types of reasoning systems we sometimes build models of systems that are trying to prove things as part of how they reason; but that’s very different from trying to make an AI that is “provably X” for some value of X. I personally doubt AGI teams will be able to literally prove anything substantial about how well the system will work in practice, though I expect that they will be able to get some decent statistical guarantees.
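To give a concrete toy example of what a statistical guarantee could look like (a sketch of the general idea, not a description of any actual MIRI result): if a system is run on n independent test episodes drawn from the deployment distribution and the observed failure rate is p̂, Hoeffding's inequality bounds the true failure probability p:

    P( p > p̂ + ε ) ≤ exp(−2·n·ε²)

With n = 10,000 clean episodes (p̂ = 0) at 95% confidence, this gives p < 0.0122. Crucially, the bound covers only the distribution you tested on; it says nothing about novel situations, which is one reason statistical guarantees fall short of proofs.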

There are some big difficulties related to the problem of choosing the right objective to optimize, but currently, that’s not where my biggest concerns are. I’m much more concerned with scenarios where AI scientists figure out how to build misaligned AGI systems well before they figure out how to build aligned AGI systems, as that would be a dangerous regime. My top priority is making it the case that the first AGI designs humanity develops are the kinds of system it’s technologically possible to align with operator intentions in practice. (I’ll write more on this subject later.)

Comment author: turchin 12 October 2016 10:51:42PM 0 points [-]

Thanks! Could you link to where you will write about this subject later?

Comment author: Marylen 12 October 2016 05:27:08AM *  5 points [-]

I believe that the best and biggest system of morality so far is the legal system. It is an enormous database in which the fairest of men have built on the wisdom of their predecessors to balance fairness against chaos, and in which bad or obsolete judgements are weeded out. It is a system of prioritisation of laws which could be encoded one day. I believe it would be a great tool for addressing corrigibility and value learning. I'm a lawyer, and I'm afraid that MIRI may not understand the full potential of the legal system.

Could you tell me why the legal system would not be a great tool for addressing corrigibility and value learning in the near future?

I describe in a little more detail how I think it could be useful at: https://docs.google.com/document/d/1eRirDom-EA_CtLD9Q5T9hWLD6xEKJ80AL3h_u7K-ErA/edit?usp=sharing
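As one hypothetical sketch of how such a prioritisation could be encoded (all names here are invented for illustration, and jurisdictions order these maxims differently): the three classical conflict-of-laws maxims (lex superior, lex specialis, lex posterior) can be expressed as a simple comparator over rules.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        text: str
        rank: int         # lex superior: constitution=2, statute=1, regulation=0
        specificity: int  # lex specialis: more specific beats more general
        year: int         # lex posterior: later beats earlier

    def resolve(a: Rule, b: Rule) -> Rule:
        """Pick the prevailing rule when two rules conflict,
        applying the maxims in one conventional order of precedence."""
        if a.rank != b.rank:                # lex superior derogat legi inferiori
            return a if a.rank > b.rank else b
        if a.specificity != b.specificity:  # lex specialis derogat legi generali
            return a if a.specificity > b.specificity else b
        if a.year != b.year:                # lex posterior derogat legi priori
            return a if a.year > b.year else b
        raise ValueError("genuine conflict: needs adjudication")

    general = Rule("vehicles may not enter the park", rank=1, specificity=0, year=1990)
    special = Rule("ambulances may enter the park", rank=1, specificity=1, year=1985)
    print(resolve(general, special).text)  # -> "ambulances may enter the park"

Even this toy version makes the open problem visible: the hard part is not applying the priority rules but assigning rank and specificity to real-world norms.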

Comment author: turchin 12 October 2016 09:16:40PM -1 points [-]

Agree

Comment author: turchin 12 October 2016 10:47:39AM 1 point [-]

One thing always puzzles me about provable AI. If we are able to prove that an AI will do X and only X after unlimitedly many generations of self-improvement, it is still not clear how to choose the right X.

For example, we could be sure that a paperclip maximizer will still make paperclips after a billion generations.
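The structure of such a proof would presumably be a simple induction. A minimal sketch, writing A_n for the n-th generation, S for the self-improvement step, and P for the preserved property:

    P(A_0)  and  ∀n: P(A_n) → P(S(A_n))  together give  ∀n: P(A_n)

The induction guarantees that P survives every generation, but it is completely silent on whether P ("makes paperclips") was the right property to lock in.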

So my question is: what are we proving about provable AI?

Comment author: turchin 11 October 2016 10:24:04PM 5 points [-]

If you got credible evidence that AGI will be created by Google in the next 5 years, what would you do?

Comment author: turchin 11 October 2016 10:05:21PM 7 points [-]

If you find and prove the right strategy for FAI creation, how will you implement it? Will you send it to all possible AI creators, try to build your own AI, or ask the government to pass it as law?
