Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points

"But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is"

Dangling sentence.

My personal belief is that the "hard AI takeoff" scenarios are driven mostly by the view that current AI progress flows largely from a single skill, namely mathematics/programming. On this view, while AI will continue to develop at disparate rates and reach superhuman performance in different areas at different times, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman at this skill it will rapidly become superhuman at all skills. This seems obvious to me, and I think disagreements with it must rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems almost circularly "AGI-complete").

Comment author: turchin 01 July 2017 05:56:33PM 1 point

Does this mean that we could try to control an AI by preventing it from knowing anything about programming?

And on the other hand, should any AI which is able to write code be regarded as extremely dangerous, no matter how weak its abilities in other domains?

Comment author: Daniel_Eth 07 April 2017 12:25:21AM 0 points

Looks like good work! My biggest question is: how would you get people to actually do this? I'd imagine there are a lot of people who would want to go to Mars, since that seems like a great adventure, but living in a submarine in case there's a catastrophe isn't something I think would appeal to many people, nor do I think many would want to fund the project.

Comment author: turchin 07 April 2017 07:11:11AM 3 points

If we require that people who want to go to Mars first serve a year on a refuge submarine, there will be a lot of volunteers, and we could choose the best.

Or we could recruit the crews the same way military crews are recruited: by combining prestige and salary.

Comment author: JacobLBryan 06 April 2017 10:57:28AM 0 points

When you get to scenario three, where a nuclear submarine is operated by a private non-governmental organization, I have to wonder about the precedent for governments allowing fissile material into private control, especially absent a lot of the governmental controls that existing power plants have in place.

(You have a typo in figure 1, years not tears.)

Comment author: turchin 06 April 2017 12:34:47PM 2 points

Thanks for the typo hint!

I think they should mostly operate under general government control. There are also several private companies licensed to build nuclear power plants, like Westinghouse, and the same companies could operate nuclear-powered ships and submarines.

Comment author: MikeJohnson 09 December 2016 05:50:20PM * 4 points

Yes, it would be quite civilizationally embarrassing to accidentally p-zombie ourselves... More generally, it seems valuable to understand the tradeoffs in consciousness. This seems to be an important component of any far-future planning.

Also, Andres has done some interesting exploratory work on defining the problem of future drug epidemics and discussing game-theoretic considerations.

Comment author: turchin 09 December 2016 07:36:47PM 1 point

If we make an AGI which doesn't have qualia, it will probably prove that no such thing exists and proceed to p-zombie us.

So it may be better to pursue a path to AGI which will probably provide it with qualia, and one such path is human upgrading.

Comment author: turchin 09 December 2016 02:41:53PM 6 points

I agree with your main premises about the importance of qualia, especially from the point of view of x-risks. For example, if humans are uploaded without qualia, they will become p-zombies.

If qualia of extreme pleasure could be created and transferred, a super-addictive drug epidemic would happen.

I have been thinking about qualia a lot and have some theories, but for now I am concentrating on another topic.

Comment author: turchin 19 November 2016 06:33:15PM * 2 points

My point of view: there were three possible outcomes of the election (an H win, a T win, or no clear result). I used to think the last was the worst outcome, as it would result in civil war, the end of progress, etc. It could still happen if the sides keep increasing their mutual animosity, but it has a very small probability.

The two other outcomes could each have both positive and negative effects on x-risks. The negative outcomes of a T win are listed in the article, and I agree with them. The positive outcomes could be the following: preventing nuclear war with Russia in the short term, and Thiel (who supports FAI) in the administration.

Negative outcome in case of an H win: a higher probability of war with Russia over a no-fly zone in Syria. Positive: everything else will stay the same.

Disclaimer: I would vote for H if I could.

It may also be interesting to compare Trump risk with the risks from other leaders of nuclear powers, including Putin, Xi, and Kim in North Korea, and maybe even Le Pen (if she wins) in France, Modi in India, and Brexit.

Comment author: turchin 19 November 2016 06:20:38PM 2 points

See also: "What a Trump Presidency Means for Human Survival: One Expert's Take" by Phil Torres:

http://ieet.org/index.php/IEET/more/torres20161114

Comment author: Denkenberger 13 November 2016 09:35:23PM 1 point

That might work, though people would probably prefer fishing. But after a 10 km diameter impact, there probably would not be many fish or much plankton.

Comment author: turchin 14 November 2016 12:53:40AM 0 points

But maybe some bacteria could still be suspended in the water, as well as some organics?

Comment author: Denkenberger 07 November 2016 01:16:07PM 1 point

Very nice! I would add another stage of defense: alternate foods. If we were actually prepared with these, then I don't think we would get civilizational collapse from a 1 km diameter impact, and maybe not even from 10 km.

Comment author: turchin 07 November 2016 03:06:36PM 0 points

I am now working on an article about submarines as possible refuges in case of a global catastrophe. One idea I had for provisioning it with food is filtering seawater to collect plankton, the same way whales do. What do you think about the feasibility of such an approach?

Comment author: Denkenberger 30 October 2016 12:30:59PM 2 points

We have tried a little to engage with preppers. We wrote about them in our book. We know that all but the most extreme have only about one year of food storage, which would not last through a five- or ten-year nuclear winter. They tend to be focused on their families and local communities. Some of them are concerned about non-science-based risks. I did give a webinar to the American Preppers Network emphasizing the alternate foods that could be produced on a household scale (and more cheaply than food storage). They could help by testing out alternate foods, but I have never heard of any of them doing that.

Comment author: turchin 31 October 2016 10:33:04PM 1 point

How long is your home supply? Mine is something like 5-10 days, but I carry a month's supply in my belly )))

There is also the flutrackers.com community, which discusses stockpiling in case of a flu pandemic, but you probably know about them.

Eleven of my relatives were starving in Saint Petersburg during WW2, so I grew up on the legends about it.
