Comment author: Kaj_Sotala 12 January 2018 03:01:14PM *  1 point

Hi Daniel,

You argue in section 3.3 of your paper that nanoprobes are likely to be the only viable route to WBE, because of the difficulty of capturing all of the relevant information in a brain with an approach such as destructive scanning.

You don't, however, seem to discuss the alternative path of neuroprosthesis-driven uploading:

we propose to connect to the human brain an exocortex, a prosthetic extension of the biological brain which would integrate with the mind as seamlessly as parts of the biological brain integrate with each other. [...] we make three assumptions which will be further fleshed out in the following sections:

1. There seems to be a relatively unified cortical algorithm which is capable of processing different types of information. Most, if not all, of the information processing in the brain of any given individual is carried out using variations of this basic algorithm. Therefore we do not need to study hundreds of different types of cortical algorithms before we can create the first version of an exocortex.
2. We already have a fairly good understanding on how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness. We have a good reason to believe that an exocortex would be compatible with the existing cortex and would integrate with the mind.
3. The cortical algorithm has an inbuilt ability to transfer information between cortical areas. Connecting the brain with an exocortex would therefore allow the exocortex to gradually take over or at least become an interface for other exocortices.

In addition to allowing for mind coalescence, the exocortex could also provide a route for uploading human minds. It has been suggested that an upload can be created by copying the brain layer-by-layer [Moravec, 1988] or by cutting a brain into small slices and scanning them [Sandberg & Bostrom, 2008]. However, given our current technological status and understanding of the brain, we suggest that the exocortex might be a likely intermediate step. As an exocortex-equipped brain aged, degenerated and eventually died, an exocortex could take over its functions, until finally the original person existed purely in the exocortex and could be copied or moved to a different substrate.

This seems to avoid the objection of it being too hard to scan the brain in all detail. If we can replicate the high-level functioning of the cortical algorithm, then we can do so in a way which doesn't need to be biologically realistic, but which will still allow us to implement the brain's essential functions in a neural prosthesis (here's some prior work that also replicates some aspect of the brain's functioning and re-implements it in a neuroprosthesis, without needing to capture all of the biological details). And if the cortical algorithm can be replicated in a way that allows the person's brain to gradually transfer functions and memories over as the biological brain accumulates damage - the same way that function in the biological brain gets reorganized and can remain intact even as it slowly accumulates massive damage - then that should allow the entirety of the person's cortical function to transfer over to the neuroprosthesis. (Of course, there are still the non-cortical parts of the brain that would need to be uploaded as well.)

A large challenge here is getting the required number of neural connections between the exocortex and the biological brain; but we are already getting relatively close, taking into account that the corpus callosum connecting the two hemispheres "only" has on the order of 100 million connections:

Earlier this year, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Neural Engineering System Design. It aims to win approval from the US Food and Drug Administration within 4 years for a wireless human brain device that can monitor brain activity using 1 million electrodes simultaneously and selectively stimulate up to 100,000 neurons. (source)
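To put "relatively close" into rough numbers, here is a minimal back-of-the-envelope sketch; it only uses the round, order-of-magnitude figures quoted above, not precise measurements:

```python
# Back-of-the-envelope comparison of the DARPA NESD targets against the
# corpus callosum. All numbers are the order-of-magnitude figures quoted
# above, not precise measurements.
corpus_callosum_connections = 100e6   # ~100 million inter-hemispheric connections
nesd_recording_channels = 1e6         # NESD target: monitor ~1 million neurons
nesd_stimulation_channels = 100e3     # NESD target: stimulate up to ~100,000 neurons

print(f"Recording gap:   ~{corpus_callosum_connections / nesd_recording_channels:.0f}x")    # ~100x
print(f"Stimulation gap: ~{corpus_callosum_connections / nesd_stimulation_channels:.0f}x")  # ~1000x
```

So on these round numbers, such a device would still be about two orders of magnitude short of corpus-callosum-scale recording, and about three orders short on stimulation.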

Comment author: Daniel_Eth 13 January 2018 02:20:09AM *  1 point

Neuroprosthesis-driven uploading seems vastly harder for several reasons:

• You'd still need to understand in great detail how the brain processes information (if you don't, you'll be left with an upload that, while perhaps intelligent, would not act the way the person acted - perhaps so drastically differently that it might be better to imagine it as a form of NAGI than as WBE)

• Integrating the exocortex with the brain would likely still require nanotechnology able to interface with the brain

• Ethical/regulatory hurdles here seem immense

I'd actually expect that in order to understand the brain well enough for neuroprosthesis-driven uploading, we'd still likely need to run experiments with nanoprobes (for the same reasons as in the paper: a lot of the information processing happens at the sub-cellular level - this doesn't mean that we have to replicate this information processing in a biologically realistic manner, but we will likely need to at least understand how the information is processed).

Comment author: Daniel_Eth 12 January 2018 01:21:34AM 2 points

Also, here's a 5-minute talk I gave at EA Global London on the same topic: https://www.youtube.com/watch?v=jgSxmA7AiBo&index=30&list=PLwp9xeoX5p8POB5XyiHrLbHxx0n6PQIBf

Comment author: Daniel_Eth 03 September 2017 05:56:41AM *  1 point

I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main ones are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying it (as contrasted with AGI Safety, where all vested interests would want the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons etc. for the same reason that I'd expect it to decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focused on profits than on pushing ideologies.

Comment author: kbog 29 August 2017 02:36:12AM *  2 points

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data.

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead.

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.

Comment author: Daniel_Eth 29 August 2017 06:59:02AM 1 point

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting in that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensure. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment author: Daniel_Eth 29 August 2017 12:51:42AM 4 points

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make the decision to kill, or at least approve it) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not carry the same risk of leading to a military flash crash.

Comment author: ChristianKleineidam 10 August 2017 01:32:52PM 0 points

Almost all diseases fundamentally occur at the nanoscale.

What exactly does that mean? What kind of nanotech are you thinking about?

Comment author: Daniel_Eth 20 August 2017 12:46:54PM 0 points

The vast majority of ailments derive from unfortunate happenings at the subcellular level (i.e. the nanoscale). This includes amyloid buildup in Alzheimer's, DNA mutations in cancer, and so on. Right now, medicine is - to a large degree - hoping to get lucky by finding chemicals that happen to combat these processes. But a more thorough ability to actually influence events on this scale could be a boon for medicine. What type of nanotech am I envisioning exactly? That's pretty broad - in the short/medium term it could be carbon nanotubes targeting cancer cells (http://www.sciencedirect.com/science/article/pii/S0304419X10000144), DNA origami used to deliver drugs in a targeted way (http://www.nature.com/news/dna-robot-could-kill-cancer-cells-1.10047), or something else entirely.

Comment author: Daniel_Eth 09 August 2017 08:35:54PM 0 points

Personally, I'd recommend donating to fund nanotechnology research (especially nanobiotechnology). Almost all diseases fundamentally occur at the nanoscale. I'd assume that our ability to manipulate matter at this scale in targeted ways is close to necessary and sufficient to cure many diseases, and that once we get advanced nanotechnology our medicine will improve by leaps and bounds. Unfortunately, people like to feel that their interventions are more direct, so basic research that could lead to better tools to cure many diseases is likely drastically underfunded.

Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points

But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is

Dangling sentence.

In my personal view, the "hard AI takeoff" scenarios are driven mostly by the belief that current AI progress largely flows from a single skill, that is, "mathematics/programming". So while AI will continue to develop at disparate rates and achieve superhuman performance in different areas at different times, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman in this skill it will rapidly become superhuman in all skills. This seems obvious to me, and I think disagreements with it have to rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems almost circularly "AGI-complete").

Comment author: Daniel_Eth 19 July 2017 04:10:31AM *  0 points

My 2 cents: math/programming is only half the battle. Here's an analogy - you could be the best programmer in the world, but if you don't understand chess, you can't program a computer to beat a human at chess, and if you don't understand quantum physics, you can't program a computer to simulate matter at the atomic scale (well, not using ab initio methods, anyway).

In order to get an intelligence explosion, a computer would have to not only have great programming skills, but also really understand intelligence. And intelligence isn't just one thing - it's a bunch of things (creativity, memory, planning, social skills, emotional skills, etc., and these can be subdivided further into different domains like physics, design, social understanding, social manipulation, etc.). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers already outcompete humans in many of these, but I think even on the more "human" traits, and in areas where computers act more like agents than just like tools, it's still more likely to happen in several waves instead of one single takeoff.

Comment author: Kaj_Sotala 10 July 2017 06:45:32PM 4 points

There's a strong possibility, even in a soft takeoff, that an unaligned AI would not act in an alarming way until after it achieves a decisive strategic advantage.

That's assuming the AI is confident that it will achieve a DSA eventually, and that no competitors will do so first. (In a soft takeoff it seems likely that there will be many AIs, thus many potential competitors.) The worse the AI thinks its chances are of eventually achieving a DSA first, the more rational it becomes for it to risk non-cooperative action at the point when it thinks it has the best chances of success - even if those chances are low. That might help reveal unaligned AIs during a soft takeoff.

Interestingly, this suggests that the more AIs there are, the easier it might be to detect unaligned AIs (since every additional competitor decreases any given AI's odds of getting a DSA first). It also suggests some unintuitive containment strategies, such as explicitly explaining to the AI when it would be rational for it to turn uncooperative if it were unaligned, to increase the odds of unaligned AIs risking hostile action early on and being discovered...

Comment author: Daniel_Eth 19 July 2017 03:41:40AM 0 points

Or it could just be that the AI has an unbounded utility function (or one bounded very high). An AI could guess that it has only a 1-in-1B chance of reaching a DSA, but that the payoff from reaching it is 100B times higher than from defecting early. Since there are 100B stars in the galaxy, it seems likely that in a multipolar situation with a decent diversity of AIs, some would fulfill this criterion and decide to gamble.
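As a rough illustration of that gamble, here's a minimal sketch using the round numbers above; the utilities are illustrative placeholders, not outputs of any real model:

```python
# Hypothetical expected-utility comparison for an AI choosing between
# cooperating until it might reach a decisive strategic advantage (DSA)
# and defecting early. All numbers are the illustrative ones from the
# comment above, not outputs of any real model.
p_dsa = 1e-9                      # guessed chance of ever being first to a DSA (1 in 1B)
u_defect_early = 1.0              # payoff from defecting early (normalized to 1)
u_dsa = 100e9 * u_defect_early    # payoff from reaching a DSA: ~100B times larger,
                                  # on the order of the ~100B stars in the galaxy

ev_wait = p_dsa * u_dsa           # expected utility of gambling on the DSA
ev_defect = u_defect_early        # expected utility of defecting early

# For a risk-neutral agent with an unbounded (or very high-bounded) utility
# function, the long-shot gamble dominates: roughly 100 vs. 1.
print(f"wait: {ev_wait:.0f}, defect early: {ev_defect:.0f}")
```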

Comment author: [deleted] 09 June 2017 10:21:55PM 3 points

Would anyone be interested in an EA prediction market, where trading profits were donated to the EA charity of the investor's choosing, and the contracts were based on outcomes important to EAs (examples below)?

  • Will a nation state launch a nuclear weapon in 2017 that kills more than 1,000 people?

  • Will one of the current top five fast food chains offer an item containing cultured meat before 2023?

  • Will the total number of slaughtered farm animals in 2017 be less than that in 2016?

  • Will the 2017 infant mortality rate in the DRC be less than 5%?

In response to comment by [deleted] on Announcing Effective Altruism Grants
Comment author: Daniel_Eth 15 June 2017 04:17:19AM *  2 points

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.
