Comment author: Daniel_Eth 03 September 2017 05:56:41AM *  1 point

I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main reasons are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying the solution (as contrasted to AGI Safety, where all vested interests would want the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons, etc., for the same reason that I'd imagine it would decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focused on profits than on pushing ideologies.

Comment author: kbog  (EA Profile) 29 August 2017 02:36:12AM *  2 points

"I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data."

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

"Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead."

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.

Comment author: Daniel_Eth 29 August 2017 06:59:02AM 1 point

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting in that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensure. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment author: Daniel_Eth 29 August 2017 12:51:42AM 4 points

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting an all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make the decision to kill, or at least approve it) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not carry the same risk of leading to a military flash crash.
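A toy sketch of the kind of feedback loop being described, assuming two automated retaliation policies and a single spurious sensor reading (the thresholds and the glitch are made-up illustrative assumptions, not a model of any real system):

```python
# Two automated "retaliation" policies: each side fires if it detects at least
# one hit against itself. A single spurious sensor reading on side A at step 0
# is enough to start a self-sustaining exchange.

def detected_hits(true_hits, false_positives):
    return true_hits + false_positives

hits_on_a = hits_on_b = 0
history = []

for step in range(5):
    glitch = 1 if step == 0 else 0          # one-off false positive on side A
    a_fires = detected_hits(hits_on_a, glitch) > 0
    b_fires = detected_hits(hits_on_b, 0) > 0
    hits_on_b += int(a_fires)               # A's fire lands on B
    hits_on_a += int(b_fires)               # B's fire lands on A
    history.append((step, a_fires, b_fires, hits_on_a, hits_on_b))
    if not (a_fires or b_fires):
        break

for row in history:
    print(row)
# Step 0: A "retaliates" against a hit that never happened; from then on each
# side has real hits to respond to, so the exchange keeps feeding itself.
```

The point of the sketch is only that the loop runs to completion faster than any human review step could intervene, which is why the comment above argues for keeping a human in the kill decision.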

Comment author: ChristianKleineidam 10 August 2017 01:32:52PM 0 points

"Almost all diseases fundamentally occur at the nanoscale."

What exactly does that mean? What kind of nanotech are you thinking about?

Comment author: Daniel_Eth 20 August 2017 12:46:54PM 0 points

The vast majority of ailments derive from unfortunate happenings at the subcellular level (i.e. the nanoscale). This includes amyloid buildup in Alzheimer's, DNA mutations in cancer, and so on. Right now, medicine is - to a large degree - hoping to get lucky by finding chemicals that happen to combat these processes. But a more thorough ability to actually influence events at this scale could be a boon for medicine. What type of nanotech am I envisioning exactly? That's pretty broad - though in the short/medium term it could be carbon nanotubes targeting cancer cells (http://www.sciencedirect.com/science/article/pii/S0304419X10000144), it could be DNA origami used to deliver drugs in a targeted way (http://www.nature.com/news/dna-robot-could-kill-cancer-cells-1.10047), or it could be something else entirely.

Comment author: Daniel_Eth 09 August 2017 08:35:54PM 0 points

Personally, I'd recommend donating to fund nanotechnology research (especially nanobiotechnology). Almost all diseases fundamentally occur at the nanoscale. I'd assume that our ability to manipulate matter at this scale in targeted ways is close to necessary and sufficient to cure many diseases, and that once we get advanced nanotechnology our medicine will improve by leaps and bounds. Unfortunately, people like to feel that their interventions are more direct, so basic research that could lead to better tools to cure many diseases is likely drastically underfunded.

Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points

"But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is"

Dangling sentence.

In my personal view, the "hard AI takeoff" scenarios are driven mostly by the belief that current AI progress largely flows from a single skill, that is, "mathematics/programming". So while AI will continue to develop at disparate rates and achieve superhuman performance in different areas at different times, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman in this skill it will rapidly become superhuman in all skills. This seems obvious to me, and I think disagreements with it have to rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems almost circularly "AGI-complete").

Comment author: Daniel_Eth 19 July 2017 04:10:31AM *  0 points

My 2 cents: math/programming is only half the battle. Here's an analogy - you could be the best programmer in the world, but if you don't understand chess, you can't program a computer to beat a human at chess, and if you don't understand quantum physics, you can't program a computer to simulate matter at the atomic scale (well, not using ab initio methods, anyway).

In order to get an intelligence explosion, a computer would have to not only have great programming skills, but also really understand intelligence. And intelligence isn't just one thing - it's a bunch of things (creativity, memory, planning, social skills, emotional skills, etc., and these can be subdivided further into different fields like physics, design, social understanding, social manipulation, etc.). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers already outcompete humans in many of these, but I think that even on the more "human" traits, and in areas where computers act more like agents than just like tools, it's still more likely to happen in several waves rather than in one single takeoff.

Comment author: Kaj_Sotala 10 July 2017 06:45:32PM 3 points

"There's a strong possibility, even in a soft takeoff, that an unaligned AI would not act in an alarming way until after it achieves a decisive strategic advantage."

That's assuming that the AI is confident that it will achieve a DSA eventually, and that no competitors will do so first. (In a soft takeoff it seems likely that there will be many AIs, and thus many potential competitors.) The worse the AI thinks its chances are of eventually achieving a DSA first, the more rational it becomes for it to risk non-cooperative action at the point when it thinks it has the best chances of success - even if those chances are low. That might help reveal unaligned AIs during a soft takeoff.

Interestingly, this suggests that the more AIs there are, the easier it might be to detect unaligned AIs (since every additional competitor decreases any given AI's odds of getting a DSA first). It also suggests some unintuitive containment strategies, such as explicitly explaining to the AI when it would be rational for it to go uncooperative if it were unaligned, in order to increase the odds of unaligned AIs risking hostile action early on and being discovered...

Comment author: Daniel_Eth 19 July 2017 03:41:40AM 0 points

Or the AI could simply have an unbounded utility function (or one bounded very high). An AI could guess that it only has a 1 in 1B chance of reaching a DSA, but that the payoff from reaching it is 100B times higher than the payoff from defecting early. Since there are 100B stars in the galaxy, it seems likely that in a multipolar situation with a decent diversity of AIs, some would meet this description and decide to gamble.
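For concreteness, a minimal expected-value sketch using the numbers above (normalizing the early-defection payoff to 1 is my own illustrative assumption, not something stated in the comment):

```python
# Toy expected-utility comparison for an AI deciding whether to defect early
# or hold out for a decisive strategic advantage (DSA).

p_dsa = 1e-9          # guessed chance of reaching a DSA first (1 in 1B)
payoff_dsa = 100e9    # payoff if the DSA is reached (~100B times the early-defection payoff)
payoff_defect = 1.0   # payoff from defecting early (normalized to 1)

ev_gamble = p_dsa * payoff_dsa   # 1e-9 * 1e11 = 100.0
ev_defect = payoff_defect        # 1.0

print(f"EV of holding out for a DSA: {ev_gamble:.1f}")
print(f"EV of defecting early:       {ev_defect:.1f}")
# With an unbounded (or very high-bounded) utility function, holding out
# dominates despite the tiny success probability.
```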

Comment author: [deleted] 09 June 2017 10:21:55PM 3 points

Would anyone be interested in an EA prediction market, where trading profits were donated to the EA charity of the investor's choosing, and the contracts were based on outcomes important to EAs (examples below, with a rough sketch of one contract after the list)?

  • Will a nation state launch a nuclear weapon in 2017 that kills more than 1,000 people?

  • Will one of the current top five fast food chains offer an item containing cultured meat before 2023?

  • Will the total number of slaughtered farm animals in 2017 be less than that in 2016?

  • Will the 2017 infant mortality rate in the DRC be less than 5%?
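A minimal sketch of how one such binary contract could settle, with profits routed to each trader's chosen charity. All names here (Position, settle, the example traders, charities, and prices) are hypothetical; this illustrates the proposed mechanism, not an actual market design:

```python
from dataclasses import dataclass

# Binary prediction-market contract: winning shares pay out 1, and each
# trader's profit is donated to the charity they designated.

@dataclass
class Position:
    trader: str
    charity: str    # where this trader's profits are donated
    side: bool      # True = "yes" shares, False = "no" shares
    shares: int
    price: float    # price paid per share, between 0 and 1

def settle(positions, outcome):
    """Settle the contract and return the donation owed to each charity."""
    donations = {}
    for p in positions:
        payout = p.shares * (1.0 if p.side == outcome else 0.0)
        profit = payout - p.shares * p.price
        if profit > 0:
            donations[p.charity] = donations.get(p.charity, 0.0) + profit
    return donations

# Example: "Will one of the current top five fast food chains offer an item
# containing cultured meat before 2023?"
book = [
    Position("alice", "GiveDirectly", True, 100, 0.25),
    Position("bob", "AMF", False, 100, 0.75),
]
print(settle(book, outcome=False))  # {'AMF': 25.0}
```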

In response to comment by [deleted] on Announcing Effective Altruism Grants
Comment author: Daniel_Eth 15 June 2017 04:17:19AM *  2 points

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.

In response to Political Ideology
Comment author: Daniel_Eth 03 June 2017 12:34:59AM 0 points

I feel like you're inferring a lot of causation from correlations, which don't imply causation.

In response to Red teaming GiveWell
Comment author: Daniel_Eth 03 June 2017 12:28:27AM *  1 point

While I applaud the idea of playing devil's advocate, I find the style of this post quite snide (e.g. the liberal use of sarcastic rhetorical questions), which I think is problematic. Efforts to red team the community should be aimed at pointing out errors to be fixed, and I don't see how this helps. On the contrary, it can decrease morale and also signal to outsiders a lack of a sense of community within EA. It would be no more difficult to bring up potential problems in a simple, matter-of-fact manner.
