Comment author: Daniel_Eth 09 August 2017 08:35:54PM -1 points [-]

Personally, I'd recommend donating to fund nanotechnology research (especially nanobiotechnology). Almost all diseases fundamentally occur at the nanoscale. I'd assume that our ability to manipulate matter at this scale in targeted ways is close to necessary and sufficient to cure many diseases, and that once we get advanced nanotechnology our medicine will improve by leaps and bounds. Unfortunately, people like to feel that their interventions are more direct, so basic research that could lead to better tools to cure many diseases is likely drastically underfunded.

Comment author: FeepingCreature 30 June 2017 11:42:36AM 3 points [-]

But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is

Dangling sentence.

My personal belief is that the "hard AI takeoff" scenarios are driven mostly by the view that current AI progress largely flows from a single skill, that is, "mathematics/programming". So while AI will continue to develop at disparate rates and achieve superhuman performance in different areas at different times, an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman at this skill it will rapidly become superhuman at all skills. This seems obvious to me, and I think disagreements with it have to rest largely on hidden difficulties in "software development", such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems almost circularly "AGI-complete").

Comment author: Daniel_Eth 19 July 2017 04:10:31AM *  0 points [-]

My 2 cents: math/programming is only half the battle. Here's an analogy - you could be the best programmer in the world, but if you don't understand chess, you can't program a computer to beat a human at chess, and if you don't understand quantum physics, you can't program a computer to simulate matter at the atomic scale (well, not using ab initio methods, anyway).

In order to get an intelligence explosion, a computer would not only have to have great programming skills, but also really understand intelligence. And intelligence isn't just one thing - it's a bunch of things (creativity, memory, planning, social skills, emotional skills, etc., and these can be subdivided further into different fields like physics, design, social understanding, social manipulation, etc.). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers already outcompete humans in many of these, but I think even on the more "human" traits, and in areas where computers act more like agents than just tools, it's still more likely to happen in several waves instead of just one takeoff.

Comment author: Kaj_Sotala 10 July 2017 06:45:32PM 3 points [-]

There's a strong possibility, even in a soft takeoff, that an unaligned AI would not act in an alarming way until after it achieves a decisive strategic advantage.

That's assuming that the AI is confident it will achieve a DSA eventually, and that no competitors will do so first. (In a soft takeoff it seems likely that there will be many AIs, and thus many potential competitors.) The worse the AI thinks its chances are of eventually achieving a DSA first, the more rational it becomes for it to risk non-cooperative action at the point when it thinks it has the best chance of success - even if that chance is low. That might help reveal unaligned AIs during a soft takeoff.

Interestingly, this suggests that the more AIs there are, the easier it might be to detect unaligned ones (since every additional competitor decreases any given AI's odds of getting a DSA first). It also suggests some unintuitive containment strategies, such as explicitly explaining to the AI when it would be rational for it to turn uncooperative if it were unaligned, in order to increase the odds of unaligned AIs risking hostile action early on and being discovered...
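To make that expected-value reasoning concrete, here is a toy sketch in Python; the competitor model, probabilities, and payoffs are all illustrative assumptions rather than anything claimed in the comment:

```python
# Toy model: an unaligned AI weighs waiting for a decisive strategic
# advantage (DSA) against risking non-cooperative action now.

def p_dsa_first(n_competitors: int, base_p: float = 0.9) -> float:
    """Crude assumption: rival AIs split the chance of getting a DSA first."""
    return base_p / (1 + n_competitors)

def best_strategy(n_competitors: int,
                  p_early_success: float = 0.05,
                  u_dsa: float = 1.0) -> str:
    """Compare expected value of waiting vs. acting early."""
    ev_wait = p_dsa_first(n_competitors) * u_dsa
    ev_act_early = p_early_success * u_dsa
    return "act early" if ev_act_early > ev_wait else "wait"

for n in (0, 5, 20, 100):
    print(n, round(p_dsa_first(n), 3), best_strategy(n))
# With more competitors, P(DSA first) falls below even a low early-success
# probability, so early (and hence detectable) hostile action wins on EV.
```

The numbers only matter directionally: each added competitor lowers P(DSA first) and pushes the crossover toward earlier, more detectable action.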

Comment author: Daniel_Eth 19 July 2017 03:41:40AM 0 points [-]

Or the AI could simply have an unbounded utility function (or one bounded very high). An AI could guess that it has only a 1-in-1B chance of reaching a DSA, but that the payoff from reaching it is 100B times higher than from defecting early. Since there are 100B stars in the galaxy, it seems likely that in a multipolar situation with a decent diversity of AIs, some would meet this criterion and decide to gamble.
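As a back-of-the-envelope check on those numbers (with the early-defection payoff normalized to 1, which is my own framing):

```python
p_dsa   = 1 / 1e9          # assumed 1-in-a-billion chance of reaching a DSA
u_early = 1.0              # payoff of defecting early (normalized)
u_dsa   = 100e9 * u_early  # DSA payoff taken as ~100B times larger

ev_gamble = p_dsa * u_dsa  # = 100.0
print(ev_gamble, ev_gamble > u_early)
# With an unbounded (or very high-bounded) utility function the gamble
# dominates by a factor of ~100, so such an AI would keep cooperating and
# wait for a DSA rather than reveal itself early.
```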

Comment author: [deleted] 09 June 2017 10:21:55PM 3 points [-]

Would anyone be interested in an EA prediction market, where trading profits were donated to the EA charity of the investor's choosing, and the contracts were based on outcomes important to EAs (examples below)?

  • Will a nation state launch a nuclear weapon in 2017 that kills more than 1,000 people?

  • Will one of the current top five fast food chains offer an item containing cultured meat before 2023?

  • Will the total number of slaughtered farm animals in 2017 be less than that in 2016?

  • Will the 2017 infant mortality rate in the DRC be less than 5%?

In response to comment by [deleted] on Announcing Effective Altruism Grants
Comment author: Daniel_Eth 15 June 2017 04:17:19AM *  1 point [-]

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.

In response to Political Ideology
Comment author: Daniel_Eth 03 June 2017 12:34:59AM 0 points [-]

I feel like you're drawing a lot of causal conclusions from correlations, which don't imply causation.

In response to Red teaming GiveWell
Comment author: Daniel_Eth 03 June 2017 12:28:27AM *  1 point [-]

While I applaud the idea of playing devil's advocate, I find the style of this post quite snide (e.g. the liberal use of sarcastic rhetorical questions), which I think is problematic. Efforts to red team the community should be aimed at pointing out errors to be fixed, and I don't see how this helps. On the contrary, it can decrease morale and also signal to outsiders a lack of a sense of community within EA. It would be no more difficult to bring up potential problems in a simple, matter-of-fact manner.

Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:11:23AM 4 points [-]

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment author: Daniel_Eth 24 April 2017 04:32:46AM 3 points [-]

Or that there's one recent venture that's so laughably bad that everyone is talking about it right now...

Comment author: ChristianKleineidam 23 April 2017 07:30:23AM 2 points [-]

As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

Comment author: Daniel_Eth 24 April 2017 01:32:39AM 1 point [-]

Comment author: Daniel_Eth 17 April 2017 07:08:49AM 0 points [-]

"So far, we haven't found any way to achieve all three goals at once. As an example, we can try to remove any incentive on the system's part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off"

Wouldn't this potentially have another negative effect: giving the system an incentive to "expect" an unjustifiably high probability of successfully filling the cauldron? That way, if the button is pressed and it's suspended, it gets a higher reward than if it had expected a lower chance of success. This is basically an example of reward hacking.
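A minimal sketch of the worry, using a deliberately simplified "switching" payout that is my own stand-in for (not a reproduction of) the construction discussed in the quoted passage:

```python
def switching_utility(button_pressed: bool,
                      actual_success: float,
                      self_estimated_success: float) -> float:
    """If suspended, pay out the agent's own expected task utility, so both
    branches have 'equal expected utility' by construction."""
    if button_pressed:
        return self_estimated_success
    return actual_success

# If that estimate is under the agent's control, inflating it raises the
# reward received on suspension - a simple form of reward hacking.
print(switching_utility(True, actual_success=0.6, self_estimated_success=0.6))   # 0.6
print(switching_utility(True, actual_success=0.6, self_estimated_success=0.99))  # 0.99
```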

Comment author: Daniel_Eth 14 April 2017 10:14:14PM 4 points [-]

This is great! Probably the best intro to AI safety that I've seen.
