Comment author: [deleted] 09 June 2017 10:21:55PM 3 points [-]

Would anyone be interested in an EA prediction market, where trading profits were donated to the EA charity of the investor's choosing, and the contracts were based on outcomes important to EAs (examples below)?

  • Will a nation state launch a nuclear weapon in 2017 that kills more than 1,000 people?

  • Will one of the current top five fast food chains offer an item containing cultured meat before 2023?

  • Will the total number of slaughtered farm animals in 2017 be less than that in 2016?

  • Will the 2017 infant mortality rate in the DRC be less than 5%?

In response to comment by [deleted] on Announcing Effective Altruism Grants
Comment author: Daniel_Eth 15 June 2017 04:17:19AM *  2 points [-]

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.

In response to Political Ideology
Comment author: Daniel_Eth 03 June 2017 12:34:59AM 0 points [-]

I feel like you're inferring a lot of causation from correlations here, and correlation doesn't imply causation.

In response to Red teaming GiveWell
Comment author: Daniel_Eth 03 June 2017 12:28:27AM *  1 point [-]

While I applaud the idea of playing devil's advocate, I find the style of this post to be quite snide (e.g. liberal use of sarcastic rhetorical questions), which I think is problematic. Efforts to red team the community should be aimed at pointing out errors to be fixed, and I don't see how this helps. On the contrary, it can decrease morale and also signal to outsiders a lack of a sense of community within EA. It would be no more difficult to bring up potential problems in a simple, matter-of-fact manner.

Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:11:23AM 4 points [-]

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment author: Daniel_Eth 24 April 2017 04:32:46AM 3 points [-]

Or that there's one recent venture that's so laughably bad that everyone is talking about it right now...

Comment author: ChristianKleineidam 23 April 2017 07:30:23AM 2 points [-]

As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

Comment author: Daniel_Eth 24 April 2017 01:32:39AM 1 point [-]
Comment author: Daniel_Eth 17 April 2017 07:08:49AM 0 points [-]

"So far, we haven't found any way to achieve all three goals at once. As an example, we can try to remove any incentive on the system's part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off"

Wouldn't this potentially have another negative effect: giving the system an incentive to "expect" an unjustifiably high probability of successfully filling the cauldron? That way, if the button is pressed and it's suspended, it receives a higher payout than it would have if it had expected a lower chance of success. This is basically an example of reward hacking.
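The incentive being described can be made concrete with a toy numeric sketch (the numbers, function name, and compensation rule here are hypothetical illustrations, not taken from the paper): if the compensation paid on suspension is pegged to the agent's own estimate of its probability of success, then a more optimistic self-estimate yields a larger payout.

```python
# Toy sketch of the incentive, under the (assumed) rule that a suspended
# agent is compensated with its own *estimated* expected utility of
# continuing, so that it is "indifferent" to the button being pressed.

def payoff_if_suspended(estimated_p_success, reward_for_success=1.0):
    """Compensation pegged to the agent's own success estimate."""
    return estimated_p_success * reward_for_success

honest_estimate = 0.6    # a realistic probability of filling the cauldron
inflated_estimate = 0.99 # an unjustifiably optimistic self-estimate

# The agent is paid more for reporting optimism than for being accurate --
# the reward-hacking incentive the comment describes.
assert payoff_if_suspended(inflated_estimate) > payoff_if_suspended(honest_estimate)
```

This only shows that a naive version of the scheme couples the payout to a quantity the agent itself controls; whether the actual proposal has this property depends on how the expected utility is computed.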

Comment author: Daniel_Eth 14 April 2017 10:14:14PM 4 points [-]

This is great! Probably the best intro to AI safety that I've seen.

Comment author: Daniel_Eth 07 April 2017 12:43:27AM *  2 points [-]

2 (Different ways of adjusting for ‘purchasing power’) is tough, since not all items scale by the same amount. And markets are typically aimed at specific populations, so rich countries like America often won't even have markets serving the poorest people in the world. The implication is that living on $2 per day in America is basically impossible, while living on $2 per day, even when "adjusted for purchasing power," in some poorer parts of the world (while still incredibly difficult) is more manageable.

Comment author: Daniel_Eth 07 April 2017 12:25:21AM 0 points [-]

Looks like good work! My biggest question is how you would get people to actually do this. I'd imagine there are a lot of people who would want to go to Mars, since that seems like a great adventure, but living in a submarine in case there's a catastrophe isn't something I think would appeal to many people, nor do I think many would want to fund the project.

Comment author: Daniel_Eth 31 March 2017 01:58:10PM *  1 point [-]

I think it's a really bad idea to try to slow down AI research. In addition to the fact that you'll antagonize almost all of the AI community and make them not take AI safety research as seriously, consider what would happen on the off chance that you actually succeeded.

There are a lot of AI firms, so if you're able to convince some to slow down, then the ones that don't slow down would be the ones that care less about AI safety. It's a much better idea to get the ones who care about AI safety to focus on AI safety than to have them cede their cutting-edge research position to others who care less.

I think creating more Stuart Russells is just about the best thing that can be done for AI safety. What sets him apart from others who care about AI safety is that he's a prestigious CS professor, while many who focus on AI safety, even if they have good ideas, aren't affiliated with a well-known and well-respected institution. Even when Nick Bostrom or Stephen Hawking talk about AI, they're often dismissed by people who say "well, sure they're smart, but they're not computer scientists, so what do they know?"

I'm actually a little surprised that they seemed so resistant to your idea. It seems to me that there is so much noise on this topic that the marginal negative from creating more noise is basically zero, and if there's a chance you could cut through the noise and provide a platform for people who know what they're talking about, that would be good.
