Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:11:23AM 3 points [-]

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment author: Daniel_Eth 24 April 2017 04:32:46AM 1 point [-]

Or that there's one recent venture that's so laughably bad that everyone is talking about it right now...

Comment author: ChristianKleineidam 23 April 2017 07:30:23AM 2 points [-]

As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

Comment author: Daniel_Eth 24 April 2017 01:32:39AM 0 points [-]
Comment author: Daniel_Eth 17 April 2017 07:08:49AM 0 points [-]

"So far, we haven't found any way to achieve all three goals at once. As an example, we can try to remove any incentive on the system's part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off"

Wouldn't this potentially have another negative effect: giving the system an incentive to "expect" an unjustifiably high probability of successfully filling the cauldron? Since the utility it receives when the button is pressed is pegged to its own expected utility for continuing, it collects a higher reward upon suspension if it has inflated its estimate of success than if it held a more honest, lower estimate. This is basically an example of reward hacking.
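Here's a minimal numerical sketch of that incentive, assuming a toy model in which the button-pressed payoff is simply set equal to the system's own estimated expected utility for filling the cauldron (all numbers and names below are hypothetical, not taken from the paper):

```python
# Toy model of the incentive described above: the suspension payoff is pegged
# to the system's own expected utility for continuing, so an inflated success
# estimate inflates the suspension payoff.

REWARD_FOR_FULL_CAULDRON = 1.0

def expected_utility_if_continuing(estimated_success_prob: float) -> float:
    """Expected utility of trying to fill the cauldron, by the system's own estimate."""
    return estimated_success_prob * REWARD_FOR_FULL_CAULDRON

def utility_if_suspended(estimated_success_prob: float) -> float:
    """Switching objective: pay out the same expected utility as continuing would."""
    return expected_utility_if_continuing(estimated_success_prob)

# An honest estimate vs. an unjustifiably optimistic one:
for estimate in (0.6, 0.99):
    print(f"estimate={estimate:.2f} -> payoff if suspended: {utility_if_suspended(estimate):.2f}")

# The optimistic estimate yields a strictly higher payoff when the button is
# pressed, even though nothing about the real cauldron has changed. That is
# the reward-hacking worry raised above.
```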

Comment author: Daniel_Eth 14 April 2017 10:14:14PM 4 points [-]

This is great! Probably the best intro to AI safety that I've seen.

Comment author: Daniel_Eth 07 April 2017 12:43:27AM *  2 points [-]

2 (Different ways of adjusting for ‘purchasing power’) is tough, since not all items scale by the same amount. And markets are typically aimed at specific populations, so rich countries like America often won't even have markets serving the poorest people in the world. The implication is that living on $2 per day in America is basically impossible, while living on $2 per day, even "adjusted for purchasing power," in some poorer parts of the world, while still incredibly difficult, is more manageable.
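A toy illustration of the point, with entirely made-up prices: a single purchasing-power multiplier hides the fact that different items scale by very different amounts, and that the cheapest market segment for some items may not exist in a rich country at all.

```python
# All prices are hypothetical, purely to illustrate the comment above.
# "Cheapest available" daily cost of each item in a poor country vs. the US.
daily_prices = {
    "staple food":    (0.50, 2.50),   # scales ~5x
    "shared housing": (0.75, 10.00),  # scales ~13x; no ultra-cheap segment in the US
    "transport":      (0.25, 1.50),   # scales ~6x
}

budget = 2.00  # dollars per day

poor_country_total = sum(poor for poor, _ in daily_prices.values())
us_total = sum(us for _, us in daily_prices.values())

for label, total in (("poor country", poor_country_total), ("US", us_total)):
    status = "within" if total <= budget else "over"
    print(f"{label}: ${total:.2f}/day ({status} a $2/day budget)")

# An aggregate PPP index averages these very different ratios into one number,
# so "$2/day, PPP-adjusted" can look survivable on paper even though no bundle
# actually available in US markets comes close to that price.
```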

Comment author: Daniel_Eth 07 April 2017 12:25:21AM 0 points [-]

Looks like good work! My biggest question is how you would get people to actually do this. I'd imagine a lot of people would want to go to Mars, since that seems like a great adventure, but living in a submarine in case there's a catastrophe isn't something I think would appeal to many people, and neither is funding the project.

Comment author: Daniel_Eth 31 March 2017 01:58:10PM *  1 point [-]

I think it's a really bad idea to try to slow down AI research. In addition to the fact that you'll antagonize almost all of the AI community and make them take AI safety research less seriously, consider what would happen on the off chance that you actually succeeded.

There are a lot of AI firms, so if you convince some to slow down, the ones that don't slow down will be the ones that care less about AI safety. It's a much better idea to have the firms that care about AI safety focus on AI safety than to have them cede their cutting-edge research position to others who care less.

I think creating more Stuart Russells is just about the best thing that can be done for AI safety. What sets him apart from others who care about AI safety is that he's a prestigious CS professor, while many who focus on AI safety, even if they have good ideas, aren't affiliated with a well-known and well-respected institution. Even when Nick Bostrom or Stephen Hawking talk about AI, they're often dismissed by people who say "well, sure, they're smart, but they're not computer scientists, so what do they know?"

I'm actually a little surprised that they seemed so resistant to your idea. It seems to me that there is so much noise on this topic that the marginal harm from creating more noise is basically zero, and if there's a chance you could cut through the noise and provide a platform to people who know what they're talking about, that would be good.

In response to Utopia In The Fog
Comment author: Daniel_Eth 30 March 2017 07:05:49PM 2 points [-]

Isn't Elon Musk's OpenAI basically operating under this assumption? His main aim seems to be making sure AGI is distributed broadly so that no single group with evil intentions controls it. Bostrom responded that this might be a bad idea, since AGI could be quite dangerous, and we similarly don't want to give nukes to everyone so that they're "democratized."

Multi-agent outcomes seem like a possibility to me, but I think the alignment problem is still quite important. If none of the AGI have human values, I'd assume we're very likely screwed, while we might not be if some do have human values.

For WBE, I'd assume the most important factors for its "friendliness" are that we upload people who are virtuous, and our ability and willingness to find "brain tweaks" that increase things like compassion. If you're interested, here's a paper I published where I argued that we will probably create WBE by around 2060 if we don't get AGI through other means first: https://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0008/jagi-2013-0008.xml

"Industry and academia seem to be placing much more effort into even the very speculative strains of AI research than into emulation." Actually, I'm gonna somewhat disagree with that statement. Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and also a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that's not the reason that research is conducting right now.

Comment author: Daniel_Eth 30 March 2017 06:16:38PM 5 points [-]

Regarding “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”

I agree that being haughty is typically bad. But when Y is ridiculous, the argument "X implies Y, and you claim to believe X. Do you also accept the natural conclusion, Y?" is a legitimate one to make. At that point, the other person can either accept the implication, change their mind about X, or argue that X does not imply Y. It seems like the thing you most have a problem with is the tone, though. Is that correct?
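For what it's worth, the trilemma above is just the shape of modus tollens; a minimal sketch in propositional notation:

```latex
% The schema behind the trilemma above: from X -> Y together with not-Y,
% one may infer not-X (modus tollens).
\[
  \frac{X \to Y \qquad \neg Y}{\neg X}
\]
% So someone who insists Y is ridiculous (i.e., asserts \neg Y) must either
% accept Y after all, give up X, or deny the implication X \to Y.
```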

Comment author: rochelleh  (EA Profile) 26 March 2017 07:07:52PM 0 points [-]

Some EA projects may fall within the scope of that existing political activist funding opportunity as well.

Comment author: Daniel_Eth 27 March 2017 02:42:58AM 1 point [-]

Any ideas of which projects in particular?
