
LawrenceC

10 karma · Joined Jun 2015

Comments (6)

What are concrete actions that you think EAs can take to help with this?

Thanks! This makes sense.

Awesome! Glad to hear that EAGx is still happening. I think it makes a lot of sense to pivot away from having many EAGx conferences of variable quality to a few high-quality ones.

"While we continue to think that this is an important function, CEA believes that, at least at the moment, our efforts to improve the world are bottlenecked by our ability to help promising people become fully engaged, rather than attracting new interest."

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Not super relevant to Peter's question, but I would be interested in hearing why you're bullish on the Far Future EA Fund.

My suspicion is that MIRI agrees with you - if you read their job post for their software engineering internship, it seems that they're looking for people who can rapidly prototype and test AI alignment ideas that have implications for machine learning.

What do you think of popular portrayals of AI risk in general? Do you think there's much of a point in trying to spread broad awareness of the issue? Or do you think that any such efforts ultimately do more harm than good, and that we should keep discussion of AI risk more low-profile?

For example, are things like Ex Machina (which doesn't really present the full AI-risk argument, but does make it obvious that AI is a risk) or Wait But Why's AI posts good?

Thanks!