Comment author: LawrenceC 05 August 2017 10:59:20PM 1 point

What concrete actions do you think EAs can take to help with this?

In response to comment by LawrenceC on EAGx Relaunch
Comment author: Roxanne_Heston 24 July 2017 07:27:35PM 3 points

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

They were often stretched so thin from making the main event happen that they didn't have the capacity to ensure that their follow-up events were solid. We think part of the problem will be mitigated if the events themselves are smaller and more targeted towards groups with a specific level of EA understanding.

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Local groups can apply for funding through the EAGx funding application, as well as use the event-organizing resources we generated for EAGx. Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa. If they're running a career or rationality workshop they may be able to get 80,000 Hours' or CFAR's advice or direct support.

Here are the event-organizing resources, if you'd like to check them out: https://goo.gl/zw8AjW

In response to comment by Roxanne_Heston on EAGx Relaunch
Comment author: LawrenceC 25 July 2017 04:31:50AM 0 points

Thanks! This makes sense.

In response to EAGx Relaunch
Comment author: LawrenceC 23 July 2017 06:27:17AM 1 point

Awesome! Glad to hear that EAGx is still happening. I think it makes a lot of sense to pivot away from having many EAGx conferences of variable quality to a few high-quality ones.

While we continue to think that this is an important function, CEA believes that, at least at the moment, our efforts to improve the world are bottlenecked by our ability to help promising people become fully engaged, rather than attracting new interest.

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Comment author: Daniel_Dewey 10 July 2017 07:27:24PM 3 points

I am very bullish on the Far Future EA Fund, and donate there myself. There's one other possible nonprofit that I'll publicize in the future if it gets to the stage where it can use donations (I don't want to hype this up as an uber-solution, just a nonprofit that I think could be promising).

I unfortunately don't spend a lot of time thinking about individual donation opportunities, and the things I think are most promising often get partly funded through Open Phil (e.g. CHAI and FHI), but I think diversifying the funding source for orgs like CHAI and FHI is valuable, so I'd consider them as well.

Comment author: LawrenceC 23 July 2017 05:24:51AM 3 points

Not super relevant to Peter's question, but I would be interested in hearing why you're bullish on the Far Future EA Fund.

Comment author: Daniel_Dewey 10 July 2017 07:35:51PM 3 points

Thanks for these thoughts. (Your second link is broken, FYI.)

On empirical feedback: my current suspicion is that there are some problems where empirical feedback is pretty hard to get, but I actually think we could get more empirical feedback on how well HRAD can be used to diagnose and solve problems in AI systems. For example, it seems like many AI systems implicitly do some amount of logical-uncertainty-type reasoning (e.g. AlphaGo, which is really all about logical uncertainty over the result of expensive game-tree computations) -- maybe HRAD could be used to understand how those systems could fail?

I'm less convinced that the "ignored physical aspect of computation" is a very promising direction to follow, but I may not fully understand the position you're arguing for.

Comment author: LawrenceC 23 July 2017 05:18:29AM 0 points

My suspicion is that MIRI agrees with you - if you read the job post for their software engineering internship, it seems they're looking for people who can rapidly prototype and test AI alignment ideas that have implications for machine learning.

Comment author: LawrenceC 11 June 2015 09:15:54PM 3 points

What do you think of popular portrayals of AI risk in general? Do you think there's much of a point in trying to spread broad awareness of the issue, or do such efforts ultimately do more harm than good, so that we should keep discussion of AI risk more low-profile?

For example, are things like Ex Machina, which doesn't really present the full AI argument but does make it obvious that AI is a risk, or Wait But Why's AI posts good?

Thanks!