Comment author: PeterMcIntyre (EA Profile) 28 September 2017 12:56:01AM 9 points

Great article, thanks Carrick!

If you're an EA who wants to work on AI policy/strategy (including in support roles), you should absolutely get in touch with 80,000 Hours about coaching. We've often been able to help people interested in the area clarify how they can contribute, make introductions, etc.

Apply for coaching here.

Comment author: Kaj_Sotala 20 June 2017 09:09:49PM * 2 points

Hi Peter, thanks for the response!

Your comment seems to suggest that you don't think the arguments in my post are relevant for technical AI safety research. Do you feel that I didn't make a persuasive case for psych/cogsci being relevant for value learning/multi-level world-models research, or do you not count these as technical AI safety research? Or am I misunderstanding you somehow?

I agree that the "understanding psychology may help persuade more people to work on/care about AI safety" and "analyzing human intelligences may suggest things about takeoff scenarios" points aren't related to technical safety research, but value learning and multi-level world-models are very much technical problems to me.

Comment author: PeterMcIntyre (EA Profile) 22 June 2017 04:10:14PM * 1 point

We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML than cog sci/psych. Caveats: 1. Personal fit could dominate this equation, so I'd be excited about people tackling AI safety from a variety of fields. 2. It's an equilibrium: the more people already attacking a problem with one toolkit, the more we should send people to learn other toolkits to attack it.

Comment author: PeterMcIntyre (EA Profile) 20 June 2017 06:58:24PM 1 point

Hi Kaj,

Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.

We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.

We agree that there are a lot of approaches you could take to tackling AI risk, but we currently expect that technical AI research is where a large share of the effort will be required. However, we've also advised many people on non-technical routes to impacting AI safety, so we don't think it's the only valid path by any means.

We're planning to release other guides and paths for non-technical approaches, such as the AI safety policy career guide, which recommends studying political science and public policy, law, and ethics, among other fields.

12 Awesome Things You Should Do After EA Global (15 points)

Effective altruism is littered with lowbrow articles. I thought I would contribute something a little more academic to our discourse, so I have written a listicle (sorrynotsorry) of what we can do to get the most out of the conference now that it's over. Note: RSSers/Feedliers might want to switch... Read More
Comment author: PeterMcIntyre (EA Profile) 26 July 2015 11:21:11PM 5 points

Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same or a similar message for everyone? If so, I'd be interested to see what it was. This sort of thing would also be worth A/B testing to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact those people too?

Comment author: PeterMcIntyre (EA Profile) 03 June 2015 01:42:51AM 1 point

Hi Eric, thanks for writing these and pointing us to them. I think this is a great idea. I just posted them on our business society and law society Facebook pages to test the waters and see what response we'd get from a similar post. Out of interest, what response have you gotten so far?

Comment author: PeterMcIntyre (EA Profile) 06 May 2015 11:19:46PM 3 points

Thanks for posting this. I think explicitly asking for critical feedback is very useful.

"If the intervention is not currently supported by a large body of research, then we want to fund/carry out a randomized controlled trial to test whether it's worth pursuing this intervention."

RCTs are seriously expensive, would take years to yield meaningful data, and would need to be replicated before you could put much faith in the results. Running one also wouldn't align with the core skillset I'd imagine you'd need to start an organisation (so you'd need to outsource it, which would increase the costs even more). As Ryan said, it might be more useful to aim to be recommended by OPP, or to search for another kind of EA market inefficiency. Your other idea of finding supportable but neglected interventions and carrying them out sounds pretty useful, though.

Meetup: TrivEA Night by Effective Altruism UNSW (0 points)

Discussion article for the meetup: TrivEA Night by Effective Altruism UNSW. WHEN: 20 May 2015 11:10:15AM (+1000) WHERE: Roundhouse, UNSW, Sydney. Great puns aside, EA UNSW is holding a trivia night. Maximum team size of 8, no minimum (if you come alone, we'll put you in a team with... Read More
Should You Visit an EA Hub? (11 points)

This is a post co-written by Brenton Mayer and Peter McIntyre. Peter Hurford's recent post challenges us to find ways to engage new EAs. In this post we explore travelling to an EA hub as a means to achieve this goal, and consider whether we would advise others to undertake... Read More
Comment author: PeterMcIntyre (EA Profile) 02 April 2015 01:40:00PM 1 point

If I remember correctly, CEA et al. decided against pursuing this strategy due to risk aversion. Given the large downsides, which may be unique to EA, it's not clear, to me at least, that our personal strategy should differ from theirs. I'd be interested in seeing some more thoughts on this.
