JoshuaFox comments on What Should the Average EA Do About AI Alignment? - Effective Altruism Forum

Comment author: JoshuaFox 27 February 2017 11:06:56AM 2 points

Outreach can be valuable, although it is rare to have high-value opportunities. If you can publish, lecture or talk 1-on-1 with highly relevant audiences, then you may sway the Zeitgeist a little and so contribute towards getting donors or researchers on board.

Relevant audiences include:

  • tech moguls and other potential big donors; people who may have the potential to become, or to influence, such moguls.

  • researchers in relevant areas such as game theory; smart people in elite educational tracks who may have the potential to become or influence such researchers.

Comment author: jsteinhardt 28 February 2017 06:48:15AM 11 points

I already mentioned this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already a noisy area, with a reputation for being dominated by outsiders who don't understand much about AI. Outreach by non-experts could end up being net-negative.

Comment author: kbog 28 February 2017 11:33:56PM 1 point

1-on-1 engagement with highly relevant audiences is very different from general online discourse.

Comment author: Raemon 28 February 2017 08:47:21PM 0 points

I agree with this concern, thanks. When I rewrite this post in a more finalized form, I'll include reasoning like this.