capybaralet comments on What Should the Average EA Do About AI Alignment? - Effective Altruism Forum
posted on 25 February 2017 08:07PM
I'm also very interested in hearing you elaborate a bit.
I guess you are arguing that AIS is a social rather than a technical problem. Personally, I think there are aspects of both, but that the social/coordination side is much more significant.
RE: "MIRI has focused in on an extremely specific kind of AI", I disagree. I think MIRI has aimed to study AGI in as much generality as possible and has mostly succeeded in that (although I'm less optimistic than they are that results which apply to idealized agents will carry over and produce meaningful insights for real-world, resource-limited agents). But I'm also curious what you think MIRI's research is focusing on vs. ignoring.
I also would not equate technical AIS with MIRI's research.
Is it necessary to be convinced? I think the argument for AIS as a priority is strong so long as the concerns have some validity and cannot be dismissed out of hand.