John_Maxwell_IV comments on Ask MIRI Anything (AMA) - Effective Altruism Forum

Comment author: John_Maxwell_IV | 13 October 2016 01:21:15AM | 1 point

I sometimes see influential senior staff at MIRI make statements on social media that pertain to controversial moral questions. These statements are not accompanied by disclaimers that they are speaking on behalf of themselves and not their employer. Is it safe to assume that these statements represent the de facto position of the organization?

This seems relevant to your organizational mission, since MIRI's goal is essentially to make AI moral, but a donor's notion of what's moral might not correspond with MIRI's position. Forcefully worded statements on controversial moral questions could also broadcast a willingness to engage in brinkmanship in a future AI arms race, if the different teams in the race were staffed by people who fell on different sides of the question.

Comment author: So8res | 13 October 2016 05:43:03PM | 9 points

Posts or comments on personal Twitter accounts, Facebook walls, etc. should not be assumed to represent any official or consensus MIRI position, unless noted otherwise. I'll echo Rob's comment here that "a good safety approach should be robust to the fact that the designers don't have all the answers". If an AI project hinges on the research team being completely free of epistemic shortcomings and moral failings, then the project is doomed (and should change how it's doing alignment research).

I suspect we're on the same page about it being important to err in the direction of system designs that don't encourage arms races or other zero-sum conflicts between parties with different object-level beliefs or preferences. See also the CEV discussion above.