RobBensinger comments on Ask MIRI Anything (AMA) - Effective Altruism Forum

Comment author: RobBensinger, 13 October 2016 02:26:19AM, 10 points

“Human interests” is an unfortunate word choice; Nate talked about this last year too, and we’ve tried to avoid phrasings like that. Unfortunately, most ways of gesturing at the idea of global welfare are either unclear, not widely understood, or weird-sounding, or they borrow arguably speciesist language (“humane,” “humanitarian,” “philanthropy”...).

I’m pretty sure everyone at MIRI thinks we should value all sentient life (and I’m extremely sure at least in the case of Eliezer, Nate, and myself), including sentient non-human animals and any sentient machines we someday develop. Eliezer thinks, as an empirical hypothesis, that relatively few animal species have subjective experience. Other people at MIRI, myself included, think a larger number of animal species have subjective experience. There’s no “consensus MIRI view” on this point, but I think it’s important to separate the empirical question from the strictly moral one, and I’m confident that if we learn more about what “subjective experience” is and how it’s implemented in brains, then people at MIRI will update. It’s also important to keep in mind that a good safety approach should be robust to the fact that the designers don’t have all the answers, and that humanity as a whole hasn’t fully developed scientifically (or morally).