
amc

11 karma · Joined Sep 2014

Comments (6)

I don't understand this. Have you written about this, or do you have a link that explains it?

> I tried to figure out whether MIRI’s directions for AI alignment were good, by reading a lot of stuff that had been written online; I did a pretty bad job of thinking about all this.

I'm curious about why you think you did a bad job at this. Could you roughly explain what you did and what you should have done instead?

If you can manage it, head to the Seattle Secular Solstice on Dec 10, 2016. Many of us from Vancouver are going.

> we prioritize research we think would be useful in less optimistic scenarios as well.

I don't think I've seen anything from MIRI on this before. Can you describe or point me to some of this research?

Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm, I can think of myself as buying an offset of the harm I would have done, at the price it cost me to avoid it.

I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it's the cheapest way to help.

The End of History Illusion sounds like what you're looking for.