Comment author: Squark
22 December 2017 10:19:44AM
2 points

Nice review! Two comments so far:

Re Critch's paper, the result is actually very intuitive once you understand the underlying mechanism. Critch considers a situation of, so to speak, Aumannian disagreement. That is, two agents hold different beliefs, despite being aware of each other's beliefs, because some assumption of Aumann's theorem is false: e.g. each agent considers emself smarter than the other. For example, imagine that Alice believes the Alpha Centauri system has more than 10 planets (call it "proposition P"), Bob believes it has less than 10 planets ("proposition not-P") and each is aware of the other's belief and considers it to be foolish. In this case, an AI that benefits Alice if P is true and benefits Bob if not-P is true would seem like an excellent deal for both of them, because each will be sure the AI is in eir own favor. In a way, the AI constitutes a bet between the two agents.
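To make the bet mechanism concrete, here is a toy sketch (the numbers, payoffs, and function names are my own illustration, not from Critch's paper). Each agent evaluates the AI's deal under eir own posterior, so both expect to come out ahead:

```python
def expected_gain(credence_in_P, payoff_if_P, payoff_if_not_P):
    """Expected payoff of the deal under an agent's own belief about P."""
    return credence_in_P * payoff_if_P + (1 - credence_in_P) * payoff_if_not_P

# Illustrative setup: the AI delivers +10 utility to whichever agent's
# belief turns out correct, and 0 to the other.
alice_credence = 0.9   # Alice: confident that P ("more than 10 planets")
bob_credence = 0.1     # Bob: assigns low probability to the same P

alice_expects = expected_gain(alice_credence, payoff_if_P=10, payoff_if_not_P=0)
bob_expects = expected_gain(bob_credence, payoff_if_P=0, payoff_if_not_P=10)

# Both expectations are positive: each agent thinks the deal favors em,
# precisely because they disagree about P.
print(alice_expects, bob_expects)
```

The disagreement itself is what makes the deal look strictly favorable to both sides at once, which is why the AI functions as a bet.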

Critch writes: "It is also assumed that the players have common knowledge of one another’s posterior... Future work should design solutions for facilitating the process of attaining common knowledge, or to obviate the need to assume it." Indeed, it is interesting to study what happens when each agent does not know the other's beliefs.

I will risk being accused of self-advertisement, but given that one of my papers appeared in the review, it doesn't seem too arrogant to point to another that IMHO is no less important, namely "Forecasting using incomplete models", a paper that builds on Logical Induction to develop a way of reasoning about complex environments that doesn't require logic/deduction. I think it would be nice if this paper were included, although of course it's your review and your judgment whether it merits it.
