rohinmshah comments on The counterfactual impact of agents acting in concert - Effective Altruism Forum




Comment author: rohinmshah  (EA Profile) 27 May 2018 07:08:23PM 4 points [-]

It's not a paradox. The problem is just that, if everyone thought this way, we would get suboptimal outcomes -- so maybe we should figure out how to avoid that.

Suppose there are three possible outcomes:

- P costs $2000 and gives 15 utility to the world
- Q costs $1000 and gives 10 utility to the world
- R costs $1000 and gives 10 utility to the world

Suppose Alice and Bob each have $1000 to donate. Consider two scenarios:

Scenario 1: Both Alice and Bob give $1000 to P. The world gets 15 more utility. Both Alice and Bob are counterfactually responsible for giving 15 utility to the world.

Scenario 2: Alice gives $1000 to Q and Bob gives $1000 to R. The world gets 20 more utility. Both Alice and Bob are counterfactually responsible for giving 10 utility to the world.

From the world's perspective, scenario 2 is better. However, from Alice's and Bob's individual perspectives (if each is maximizing her own counterfactual impact), scenario 1 is better. This seems wrong; we'd want to coordinate somehow so that we achieve scenario 2 instead of scenario 1.
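The divergence can be sketched in a few lines of Python. This is a hypothetical model of the numbers above; the `utility` and `counterfactual_impact` functions and the funding thresholds are my own encoding, not anything from the original comment:

```python
def utility(donations):
    """Total utility given funded amounts per project.
    P needs $2000 for 15 utility; Q and R each need $1000 for 10."""
    total = 0
    if donations.get("P", 0) >= 2000:
        total += 15
    if donations.get("Q", 0) >= 1000:
        total += 10
    if donations.get("R", 0) >= 1000:
        total += 10
    return total

def counterfactual_impact(donations, donor_gift):
    """Utility with everyone's donations minus utility without this donor's,
    holding the other donors' choices fixed."""
    without = dict(donations)
    for project, amount in donor_gift.items():
        without[project] = without.get(project, 0) - amount
    return utility(donations) - utility(without)

# Scenario 1: Alice and Bob both give $1000 to P.
s1 = {"P": 2000}
alice_1 = counterfactual_impact(s1, {"P": 1000})  # removing Alice unfunds P

# Scenario 2: Alice funds Q, Bob funds R.
s2 = {"Q": 1000, "R": 1000}
alice_2 = counterfactual_impact(s2, {"Q": 1000})

print(utility(s1), alice_1)  # scenario 1: total 15, Alice's impact 15
print(utility(s2), alice_2)  # scenario 2: total 20, Alice's impact 10
```

Each donor's individual counterfactual impact is higher in scenario 1 (15 vs 10) even though the world gets less total utility (15 vs 20), which is exactly the coordination failure described above.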

Comment author: Carl_Shulman 28 May 2018 05:31:01AM 4 points [-]

Are you neglecting to count the negative impact from causing other people to do the suboptimal thing? If I use my funds to set up an exploding matching grant that diverts other donors' funds from better things to a less effective charity, that is a negative part of my impact.

Comment author: rohinmshah  (EA Profile) 29 May 2018 05:13:07PM 2 points [-]

Yes, that's right. I agree that a perfect calculation of your counterfactual impact would do the right thing in this scenario, and probably in all scenarios. My claim is empirical: the actual impact calculations that meta orgs do are of the form I described in my previous comment.

For example, consider the impact calculations that GWWC and other meta orgs publish. If those calculations (with their current methodologies) showed a ratio of 1.1:1, that seems nominally worthwhile (you still get the multiplicative impact), but I would expect it to be better to give directly to charities, to avoid effects like the ones Joey talked about in his post.
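As a toy illustration of why a 1.1:1 ratio is marginal (invented round numbers, not GWWC's actual figures):

```python
# Hypothetical numbers: a meta org spends $100 on fundraising and moves
# $110 to charities, a 1.1:1 ratio. If that $100 would otherwise have
# gone straight to charities, the net gain over direct giving is small,
# and coordination effects like the ones Joey describes could erase it.
meta_spend = 100
money_moved = 110

gross_ratio = money_moved / meta_spend  # 1.1 -- looks worthwhile in isolation
net_gain = money_moved - meta_spend     # only $10 over giving directly

print(gross_ratio, net_gain)
```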

A true full counterfactual impact calculation would consider the world in which GWWC just sends the money straight to charities and convinces other meta orgs to do the same, at which point they see that more money gets donated to charities in total, and so they all close operations and send money straight to charities. I'm arguing that this doesn't happen in practice. (I think Joey and Peter are arguing the same thing.)

Comment author: Halstead 28 May 2018 11:04:46AM *  1 point [-]

Firstly, people who believe in the correct account of counterfactual impact would have incentives to coordinate in the case you outline. Alice would maximise her counterfactual impact (defined as I define it) by coordinating with Bob on project R. The counterfactual impact of her coordinating with Bob would be +5 utility compared to scenario 1. There is no puzzle here.

Secondly, dividing counterfactual impact by contribution does not solve all of these coordination problems. If everyone thought in terms of the Shapley value, then no rational altruist would ever vote, even when the true theory dictates that the expected value of doing so is very high.
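For readers unfamiliar with Shapley values, here is a minimal sketch of the standard computation (each player's average marginal contribution over all orders in which the players could join), applied to the two scenarios from earlier in the thread. The characteristic functions `v1` and `v2` are my own encoding of those numbers:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value: average marginal contribution over all join orders."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += (v(frozenset(coalition)) - before) / len(orders)
    return values

# Scenario 1: both donors fund P, which only succeeds with both of them.
v1 = lambda s: 15 if s == frozenset({"Alice", "Bob"}) else 0
# Scenario 2: Alice funds Q and Bob funds R, each worth 10 independently.
v2 = lambda s: 10 * len(s)

print(shapley_values(["Alice", "Bob"], v1))  # {'Alice': 7.5, 'Bob': 7.5}
print(shapley_values(["Alice", "Bob"], v2))  # {'Alice': 10.0, 'Bob': 10.0}
```

On this accounting each donor is credited with 7.5 in scenario 1 and 10 in scenario 2, so the credit-splitting does favor the better scenario in that particular example, though as argued above it misfires elsewhere.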

Also consider the $1bn benefits case outlined above. Suppose that the situation is as described above, but my action costs $2 and I take one billionth of the credit for the success of the project. In that case, the Shapley-adjusted benefits of my action would be $1 and the costs $2, so my action would not be worthwhile. I would therefore leave $1bn of value on the table.
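The arithmetic in this case, spelled out with the numbers as described:

```python
# The $1bn case as described above: a project produces $1bn of value,
# I take one-billionth of the credit, and my action costs $2.
total_benefit = 1_000_000_000  # $1bn of value if the project succeeds
my_share = 1 / 1_000_000_000   # one-billionth of the credit
my_cost = 2

shapley_adjusted_benefit = total_benefit * my_share  # $1
worthwhile = shapley_adjusted_benefit > my_cost

print(shapley_adjusted_benefit, worthwhile)  # benefit $1 < cost $2, so no
```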

Comment author: rohinmshah  (EA Profile) 29 May 2018 06:28:50PM 1 point [-]

For the first point, see my response to Carl above. I think you're right in theory, but in practice it's still a problem.

For the second point, I agree with Flodorner that you would use either the Shapley value or the probability of changing the outcome, not both. I don't know much about Shapley values, but I suspect I would agree with you that they are suboptimal in many cases. I don't think there is a good theoretical solution besides "consider every possible outcome and choose the best one", which we obviously can't do as humans. Shapley values are one tractable way of attacking the problem without having to think about all possible worlds, but I'm not surprised that there are cases where they fail. I'm advocating for "think about this scenario", not "use Shapley values".

I think the $1bn benefits case is a good example of a pathological case where Shapley values fail horribly (assuming they do what you say they do, again, I don't know much about them).

My overall position is something like "In the real world when we can't consider all possibilities, one common failure mode in impact calculations is the failure to consider the scenario in which all the participants who contributed to this outcome instead do other altruistic things with their money".

Comment author: Flodorner 29 May 2018 08:51:16PM 0 points [-]

At this point, I think that to analyze the $1bn case correctly, you'd have to subtract everyone's opportunity cost in the calculation of the Shapley value (if you want to use it here). That way, the example should yield what we expect.
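One toy reading of this suggestion, with invented numbers (3 symmetric donors rather than a billion, so each donor's Shapley share is just an equal split of the net coalition value):

```python
# Hypothetical toy version of the adjustment: 3 symmetric donors, each
# gives $2; the project pays out 9 utility only if all three join.
# Each donor's next-best use of the $2 would produce 1 utility elsewhere.
n = 3
gross_benefit = 9
opportunity_cost = 1  # per donor: utility forgone elsewhere

# Subtract opportunity costs from the coalition value before splitting
# credit; by symmetry each donor's Shapley share is an equal split.
net_value = gross_benefit - n * opportunity_cost
shapley_share = net_value / n

print(shapley_share)  # positive, so contributing beats the outside option
```

With the opportunity costs subtracted, a positive share means contributing is worthwhile, which is the verdict we'd expect here; if the gross benefit fell below the combined opportunity costs, the share would go negative and correctly recommend the alternatives instead.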

I might do a more general writeup about Shapley values (their advantages, disadvantages, and when it makes sense to use them) if I find the time to read a bit more about the topic first.

Comment author: Peter_Hurford  (EA Profile) 28 May 2018 05:30:13AM 0 points [-]

^ This is what I wanted to say, but even better than how I was going to say it.