Comment author: rohinmshah  (EA Profile) 17 June 2018 09:28:35AM *  3 points [-]

Just wanted to note that this does not mean you should enter any donor lottery whenever there are economies of scale. (Not that anyone is saying this.) For example, if a terrorist group needs $100,000 to launch a devastating attack, but won't be able to do anything with their current amount of $10,000, you probably shouldn't enter a donor lottery with them.

Comment author: Halstead 30 May 2018 09:39:06AM 0 points [-]

In Joey's example, I can donate $500 to GWWC instead of AMF. If I donate to AMF, AMF gets $500 compared to the world in which I don't donate. If I donate to GWWC, then AMF gets $1000 compared to the world in which I don't donate. Clearly, I should donate to GWWC if I care about counterfactual impact. If GWWC donates the $500 directly to AMF, then value has been lost.
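A minimal sketch of that arithmetic (the 2x figure is simply the example's assumption, not an actual GWWC estimate):

```python
donation = 500
gwwc_multiplier = 2   # assumed: $500 to GWWC moves $1000 to AMF, per the example

amf_gets_if_direct = donation                        # $500 reaches AMF
amf_gets_if_via_gwwc = donation * gwwc_multiplier    # $1000 reaches AMF (counterfactually)

print(amf_gets_if_direct, amf_gets_if_via_gwwc)
```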

The coordination problem is a separate question to how individual organisations should count their own counterfactual impact.

Comment author: rohinmshah  (EA Profile) 01 June 2018 09:32:13PM 1 point [-]

Forget about the organization's own counterfactual impact for a moment.

Do you agree that, from the world's perspective, it would be better in Joey's scenario if GWWC, Charity Science, and TLYCS were to all donate their money directly to AMF?

Comment author: rohinmshah  (EA Profile) 29 May 2018 08:24:05PM 1 point [-]

What are some examples of direct work student groups can do? My understanding was that most groups wanted to do direct work for many of the reasons you mention (certainly I wanted that) but there weren't any opportunities to do so.

I focused on field building mainly because it was the only plausible option that would have real impact. (Like Greg, I'm averse to doing direct work that will knowably have low direct impact.)

Comment author: Halstead 29 May 2018 02:30:45PM 0 points [-]

If that is what he is arguing, I agree, but I don't think he is arguing that. He writes:

"This person would become quadruple counted in EA, with each organization using their donations as impact to justify their running."

Each organisation would in fact be right to count the impact in the way described.

Comment author: rohinmshah  (EA Profile) 29 May 2018 06:29:02PM 1 point [-]

To try to narrow down the disagreement: Would you donate to GWWC instead of AMF if their impact calculation (using their current methodology) showed that $1.10 went to AMF for every $1 given to GWWC? I wouldn't.

Comment author: Halstead 28 May 2018 11:04:46AM *  1 point [-]

Firstly, people who believe in the correct account of counterfactual impact would have incentives to coordinate in the case you outline. Alice would maximise her counterfactual impact (defined as I define it) by coordinating with Bob on project R. The counterfactual impact of her coordinating with Bob would be +5 utility compared to scenario 1. There is no puzzle here.

Secondly, dividing counterfactual impact by contribution does not solve all these coordination problems. If everyone thought as per the Shapley value, then no rational altruist would ever vote, even when the true theory dictates that the expected value of doing so is very high.

Also consider the $1bn benefits case outlined above. Suppose that the situation is as described above but my action costs $2 and I take one billionth of the credit for the success of the project. In that case, the Shapley-adjusted benefits of my action would be $1 and the costs $2, so my action would not be worthwhile. I would therefore leave $1bn of value on the table.
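A minimal sketch of that arithmetic, under the reading that "Shapley-adjusted" just means the project's value is split equally among the billion contributors (an assumption made here for illustration; actual Shapley values need not divide credit this evenly):

```python
# Hypothetical numbers from the example above: the project creates $1bn of value,
# my action costs $2, and credit is split a billion ways.
total_benefit = 1_000_000_000   # $1bn of value if the project succeeds
cost = 2                        # cost of my action
n_contributors = 1_000_000_000  # credit divided a billion ways

counterfactual_value = total_benefit - cost                  # $999,999,998: clearly worth doing
credit_split_value = total_benefit / n_contributors - cost   # $1 - $2 = -$1: looks not worth doing

print(counterfactual_value, credit_split_value)
```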

Comment author: rohinmshah  (EA Profile) 29 May 2018 06:28:50PM 1 point [-]

For the first point, see my response to Carl above. I think you're right in theory, but in practice it's still a problem.

For the second point, I agree with Flodorner that you would either use the Shapley value, or you would use the probability of changing the outcome, not both. I don't know much about Shapley values, but I suspect I would agree with you that they are suboptimal in many cases. I don't think there is a good theoretical solution besides "consider every possible outcome and choose the best one" which we obviously can't do as humans. Shapley values are one tractable way of attacking the problem without having to think about all possible worlds, but I'm not surprised that there are cases where they fail. I'm advocating for "think about this scenario", not "use Shapley values".

I think the $1bn benefits case is a good example of a pathological case where Shapley values fail horribly (assuming they do what you say they do, again, I don't know much about them).

My overall position is something like "In the real world when we can't consider all possibilities, one common failure mode in impact calculations is the failure to consider the scenario in which all the participants who contributed to this outcome instead do other altruistic things with their money".

Comment author: Carl_Shulman 28 May 2018 05:31:01AM 4 points [-]

Are you neglecting to count the negative impact from causing other people to do the suboptimal thing? If I use my funds to set up an exploding matching grant that will divert the funds of other donors from better things to a less effective charity, that is a negative part of my impact.

Comment author: rohinmshah  (EA Profile) 29 May 2018 05:13:07PM 2 points [-]

Yes, that's right. I agree that a perfect calculation of your counterfactual impact would do the right thing in this scenario, and probably in all scenarios. Mine is an empirical claim: the actual impact calculations that meta-orgs do are of the form I wrote in my previous comment.

For example, consider the impact calculations that GWWC and other meta orgs have. If those impact calculations (with their current methodologies) showed a ratio of 1.1:1, that seems nominally worthwhile (you still have the multiplicative impact), but I would expect that it would be better to give directly to charities to avoid effects like the ones Joey talked about in his post.

A true full counterfactual impact calculation would consider the world in which GWWC just sends the money straight to charities and convinces other meta orgs to do the same, at which point they see that more money gets donated to charities in total, and so they all close operations and send money straight to charities. I'm arguing that this doesn't happen in practice. (I think Joey and Peter are arguing the same thing.)

Comment author: Halstead 27 May 2018 04:04:14PM *  1 point [-]

It needs to be explained why there is a paradox. I have not yet seen an explanation of why there might be thought to be one. EAs are concerned with having counterfactual impact. If you were a necessary condition of some benefit B occurring, then you have had counterfactual impact.

Re voting, I'm appealing to how almost everyone in the academic literature assesses the expected value of voting, which is not by dividing the total value by the number of voters. I'm also appealing to a common EA idea, discussed by Parfit and mentioned in Will's book, which is that voting is sometimes rational for altruistic voters. On your approach, it would pretty much always be irrational to vote even if the social benefits were extremely large: every social benefit would always be divided by the number of decisive voters, and so would be divided by many millions in any large election.

I don't understand why the expected value approach says that the first few votes have a value of 0. Also, the ordering in which votes are cast is completely irrelevant to judging a voter's counterfactual impact, because all votes are indistinguishable with respect to causing the outcome: it doesn't matter if I voted first and Emma voted last; we would still be decisive voters.

Comment author: rohinmshah  (EA Profile) 27 May 2018 07:08:23PM 4 points [-]

It's not a paradox. The problem is just that, if everyone thought this way, we would get suboptimal outcomes -- so maybe we should figure out how to avoid that.

Suppose there are three possible outcomes:

- P has cost $2000 and gives 15 utility to the world
- Q has cost $1000 and gives 10 utility to the world
- R has cost $1000 and gives 10 utility to the world

Suppose Alice and Bob each have $1000 to donate. Consider two scenarios:

Scenario 1: Both Alice and Bob give $1000 to P. The world gets 15 more utility. Both Alice and Bob are counterfactually responsible for giving 15 utility to the world.

Scenario 2: Alice gives $1000 to Q and Bob gives $1000 to R. The world gets 20 more utility. Both Alice and Bob are counterfactually responsible for giving 10 utility to the world.

From the world's perspective, scenario 2 is better. However, from Alice and Bob's individual perspective (if they are maximizing their own counterfactual impact), scenario 1 is better. This seems wrong, we'd want to somehow coordinate so that we achieve scenario 2 instead of scenario 1.
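A minimal sketch of the two scenarios, assuming (for illustration) that a project produces its utility only if fully funded, and that each donor's counterfactual impact is the utility lost if their gift alone were removed:

```python
# Projects: (cost, utility if fully funded). An underfunded project produces nothing.
projects = {"P": (2000, 15), "Q": (1000, 10), "R": (1000, 10)}

def total_utility(donations):
    """donations maps project name -> total dollars given to it."""
    return sum(u for name, (cost, u) in projects.items() if donations.get(name, 0) >= cost)

def counterfactual_impact(donations, project, amount):
    """Utility lost if one donor's gift of `amount` to `project` is removed."""
    without = dict(donations)
    without[project] = without.get(project, 0) - amount
    return total_utility(donations) - total_utility(without)

# Scenario 1: Alice and Bob each give $1000 to P.
s1 = {"P": 2000}
print(total_utility(s1), counterfactual_impact(s1, "P", 1000))   # 15 total; 15 each

# Scenario 2: Alice funds Q, Bob funds R.
s2 = {"Q": 1000, "R": 1000}
print(total_utility(s2), counterfactual_impact(s2, "Q", 1000))   # 20 total; 10 each
```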

Comment author: rohinmshah  (EA Profile) 03 January 2018 03:44:51AM 4 points [-]

Crucial Premise: Necessarily, the more someone is willing to pay for a good, the more welfare they get from consuming that good.

It seems to me that this premise as you've stated it is in fact true. The thing that is false is a stronger statement:

Strengthened Premise: Necessarily, if person A is willing to pay more for a good than person B, then person A gets more welfare from that good than person B.

For touting/scalping, you also need to think about the utility of people besides Pete and Rich -- for example, the producers of the show and the scalper (who is trading his time for money). Then there are also more diffuse effects: if tickets go for $1000 instead of $50, there will be more Book of Mormon plays in the future since it is more lucrative, and more people can watch it. The main benefit of markets comes through these sorts of effects.

Comment author: ClaireZabel 29 October 2017 10:43:21PM 17 points [-]

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you do, and I think it's both rare and not inordinately difficult to adjust for common biases, such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn't. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam's credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general there is the nigh-intractable challenge of trying to infer cases of double counting from the patterns of 'all things considered' evidence.

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.
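A minimal numeric sketch of the double-counting in the quoted example, assuming simple averaging as the aggregation rule:

```python
# Inside views ("by my lights"): Adam 0.8, Beatrice 0.4, Charlie 0.4.
adam, beatrice, charlie = 0.8, 0.4, 0.4

correct = (adam + beatrice + charlie) / 3           # ~0.53, averaging everyone's inside view

# Beatrice first publishes a modest credence (averaging with Adam),
# and Charlie then averages over the *stated* views instead:
beatrice_stated = (adam + beatrice) / 2              # 0.6
distorted = (adam + beatrice_stated + charlie) / 3   # 0.6: Adam counted one and a half times

print(correct, distorted)
```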

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view, or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment author: rohinmshah  (EA Profile) 30 October 2017 12:45:33AM 1 point [-]

As one data point, I did not have this association with "impressions" vs. "beliefs", even though I do in fact distinguish between these two kinds of credences and often report both (usually with a long clunky explanation since I don't know of good terminology for it).

Comment author: rohinmshah  (EA Profile) 13 October 2017 12:10:46AM *  4 points [-]

EA Berkeley seemed more positive about their student-led EA class, calling it “very successful”, but we believe it was many times less ambitious

Yeah, that's accurate. I doubt that any of our students are more likely to go into prioritization research as a result of the class. I could name a few people who might change their career as a result of the class, but that would also be a pretty low number, and for each individual person I'd put the probability at less than 50%. "Very successful" here means that a large fraction of the students were convinced of EA ideas and were taking actions in support of them (such as taking the GWWC pledge, and going veg*n). It certainly seems a lot harder to cause career changes, without explicitly selecting for people who want to change their career (as in an 80K workshop).

We implicitly predicted that other team members would also be more motivated by the ambitious nature of the Project, but this turned out not to be the case. If anything, motivation increased after we shifted to less ambitious goals.

We observed the same thing. In the first iteration of EA Berkeley's class, there was some large amount of money (probably ~$5000) that was allocated for the final project, and students were asked to propose projects that they could run with that money. This was in some sense even more ambitious than OxPrio, since donating it to a charity was a baseline -- students were encouraged to think of more out-of-the-box ideas as well. What ended up happening was that the project was too open-ended for students to really make progress on, and while people proposed projects because it was required to pass the course, they didn't actually get implemented, and we used the $5000 to fund costs for EA Berkeley in future semesters.
