Comment author: HoldenKarnofsky 22 June 2017 01:05:21AM 6 points [-]

I super highly recommend reading this report. In full, including many of the appendices (and footnotes :) )

I thought it was really interesting, and helpful for thinking this question through and understanding the state of what evidence and arguments are out there (unfortunately there is much less to go on than I’d even expected, though).

I was the most proximate audience for the report, so discount my recommendation as much as feels appropriate with that in mind.

Comment author: RomeoStevens 28 March 2017 07:22:54PM *  0 points [-]

Whoops, I somehow didn't see this until now. Scattered EA discourse, shrug.

I am in support of only engaging selectively.

"I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions,"

great!

"I think there is a bit of a false dichotomy between 'engage in public discourse' and 'let one's views calcify'; unfortunately I think the former does little to prevent the latter."

agreed

"I don't understand the claim that 'The principles section is an outline of a potential future straightjacket.' Which of the principles in that section do you have in mind?"

the whole thing. Principles are better as descriptions and not prescriptions :)

WRT preventing views from calcifying, I think it is very very important to actively cultivate something similar to

"But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again." -Herbert Simon, Nobel Laureate, founding father of the AI field

I've been researching top and breakout performance, and this sort of thing keeps coming up again and again. Fortunately, creative reasoning is not magic: it has been studied, and its parameters can be intentionally cultivated.

This talk gives a brief overview: https://vimeo.com/89936101

And I recommend skimming one of Edward de Bono's books, such as Six Thinking Hats. He outlined much of the sort of reasoning behind Zero to One, The Lean Startup, and others way back in the early nineties. It may be that Open Phil is already having such conversations internally, in which case, great! That would make me much more bullish on the idea that Open Phil has a chance at outsize impact. My main proxy metric is an Umeshism: if you never output any batshit crazy ideas, your process is way too conservative.

Comment author: HoldenKarnofsky 30 March 2017 11:43:19PM 0 points [-]

The principles were meant as descriptions, not prescriptions.

I'm quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: "I think that one of the best ways to learn is to share one's impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things." But because the risks are what they are, I've concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Comment author: MichaelPlant 05 March 2017 07:48:06PM 1 point [-]

I'd like to build on the causal chain point. I think there's something unsatisfying about the way Holden's set up the problem.

I took the general thought to be: "we don't get useful comments from the general public; we get useful comments from those few people who read lots of our stuff and then talk to us privately." But if the typical path is that 1. people read the OPP blog (public), and then 2. talk to OPP privately (perhaps because they don't believe anyone takes public discourse seriously), and doing 2 means you no longer count as part of the general public, then almost by definition public discourse isn't going to look useful: those motivated enough to engage in private correspondence are no longer counted as part of public discourse!

Maybe I've misunderstood something, but it seems very plausible to me that the public discourse generates those useful private conversations even if the useful comments don't happen on public forums themselves.

I'm also unsure whether the EA Forum counts as the public discourse Holden doesn't expect to be useful, or as the private discourse that might be; this ambiguity puts pressure on the general point. If you typify 'public discourse' as 'talking to people who don't know much', then of course you wouldn't expect it to be useful.

Comment author: HoldenKarnofsky 07 March 2017 02:44:00AM 2 points [-]

Michael, this post wasn't arguing that there are no benefits to public discourse; it's describing how my model has changed. I think the causal chain you describe is possible and has played out that way in some cases, but it seems to call for "sharing enough thinking to get potentially helpful people interested" rather than for "sharing thinking and addressing criticisms comprehensively (or anything close to it)."

The EA Forum counts for me as public discourse, and I see it as being useful in some ways, along the lines described in the post.

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM *  17 points [-]

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" vs "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would ever have come about without the existence of GiveWell. GiveWell's publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel universe "Closed Philanthropy Project" with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already had giving philosophies very much like OpenPhil's, but we don't hear about them because they don't make their research public.

"I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, 'We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally.'"

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: HoldenKarnofsky 07 March 2017 02:42:16AM *  5 points [-]

Hi John, thanks for the thoughts.

I agree with what you say about public discourse as an "advertisement" and "critical first step," and allude to this somewhat in the post. And we plan to continue a level of participation of public discourse that seems appropriate for that goal - which is distinct from the level of public discourse that would make it feasible for readers to understand the full thinking behind the many decisions we make.

I don't so much agree that there is a lot of low-hanging fruit to be had in terms of getting more potentially helpful criticism from the outside. We have published lists of questions and asked for help thinking about them (see this series from 2015 as well as this recent post; another recent example is the Worldview Diversification post, which ended with an explicit call for more ideas, somewhat along the lines you suggest). We do generally thank people for their input, make changes when warranted, and let people know when we've made changes (recent example from GiveWell).

And the issue isn't that we've gotten no input, or that all the input we've gotten has been low-quality. I've seen and had many discussions about our work with many very sharp people, including via phone and in-person research discussions. I've found these discussions helpful in the sense of focusing my thoughts on the most controversial premises, understanding where others are coming from, etc. But I've become fairly convinced - through these discussions and through simply reflecting on what kind of feedback I would be giving groups like GiveWell and Open Phil, if I still worked in finance and only engaged with their work occasionally - that it's unrealistic to expect many novel considerations to be raised by people without a great deal of context.

Even if there isn't low-hanging fruit, there might still be "high-hanging fruit." It's possible that if we put enough effort and creative thinking in, we could find a way to get a dramatic increase in the quantity and quality of feedback via public discourse. But we don't currently have any ideas for this that seem highly promising; my overall model of the world (as discussed in the previous paragraph) predicts that it would be very difficult; and the opportunity cost of such a project is higher than it used to be.

Comment author: RomeoStevens 23 February 2017 10:25:12PM *  10 points [-]

I'm skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, which makes me skeptical of any particular story being causal.

To be clear, I also recognize the high cost of public discourse. But some of those costs are unnecessary, borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.

There was a thread on Marginal Revolution many years ago about why more economists don't do the blogging thing given that it seems to have resulted in outsize influence for GMU. Cowen said his impression was that many economists tried, quickly 'made fools of themselves' in some minor way, and stopped. Being wrong publicly is very very difficult. And increasingly difficult the more Ra energy one has acquired.

So, three claims.

  • Outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
  • Inability to only selectively engage with criticism will lead to other problems/coping strategies that might be harmful.
  • Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard-to-detect dimensions. The principles section is an outline of a potential future straightjacket.

Comment author: HoldenKarnofsky 01 March 2017 06:32:45PM 5 points [-]

Thanks for the thoughts!

I'm not sure I fully understand what you're advocating. You talk about "only selectively engag[ing] with criticism" but I'm not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.

I agree that "we should be skeptical of our stories about why we do things, even after we try to correct for this." I'm not sure that the reasons I've given are the true ones, but they are my best guess. I note that the reasons I give here aren't necessarily very different from the reasons others making similar transitions would give privately.

I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

Comment author: vipulnaik 24 February 2017 09:16:53PM 9 points [-]

Thank you for the illuminative post, Holden. I appreciate you taking the time to write this, despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, which I attempt to articulate below.

In brief: (1) Frustrating vagueness and seas of generality in your current post and recent posts, (2) Overstated connotations of expertise with regards to transparency and openness, (3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative, (4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility.

I'll post each point as a reply comment to this since the overall comment exceeds the length limits for a comment.

Comment author: HoldenKarnofsky 01 March 2017 06:31:00PM 6 points [-]

Thanks for the thoughts, Vipul! Responses follow.

(1) I'm sorry to hear that you've found my writing too vague. There is always a tradeoff between time spent, breadth of issues covered, and detail/precision. The posts you hold up as more precise are on narrower topics; the posts you say are too vague are attempts to summarize/distill views I have (or changes of opinions I've had) that stem from a lot of different premises, many hard to articulate, but that are important enough that I've tried to give people an idea of what I'm thinking. In many cases their aim is to give people an idea of what factors we are and aren't weighing, and to help people locate beliefs of ours they disagree (or might disagree) with, rather than to provide everything needed to evaluate our decisions (which I don't consider feasible).

While I concede that these posts have had limited precision, I strongly disagree with this: "the vagueness is not a bug, from your perspective, it's a corollary of trying to make your content really hard for people to take issue with." That is not my intention. The primary goal of these posts has been to help people understand where I'm coming from and where the most likely points of disagreement are likely to lie. Perhaps they failed at this (I suspect different readers feel differently about this), but that was what they were aiming to do, and if I hadn't thought they could do that, I wouldn't have written them.

(2) I agree with all of your thoughts here except for the way you've characterized my comments. Is there a part of this essay that you thought was making a universal claim about transparency, as opposed to a claim about my own experience with it and how it has affected my own behavior and principles? The quote you provide does not seem to point this way.

(3) My definition of "public discourse" does not exclude benefits that come from fundraising/advocacy/promotion. It simply defines "public discourse" as writing whose focus is on truth-seeking rather than those things. This post, and any Open Phil blog post, would count as "public discourse" by my definition, and any fundraising benefits of these posts would count as benefits of public discourse.

I also did not claim that the reputational effects of openness are skewed negative. I believe that the reputational effects of our public discourse have been net positive. I believe that the reputational effects of less careful public discourse would be skewed negative, and that has implications for how time-consuming it is for us to engage, which in turn has implications for how much we engage.

(4) We have incurred few costs from public discourse, but we are trying to avoid risks that we perceive. As for "who gets the blame," I didn't intend to cover that topic one way or the other in this post. The intent of the post was to help people understand how and why my attitude toward public discourse has changed and what to expect from me in the future.

Comment author: HoldenKarnofsky 01 March 2017 06:29:21PM 8 points [-]

Thanks for the comments, everyone!

I appreciate the kind words about the quality and usefulness of our content. To be clear, we still have a strong preference to share content publicly when it seems it would be useful and when we don't see significant downsides. And generally, the content that seems most likely to be helpful has fairly limited overlap with the content that poses the biggest risks.

I have responded to questions and criticisms on the appropriate threads.

Some Thoughts on Public Discourse

Thanks to Ben Hoffman and several of my coworkers for reviewing a draft of this. It seems to me that there have been some disagreements lately in the effective altruism community regarding the proper role and conduct for public discourse (in particular, discussions on the public Web). I decided to share...

Comment author: HoldenKarnofsky 17 February 2017 05:13:40AM 17 points [-]

Hi Ben,

Thanks for putting so much thought into this topic and sharing your feedback.

I'm going to discuss the reasoning behind the "splitting" recommendation that was made in 2015, as well as our current stance, and how they relate to your points. I'll start with the latter because I think that will make this comment easier to follow. I'll then address some more specific points and suggestions.

I'm not addressing recommendations addressed to GiveWell - I think it will make more sense for someone more involved in GiveWell to do that - though I will address both the 2015 and 2016 decisions about how much to recommend that Good Ventures support GiveWell's top charities, because I was closely involved in those decisions.

Current stance on Good Ventures support for GiveWell's top charities. As noted here, we (Open Phil) currently feel that the "last dollar" probably beats GiveWell's top charities according to our (extrapolated) values. We are quite uncertain of this view at this time and are hoping to do a more thorough investigation and writeup this year. We recommended $50 million to top charities for the 2016 giving season, for reasons laid out in that post and not discussed in the original post on this thread.

You seem to find our take on the "last dollar" a difficult-to-justify conclusion (or at least difficult to square with the fact that we are currently well under eventual peak giving, and not closing the gap via the actions you list under "symmetry"). You argue that the key issue here is the question of returns to scale, and say that we should regrant to larger organizations if we think returns are increasing, and smaller organizations if returns are decreasing. But I don't think the question "Are returns to scale increasing or decreasing?" is a particularly core question here (nor does it have a single general answer). Instead, our reason for thinking our "last dollar" can beat top charities and many other options is largely bound up in our model of ourselves as people who aspire to become "experts" in the domain of giving away large amounts of money effectively and according to the basic stance of effective altruism. I've written about my model of broad market efficiency in the past; I don't think it is trivial to "beat the market," but nor do I think it is prohibitively difficult, and I expect that we can do so in the long run. Another key part of the view is that there is more than one plausible worldview under which it looks (in the long run) quite tractable to spend essentially arbitrary amounts of money in a way that has better value for money than top charities (this is also discussed in the post on our current view).

Previously, our best guess was different. We thought that the "last dollar" was worse than top charities - but not much worse, and with very low confidence. We fully funded things we thought were much better than the "last dollar" (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would've done slightly more good, but the dollars spent by others would've done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.

I think that an important factor playing into both decisions, and a potentially key factor causing you and me to see things differently, pertains to conservatism. For the 2015 decision in particular, we didn't have much time to think carefully about these issues, and "fully funding" might be the kind of thing we couldn't easily walk back (we worried about a consistent dynamic in which our entering a cause led to other donors' immediately fleeing it). It's often the case that when we need to make high-stakes decisions without sufficient time or information, we err on the side of preserving option value and avoiding particularly bad outcomes (especially those that pose risks to GiveWell or Open Phil as an organization); this often leads to "hacky" actions that are knowably not ideal for any particular set of facts and values, if we had confidently sorted these facts and values out (but we haven't).

Responses to more specific points

"First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?"

I don't think this is a case of "defecting" or "adversarial framing." We were trying to approximate the outcome we would've reached if we'd been able to have a friendly, open discussion and coordination with individual donors, which we couldn't.

"if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s 'fair share' seems more likely to be in excess of 80%, than a 50-50 split."

We expected individual giving to grow over time, and thought that it would grow less if we had a policy of fully funding top charities. Calculating "fair share" based on current giving alone, as opposed to giving capacity construed more broadly and over a longer-term, would have created the kinds of problematic incentives we wrote that we were worried about. 50% is within range of what I'd guess would be a long-term fair share. Given that it is within range, 50% was chosen as a proportion that would (accurately) signal that we had chosen it fairly arbitrarily, in order to commit credibly to splitting, as mentioned in the post.

"This ethical objection doesn't make sense. It implies that it's unethical to cooperate on the iterated Prisoner's Dilemma."

The ethical objection was to being misleading, not to the game-theoretic aspects of the approach.

I don't follow your argument under "Influence via habituation vs track record." The reason there was "not enough money to cover the whole thing" was because we were unwilling to pay more than what we considered our fair share, due to the incentives it would create and the long-run implications for total positive impact. We were open about that. I also think that the "surface case" for low-engagement donors who didn't read our work was about as close to the truth as a surface case could be. (I would describe the "surface case" as something like: "If I give this money, then bednets will be delivered; if I do not, that will not happen." I do not believe that the majority of GiveWell donors - including very large donors - base their giving on Open Phil's opinions, or in many cases even know what Open Phil is.) I don't see how this situation implies any of your #1-#3, and I don't see how it is deceptive.

"Access via size" and "Independence via many funders" were not part of our reasoning.

(Continued in next comment)

Comment author: HoldenKarnofsky 17 February 2017 05:14:13AM 13 points [-]

(Continued from previous comment)

Thoughts on your recommendations. I appreciate your making suggestions, and providing helpful context on the spirit in which you intend them. Here I only address suggestions for Open Phil.

  • Maintaining a list of open investigations: I see some case for this, but at the moment we don't plan on it. I don't think we can succinctly and efficiently maintain such a list without incurring a number of risks (e.g., causing people to excessively plan on our support; causing controversy due to hasty communication or miscommunication). Instead, we encourage people who want to know whether we're working on something to contact us and ask.
  • We have considered and in some cases done some (limited) execution on all of the suggestions you make under "Symmetry," and all remain potential tools if we want to ramp up giving further in the future. I think they are all good ideas, perhaps things we should have done more of already, and perhaps things we will do more of later on. However, I do not think the situation is "symmetrical" as you imply, because our mission - which we are building up expertise and capacity around optimizing for - is giving away large sums of money effectively and according to the basic stance of effective altruism. The same is not generally true of our grantees. We generally try to do something approximating "give to grantees up until the point where marginal dollars would be worse than our last dollar" (though of course very imprecisely and with many additional considerations). Finally, I'll add that any of the four options you list - and many more - are things we could probably find a way of doing if we put in some time and internal discussion, resulting in good outcomes. But we think that time and internal discussion is better spent on other priorities that will lead to better outcomes. In general, any new idea we pursue involves a fair amount of discussion and refinement, which itself has major opportunity costs, so we accept a degree of inertia in our policies and approaches.
  • For reasons stated above and in previous posts, I don't believe the optimal level of funding for top charities is 100% of the gap or 0%. I also wish to note that your comment "I expect fairly few donors would accept this offer. But it still seems like it would be a powerful, credible signal of cooperative intent." highlights what I suspect may be one of the most important disagreements underlying this discussion. As noted above, we are comfortable with "hacky" approaches to dilemmas that let us move on to our next priority, and we are very unlikely to undertake time-consuming projects with little expected impact other than to signal cooperative intent in a general and undirected way. For us, a disagreement whose importance is mostly symbolic is not likely to become a priority. We would be more likely to prioritize disagreements that implied we could do much more good (or much less harm) if we took some action, such that this action is competitive with our other priorities.
  • I think your final suggestion would have substantial costs, and don't agree that you've identified sufficient harms to consider it.

I'm not sure I've understood all of your points, but hopefully this is helpful in identifying which threads would be useful to pursue further. Thanks again for your thoughtful feedback.

Comment author: HoldenKarnofsky 17 February 2017 05:13:40AM 17 points [-]

Hi Ben,

Thanks for putting so much thought into this topic and sharing your feedback.

I'm going to discuss the reasoning behind the "splitting" recommendation that was made in 2015, as well as our current stance, and how they relate to your points. I'll start with the latter because I think that will make this comment easier to follow. I'll then address some more specific points and suggestions.

I'm not addressing recommendations addressed to GiveWell - I think it will make more sense for someone more involved in GiveWell to do that - though I will address both the 2015 and 2016 decisions about how much to recommend that Good Ventures support GiveWell's top charities, because I was closely involved in those decisions.

Current stance on Good Ventures support for GiveWell's top charities. As noted here, we (Open Phil) currently feel that the "last dollar" probably beats GiveWell's top charities according to our (extrapolated) values. We are quite uncertain of this view at this time and are hoping to do a more thorough investigation and writeup this year. We recommended $50 million to top charities for the 2016 giving season, for reasons laid out in that post and not discussed in the original post on this thread.

You seem to find our take on the "last dollar" a difficult-to-justify conclusion (or at least difficult to square with the fact that we are currently well under eventual peak giving, and not closing the gap via the actions you list under "symmetry"). You argue that the key issue here is the question of returns to scale, and say that we should regrant to larger organizations if we think returns are increasing, and smaller organizations if returns are decreasing. But I don't think the question "Are returns to scale increasing or decreasing?" is a particularly core question here (nor does it have a single general answer). Instead, our reason for thinking our "last dollar" can beat top charities and many other options is largely bound up in our model of ourselves as people who aspire to become "experts" in the domain of giving away large amounts of money effectively and according to the basic stance of effective altruism. I've written about my model of broad market efficiency in the past; I don't think it is trivial to "beat the market," but nor do I think it is prohibitively difficult, and I expect that we can do so in the long run. Another key part of the view is that there is more than one plausible worldview under which it looks (in the long run) quite tractable to spend essentially arbitrary amounts of money in a way that has better value for money than top charities (this is also discussed in the post on our current view).

Previously, our best guess was different. We thought that the "last dollar" was worse than top charities - but not much worse, and with very low confidence. We fully funded things we thought were much better than the "last dollar" (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would've done slightly more good, but the dollars spent by others would've done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.

I think that an important factor playing into both decisions, and a potentially key factor causing you and me to see things differently, pertains to conservatism. For the 2015 decision in particular, we didn't have much time to think carefully about these issues, and "fully funding" might be the kind of thing we couldn't easily walk back (we worried about a consistent dynamic in which our entering a cause led to other donors' immediately fleeing it). It's often the case that when we need to make high-stakes decisions without sufficient time or information, we err on the side of preserving option value and avoiding particularly bad outcomes (especially those that pose risks to GiveWell or Open Phil as an organization); this often leads to "hacky" actions that we know are not ideal under any particular set of facts and values we might have confidently sorted out (but we haven't sorted them out).

Responses to more specific points

"First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?"

I don't think this is a case of "defecting" or "adversarial framing." We were trying to approximate the outcome we would've reached if we'd been able to have a friendly, open discussion and coordination with individual donors, which we couldn't.

"if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s 'fair share' seems more likely to be in excess of 80%, than a 50-50 split."

We expected individual giving to grow over time, and thought that it would grow less if we had a policy of fully funding top charities. Calculating "fair share" based on current giving alone, as opposed to giving capacity construed more broadly and over a longer term, would have created the kinds of problematic incentives we wrote that we were worried about. 50% is within the range of what I'd guess a long-term fair share to be. Within that range, 50% was chosen as a proportion that would (accurately) signal that we had chosen it somewhat arbitrarily, in order to commit credibly to splitting, as mentioned in the post.
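The point about how the choice of baseline changes the computed share can be sketched with invented numbers (these figures are illustrative only, not GiveWell's or Good Ventures' actual numbers):

```python
# Hypothetical sketch: a proportional "fair share" of a funding gap depends
# heavily on whether it is computed from current giving or from longer-term
# giving capacity. All dollar figures below are invented for illustration.

def fair_share(funder_dollars: float, other_dollars: float) -> float:
    """Funder's proportional share of a funding gap, by dollars contributed."""
    return funder_dollars / (funder_dollars + other_dollars)

# Computed from current giving alone, the large funder's share is high:
share_current = fair_share(funder_dollars=80.0, other_dollars=20.0)  # 0.8

# If individual giving is expected to grow over time, long-term capacity
# implies a smaller share for the large funder:
share_long_term = fair_share(funder_dollars=60.0, other_dollars=60.0)  # 0.5

print(f"share from current giving:     {share_current:.0%}")
print(f"share from long-term capacity: {share_long_term:.0%}")
```

This is just the arithmetic behind the intuition: a share calculated from today's donations alone can exceed 80%, while one calculated against expected long-run capacity can land near an even split.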

"This ethical objection doesn’t make sense. It implies that it’s unethical to cooperate on the iterated Prisoner’s Dilemma."

The ethical objection was to being misleading, not to the game-theoretic aspects of the approach.

I don't follow your argument under "Influence via habituation vs track record." The reason there was "not enough money to cover the whole thing" was because we were unwilling to pay more than what we considered our fair share, due to the incentives it would create and the long-run implications for total positive impact. We were open about that. I also think that the "surface case" for low-engagement donors who didn't read our work was about as close to the truth as a surface case could be. (I would describe the "surface case" as something like: "If I give this money, then bednets will be delivered; if I do not, that will not happen." I do not believe that the majority of GiveWell donors - including very large donors - base their giving on Open Phil's opinions, or in many cases even know what Open Phil is.) I don't see how this situation implies any of your #1-#3, and I don't see how it is deceptive.

"Access via size" and "Independence via many funders" were not part of our reasoning.

(Continued in next comment)
