Yesterday, there was a Facebook thread discussing arguments against the Giving What We Can (GWWC) pledge, where people promise to donate 10% of all future income to charity (https://www.givingwhatwecan.org/pledge/). The thread didn't seem very productive, but I do think there are strong arguments against the pledge, at least as it exists now. Hence, I thought I'd write up some of the arguments against, and try to have a better discussion here. Of course, I only speak for myself, not any of the thread participants (though I'd welcome their endorsements if they agree). There are also arguments in favor of the pledge, but since many others have already covered them, I won't be including them here.
First, and most obviously, the pledge recommends a flat 10% donation, regardless of a person's income. The general consensus is that the utility of money goes as log(income), so giving a fixed percentage is more painful per unit of good done at lower incomes than at higher ones (hence, eg., progressive income taxes). Different professions also have dramatically different ratios of direct impact to money generated. Eg., an American congressman's salary is $174,000, but their votes in Congress are so important that even a 0.1% improvement in voting skill outweighs a $17,400 donation; hence, spending even a tiny amount of effort on donating the 10% is likely not worth it for them. At the other extreme, a high-frequency trading job produces lots of money, but almost no direct impact. Therefore, the best donation percentage will vary hugely from person to person. Given the high human capital and low income of the median effective altruist, my guess is that for many people here, the best percentage is <1%; on the flip side, for a typical billionaire, it might be 90% or more. The GWWC pledge encourages most people to donate too much, while lowballing a smaller number of large donors.
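The log-utility point can be made concrete with a small sketch. (The log model and the income figures below are illustrative assumptions, not data from the post.) Under u = ln(income), a fixed-percentage donation costs the same utility at every income level, while the dollars given scale with income, so the utility cost per dollar of good done falls as income rises:

```python
import math

def utility_cost(income, frac=0.10):
    """Utility lost by donating `frac` of income, under an assumed
    log-utility model u(income) = ln(income)."""
    return math.log(income) - math.log(income * (1 - frac))

def pain_per_dollar(income, frac=0.10):
    """Utility lost per dollar donated."""
    return utility_cost(income, frac) / (income * frac)

# Hypothetical incomes for illustration only.
low, high = 30_000, 300_000

# The utility cost of a 10% donation is ln(1/0.9) ~ 0.105 at *both*
# incomes -- log utility is scale-invariant.
print(utility_cost(low), utility_cost(high))

# But the low earner gives $3,000 while the high earner gives $30,000,
# so the pain per dollar of good done is 10x higher at the low income.
print(pain_per_dollar(low) / pain_per_dollar(high))
```

This is just the argument in the paragraph above made mechanical: the same 10% buys ten times less good per unit of pain at a tenth of the income, which is one reason a single flat percentage is a poor fit for everyone.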
Second, the GWWC pledge uses the phrase "for the rest of my life or until the day I retire". This is a very long-term commitment; since most EAs are young (IIRC, ~50% of pledge takers were students when they took it), it might often be for fifty years or more. As EA itself is so young (under five years old, depending on exact definitions), so rapidly growing, and so much in flux, it's probably a bad idea to "lock in" fixed strategies, for the same reason that people who take a new job every month shouldn't buy a house. This is especially true for students, or others who will shortly make large career changes (as 80,000 Hours encourages). People in that position have very little information about their life in 2040, and are therefore in a bad position to make binding decisions about it. In response to this argument, pledge taker Rob Wiblin said that if he ever stopped believing that donating 10% every year was the best choice, he would simply un-take the pledge. However, this is certainly not encouraged by the pledge itself, which says "for the rest of my life" and doesn't contemplate leaving.
Third, the number of GWWC pledge takers is used as a very prominent metric within EA. It's listed in bold, 72-point font on the GWWC homepage, was the very first thing mentioned in Will MacAskill's monthly CEA update, and is found in many other places discussing "the state of EA" or EA's growth rate. This is problematic because, as psychologist Dan Ariely says, "you are what you measure". GWWC pledge count is a bad metric for EA as a whole, because:
- it doesn't account for efficacy of donations; while EA/GWWC encourages donating effectively, an ineffective donor is still included in the total
- it doesn't account for amount of donations, so five small donors count more than one big donor, even though the big donor probably gives more
- it doesn't account for direct work (eg. discovering a cure for cancer as a research scientist) at all
- it creates weird biases regarding timing; possible future donations through ~2060 are totaled on the GWWC homepage, but no adjustment is made for the dramatic differences between 2016 EA/humanity vs. 2060 EA/humanity, creating the illusion of a "perpetual present"
It might be OK to use GWWC pledge count as one metric, measuring one aspect of EA, along with a suite of other metrics that captured what it missed. However, as far as I know, those other metrics more-or-less don't exist right now (I think 80,000 Hours is tracking "number of career changes" internally, but I'm not sure if that's been published anywhere). Quantifying and highlighting this one metric, while not quantifying other things, creates quite a large bias. [EDIT: 80,000 Hours has indeed been publishing this metric, eg. here. However, I still think that it gets much less prominence than the GWWC pledge count. Even within 80K's own post, it's listed below other (IMO much less important) metrics, like web traffic and newsletter subscriber count.]
Fourth, although this isn't explicit in the pledge itself, I think many people taking the pledge intend to donate their 10% to GiveWell-recommended charities. (The GWWC pledge was originally specific to global poverty, and GWWC's charity recommendations largely overlap with GiveWell's.) This seems like a failure to propagate updates. For example, suppose your friend Joe is going camping in Nevada next week; he packs his RV with tents, clothes, food, water, and other equipment. The day before, Joe says he's changed his mind, and is actually camping in the mountains of Alaska. That's all well and good, but now that he's made this change, Joe needs to propagate that change through the other parts of his plan. He can't just buy a new map and drive to a different spot. A change like that will affect what clothes he needs to bring, how he should equip his vehicle, what emergency preparations he should take, what he'll do for fun, and probably even things like who will come with him or what food he carries.
Of course, I don't speak for GiveWell, but my impression is that the initial GiveWell focus was on upper-middle-class people making four-figure donations every holiday season. This has a bunch of implications, but the biggest is that the target audience is mostly busy with work, relatively risk-averse, and is giving away "spare cash" that won't be missed that much (if there's a lean year, they'll just donate less). In that context, the initial GiveWell model (in ~2007-2010) made a lot of sense. However, the GWWC audience is intrinsically different; almost all pledge takers are making a very serious commitment, since it's a substantial fixed fraction of income every year for decades. And since taking the pledge is still relatively "weird", the average pledge taker will be much less risk-averse. Given those assumptions, it makes much more sense to do a lot of research yourself, rather than "outsourcing thinking"; this is especially true given the current deep disagreements in EA on what "the most good" even means. (I also expect it would be much more motivating for the donor.)
Added: Michael Dickens also posted about this last month; I think the arguments largely overlap, but that he fleshes out some of them in more detail. H/T Kit
We may have an intractable disagreement here and it's pretty tangential to the point at hand, but for posterity I'll state my general position below anyway*.
More to the point at hand though, if you could actually spell out what you think should be done instead of the GWWC pledge, that'd really help direct the discussion. 'Maybe there should be no pledge at all' is a completely fine response, and for the avoidance of doubt I'm being completely not-sarcastic there.
I did do the Fermi myself. A 0.1% improvement seemed crazy high to me for the time someone might spend deciding their annual donation, so I wouldn't exactly call your calculation 'conservative', but I certainly concede it's not crazy.
Re. outsourcing, your own calculation suggested a x57 difference. I had a x2 difference. rohinmshah elsewhere had a x3 difference. Given that, I don't see why I'd need to cover more than a couple of orders of magnitude with outsourcing, and we both seem to think that outsourcing can credibly do that. I wouldn't expect outsourcing to help once we're above x50-ish, and didn't mean to imply otherwise. So I think we basically agree on the limits of what outsourcing can do; you just seem to have an implied multiplier well in excess of x1000 (otherwise I don't know where 0.0001% and the 'orders of magnitude' comment come from), which I wasn't at all anticipating. Taking that for granted, though, your position seems reasonable.
Let's compromise by not promoting the GWWC pledge to congresspeople or anyone else who can credibly influence billions of dollars?
I think the average federal dollar you can influence is quite a bit worse than GiveDirectly, FWIW, though in my Fermi I assumed they were equivalent as well.
Why not? Seriously. It's not uncommon for people to move countries in the developed world and incur a 10% higher tax rate in the process. I really doubt most people in that situation ever think about that again after the first couple of years.
Ok, sure.

GiveWell, money moved: http://www.givewell.org/about/impact

REG, money moved: https://reg-charity.org/reg-second-semi-annual-report-on-money-moved-2015/
There's also a whole bunch of metrics in the EA Survey that people often reference: http://effective-altruism.com/ea/zw/the_2015_survey_of_effective_altruists_results/
Ah, this is interesting. Can you clarify what you mean by 'a metric for EA as a whole'? Do you not think that, e.g., GiveWell's money moved numbers fill a similar function? If not, why not?
I sort of get this argument, it was when you said 'risk-averse' that I got stuck. To clarify, is this specific to the GWWC pledge or would any "weird" behaviour do? To take a slightly silly example, would you expect people who shower very irregularly (a 'weird' behaviour) to be more risk-seeking on similar grounds?
*
For firebombing to even happen, someone had to think it was the best of the available options. In fact, probably lots of someones had to think that. Those someones probably know lots more about the US military options than you or I. So to argue that firebombing is bad in the face of that probably-superior expertise, providing a concrete alternative (or set of alternatives) seems like the bare minimum you need to do.
Note that I said you need a counterfactual in the background. That caveat was there precisely to pre-empt cases like the one you gave, where the counterfactual is clearly and directly implied by the criticism. But as soon as you make multiple criticisms implicitly suggesting different counterfactuals, as you have here, it's worth spelling out exactly what alternative you are suggesting. Discussions get terribly confusing otherwise.
The above points are practical rather than technical. But on a technical level, criticism is clearly meaningless in cases where there is no choice. Nobody criticizes gravity for pulling you to your death if you step off a cliff. So to criticize something you need to establish that it is not like gravity; it can be fixed/improved/eliminated. Put another way, meaningful criticism is not 'this is not perfect', rather it is 'this is not optimal'. Which in turn requires a counterfactual, albeit potentially an implicit one.
So yeah, in short I'm generally pretty comfortable with the 'all unconstructive criticism is meaningless' approach. I consider it both technically true and practically useful.