Comment author: vipulnaik 12 January 2017 06:24:38AM 13 points

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money, as long as they produced reviews that pattern-matched a well-researched review. (I've personally found that their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; yet I expect the jump in money moved from 2015 to 2016 to be smaller, or possibly even negative.) I believe (with weaker confidence) that something similar is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much), and also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment author: atucker 12 January 2017 02:37:40PM 4 points

I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research it produces, or from moving money to interventions that are already known about.

In the early days of EA, I think that GiveWell's quality was a major factor in getting people to donate, but the EA movement is large enough now that growth isn't necessarily tied to rigor -- the largest charities (like the Salvation Army or the YMCA) don't seem to be particularly epistemically rigorous. I'm not sure how closely the marginal EA is checking claims, and I think that EA is now mainstream enough that many people no longer experience strong social pressure to justify it.

Comment author: kbog  (EA Profile) 11 January 2017 10:25:39PM *  1 point

"I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through,"

True. But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists.

"and that it encourages you to be slightly evil more than you have to be."

Maybe, but this is common folk wisdom for which you should demand more applicable psychological evidence, instead of assuming that it's actually true to a significant degree. That goes especially for the atypical subset of the population which is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

Comment author: atucker 12 January 2017 03:12:29AM 1 point

"But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists."

That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Being at all intellectually dishonest is much worse for an intellectual movement's prospects than it is for normal groups.

"instead of assuming that it's actually true to a significant degree"

The OP cites particular cases where she thinks this accusation is true -- I'm not worried that this might happen in the future, I'm worried that it already happens.

"Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking."

I agree, but I think the more likely ways of dealing with these issues involve credible signals that they are actually being addressed, rather than just saying that they should be solvable.

Comment author: kbog  (EA Profile) 11 January 2017 09:48:13PM *  1 point

Why Our Kind Can't Cooperate (Eliezer Yudkowsky)

Note to casual viewers: the content of this is not what the title makes it sound like. He's not saying that rationalists are doomed to ultimately lie to and cheat each other, just giving some reasons why cooperation has been hard.

From the recent Sarah Constantin post:

"Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake. Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode."

I don't buy this logic. Obviously there's a huge difference between taking power and then expending effort on positive activities, and taking power and never giving it up at all. Suppose that tomorrow we all found out that a major corporation was the front for a shady utilitarian network that had accumulated enough power and capital to fill all current EA funding gaps, or something like that. Since at some point you actually do accomplish good, it's clearly not indistinguishable.

I mean, you can keep kicking things back and ask why not secretly acquire MORE power today and wait till tomorrow, in which case you never do any good; but there are obvious empirical limitations to that, and besides, it's a problem of decision theory that is present across all kinds of things and doesn't have much to do with gaining power in particular.

In practical terms, people (not EAs) who try to gain power with future promises of making things nicer are often either corrupt or corruptible, so we have that to worry about. But it's not sufficient to show that the basic strategy doesn't work.

...

{epistemic status: extremely low confidence}

The way I see a lot of these organizational problems, where orgs seem to have controversial standards and practices, is that core people are getting just a little bit too hung up on EA This and EA That and Community This and Community That. In reality, what you should do is take pride in your organization, those few people and resources in your control or to your left and right, and make it as strong as possible. Not by cheating to get money or anything, but by fundamentally adhering to good principles of leadership, and really taking pride in it (without thinking about overall consequences all the time).

If you do that, you probably won't have these kinds of problems, which seem to be fairly common whenever the organization itself is made subservient to some higher ideal (e.g. cryonics organizations, political activism, religions). I haven't been inside these EA organizations, so I don't know how they work, but I know how good leadership works in other places, and that's what seems to be different. It probably sounds obvious that everyone in an EA organization should run it as well as they can, but after I hear about these occasional issues, I get the sense that it's important to just sit and meditate on that basic point instead of always talking about the big blurry community.

"To succeed at our goals:"

I'd agree with all that. It all seems pretty reasonable.

Comment author: atucker 11 January 2017 10:10:01PM *  2 points

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through, and that it encourages you to be slightly evil more than you have to be.

I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult". I'm not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.

Comment author: Robert_Wiblin 04 December 2016 11:36:56PM *  11 points

"Rob Wiblin said that, if he changed his mind about donating 10% every year being the best choice, he would simply un-take the pledge. However, this is certainly not encouraged by the pledge itself, which says "for the rest of my life" and doesn't contemplate leaving."

Hi Alyssa - FYI, the FAQ about the pledge says the following:

"How does it work? Is it legally binding?

The Pledge is not a contract and is not legally binding. It is, however, a public declaration of lasting commitment to the cause. It is a promise, or oath, to be made seriously and with every expectation of keeping it. All those who want to become a member of Giving What We Can must make the Pledge and report their income and donations each year.

If someone decides that they can no longer keep the Pledge (for instance due to serious unforeseen circumstances), then they can simply cease to be a member. They can of course rejoin later if they renew their commitment. Obviously taking the Pledge is something to be considered seriously, but we understand if a member can no longer keep it."

https://www.givingwhatwecan.org/about-us/frequently-asked-questions/#42-how-does-it-work-is-it-legally-binding

Comment author: atucker 06 December 2016 03:21:41AM *  2 points

I think that people shouldn't donate at least 10% of their income if they think that doing so interferes with the best way for them to do good, but I don't think that the current pledge or FAQ supports breaking it for that reason.

Coming to the conclusion that donating >=10% of one's income is not the best way to do good does not seem like a normal interpretation of "serious unforeseen circumstances".

A version of the pledge that I would be more interested in would be one that's largely the same, but has a clause to the effect that I can stop donating if I stop thinking that it's the best way to do good, and have engaged with people in good faith in coming to that decision.

Comment author: atucker 31 January 2016 07:43:41PM 3 points

Something that surprised me from the Superforecasting book is that just having a registry helps, even when those predictions aren't part of a prediction market.

Maybe a prediction market is overkill right now? I think that registering predictions could be valuable even without the critical mass necessary for a market to have much liquidity. It seems that the advantage of prediction markets is in incentivizing people to participate and do well, but if we're just trying to track predictions that EAs are already making, then a registry might be enough.
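As a rough illustration of why a registry alone can be useful, here's a minimal sketch (in Python, with entirely hypothetical predictions and field names, not the data model of Metaculus or any real registry) of recording forecasts and then scoring calibration with a Brier score once they resolve, no market or liquidity needed:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    """A single registered forecast: a claim, a probability, and a resolution date."""
    claim: str
    probability: float              # forecaster's credence that the claim is true
    resolve_by: date
    outcome: Optional[bool] = None  # filled in once the claim resolves

def brier_score(predictions):
    """Mean squared error between stated probabilities and resolved outcomes.
    Lower is better; always saying 50% gives 0.25."""
    resolved = [p for p in predictions if p.outcome is not None]
    if not resolved:
        return None
    return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)

# Hypothetical example entries:
registry = [
    Prediction("GiveWell moves more money in 2016 than in 2015", 0.7, date(2017, 6, 1), outcome=True),
    Prediction("GWWC reaches 3,000 members by end of 2016", 0.4, date(2017, 1, 1), outcome=False),
]
print(brier_score(registry))  # 0.125
```

Even this much gives you a public track record and a calibration measure, which is roughly the mechanism the Superforecasting work relies on; the market layer mainly adds participation incentives on top.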

Also, one of FLI's cofounders (Anthony Aguirre) started a prediction registry: http://www.metaculus.com/ , http://futureoflife.org/2016/01/24/predicting-the-future-of-life/

Comment author: atucker 06 October 2014 02:58:11PM 0 points

I really liked Larks' comment, but I'd like to add that this also incentivizes research teams to operate in secret. Many AI projects (and some biotech) are currently privately funded rather than government funded, and so they could profit by not publicizing their efforts.

Comment author: Katja_Grace 02 October 2014 07:17:03AM 2 points

"the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new."

The kinds of evidence available for some EA interventions, e.g. existential risk ones, don't seem different in kind from the evidence probably available earlier in history. Even in the best cases, EAs often have to lean on a combination of more rigorous evidence and some not very rigorous or well-evidenced guesses about how indirect effects work out, etc. So if the more rigorous evidence available were substantially less rigorous than it is, I think I would expect things to look pretty much the same, with us just having lower standards - e.g. only being willing to trust certain people's reports of how interventions were going. So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.

Comment author: atucker 02 October 2014 04:28:51PM 4 points

My other point was that EA isn't new, but that we don't recognize earlier attempts because they weren't using evidence in a way that we would recognize.

I also think that x-risk was basically not something that many people would worry about until after WWII. Prior to WWII there was not much talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.

Comment author: atucker 01 October 2014 12:02:36AM *  9 points

I agree with your points about there being disagreement about EA, but I don't think that they fully explain why people didn't come up with it earlier.

I think that there are two things going on here -- one is that the idea of thinking critically about how to improve other people's lives without much consideration of who they are or where they live and then doing the result of that thinking isn't actually new, and the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

To the first point, I'll just list Ethical Culture, the Methodists, John Stuart Mill's involvement with the East India Company, communists, Jesuits, and maybe some empires. I could go into more detail, but doing so would require more research than I want to do tonight.

To the second point, I don't think that anything resembling modern academic social science existed until relatively recently (around the 1890s?), and so prior to that there was nothing resembling peer-reviewed academic evidence about the efficacy of an intervention.

Given time to develop methods, and interruption by two world wars, we find that "evidence" in this sense was not actually developed until fairly recently, and that prior to that people had reasons for thinking that their ideas were likely to work (and maybe even be the most effective plans), but that those reasons would not constitute well-supported evidence in the sense used by the current EA community.

Also, the internet makes it much easier for people with relatively rare opinions to find each other, and enables much more transparency than was possible before.