Comment author: kbog  (EA Profile) 19 January 2017 01:54:16AM 1 point [-]

This is also argued by Railton in the paper "Alienation, Consequentialism, and the Demands of Morality."

However, I think you're conflating act-based thinking with careful calculation. We might rely on rough, quick estimates and heuristics to guide our lives while still having those defined by act-utilitarian guidelines rather than by common-sense ethics or intuition. So, for example, when deciding whether to be kind to someone, the act utilitarian makes a quick estimate rather than following a cumbersome calculation - but that quick estimate is still an act-utilitarian one, and it might come out in favor of being unkind to them.

Comment author: Richard_Batty 17 January 2017 04:01:09PM 0 points [-]

What sort of discussion of leadership would you like to see? How was this done in the Army?

Comment author: kbog  (EA Profile) 17 January 2017 05:12:54PM *  1 point [-]

The military has a culture of leadership, which is related to people taking pride in their organization, as I described in a different comment. There are training classes and performance evaluations that emphasize leadership, but I don't think those make a large difference.

Comment author: vipulnaik 12 January 2017 06:24:38AM 13 points [-]

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money, as long as they produced reviews that pattern-matched a well-researched review. (I've personally found that their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the jump in money moved from 2015 to 2016 will be smaller, or possibly even negative.) I believe (with weaker confidence) that similar things are true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment author: kbog  (EA Profile) 12 January 2017 06:49:35AM *  9 points [-]

I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are always too quick to think about behavior in terms of incentives and rational action, especially when talking about each other. Almost no one around here is rationally selfish; some people are rationally altruistic, and most people are probably some combination of altruism, selfishness, and irrationality. Yet people here are treating this as some really hard problem where rational people are likely to be dishonest, so we need to make it rational for people to be honest, and so on.

We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside but I would guess that it's at least as internally important as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations, yet I don't think I've ever seen anyone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent, just by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs - well, here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.

Comment author: atucker 12 January 2017 03:12:29AM 1 point [-]

But if we already know each other and trust each other's intentions, then it's different. Most of us have already undertaken extremely costly activities, without clear personal gain, as altruists.

That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Being at all intellectually dishonest is much worse for an intellectual movement's prospects than it is for normal groups.

instead of assuming that it's actually true to a significant degree

The OP cites particular cases where she thinks this accusation is true -- I'm not worried that this might happen in the future, I'm worried that it already happens.

Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

I agree, but I think the more likely ways of dealing with these issues involve credible signals of actually addressing them, rather than just saying that they should be solvable.

Comment author: kbog  (EA Profile) 12 January 2017 03:45:39AM *  1 point [-]

I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Okay, so there's some optimal balance to be struck (you can always be more rigorous and less growth-oriented, up to a very unreasonable extreme), and since we're trying to find the right point, we can err on either side if we're not careful. I agree that dishonesty is very bad, but I'm a bit worried that if we treat every error on one side as a large controversy, we're going to miss the occasions where we err on the other side, and then go a little too far, because we get really strong and socially damning feedback on one side and nothing on the other.

The OP cites particular cases where she thinks this accusation is true -- I'm not worried that this might happen in the future, I'm worried that it already happens.

To be perfectly blunt and honest, it's a blog post with some anecdotes. That's fine for saying that there's a problem to be investigated, but not for drawing conclusions about particular causal mechanisms. We have no idea how these people's motivations changed (maybe they had the exact same plans before coming into their positions; maybe they become more fair and careful the more experience and power they get).

Anyway, the reason I said that was just to defend the idea that obtaining power can be good overall, not to claim that there are no such problems associated with it.

Comment author: atucker 11 January 2017 10:10:01PM *  2 points [-]

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through, and that it encourages you to be slightly evil more than you have to be.

I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult". I'm not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.

Comment author: kbog  (EA Profile) 11 January 2017 10:25:39PM *  1 point [-]

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through,

True. But if we already know each other and trust each other's intentions, then it's different. Most of us have already undertaken extremely costly activities, without clear personal gain, as altruists.

and that it encourages you to be slightly evil more than you have to be.

Maybe, but this is the kind of common folk wisdom for which you should demand more applicable psychological evidence, instead of assuming that it's actually true to a significant degree. Especially among the atypical subset of the population that is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

Comment author: kbog  (EA Profile) 11 January 2017 09:48:13PM *  1 point [-]

Why Our Kind Can't Cooperate (Eliezer Yudkowsky)

Note to casual viewers: the content of this is not what the title makes it sound like. He's not saying that rationalists are doomed to lie to and cheat each other, just that these are some reasons why cooperation has been hard.

From the recent Sarah Constantin post:

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake. Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode.

I don't buy this logic. Obviously there's a huge difference between taking power and then expending effort on positive activities, and taking power and never giving it up at all. Suppose that tomorrow we all found out that a major corporation was the front for a shady utilitarian network that had accumulated enough power and capital to fill all current EA funding gaps, or something like that. Since at some point you actually do accomplish good, it's clearly not indistinguishable.

I mean, you can keep kicking things back and ask "why not secretly acquire MORE power today and wait until tomorrow?" until you never do any good, but there are obvious empirical limitations to that, and besides, it's a problem of decision theory that shows up across all kinds of things and doesn't have much to do with gaining power in particular.

In practical terms, people (not EAs) who try to gain power with future promises of making things nicer are often either corrupt or corruptible, so we have that to worry about. But it's not sufficient to show that the basic strategy doesn't work.

...

{epistemic status: extremely low confidence}

The way I see a lot of these organizational problems (the ones where orgs seem to have controversial standards and practices) is that core people are getting a little too hung up on EA This and EA That and Community This and Community That. In reality, what you should do is take pride in your organization, in those few people and resources under your control or to your left and right, and make it as strong as possible. Not by cheating to get money or anything, but by fundamentally adhering to good principles of leadership, and really taking pride in it (without thinking about overall consequences all the time). If you do that, you probably won't have these kinds of problems, which seem to be fairly common whenever the organization itself is made subservient to some higher ideal (e.g. cryonics organizations, political activism, religions).

I haven't been inside these EA organizations, so I don't know how they work, but I know how good leadership works in other places, and that's what seems to be different. It probably sounds obvious that everyone in an EA organization should run it as well as they can, but after I hear about these occasional issues, I get the sense that it's important to just sit and meditate on that basic point instead of always talking about the big blurry community.

To succeed at our goals:

I'd agree with all that. It all seems pretty reasonable.

In response to comment by kbog  (EA Profile) on Rational Politics Project
Comment author: Gleb_T  (EA Profile) 09 January 2017 04:55:57AM 0 points [-]

Yup, we're focusing on a core of people who are upset about lies and deceptions in the US election and the Brexit campaign, and aiming to provide them with means to address these deceptions in an effective manner. That's the goal!

Comment author: kbog  (EA Profile) 09 January 2017 07:23:31PM 0 points [-]

I mean a core as in a fixed point of interest: e.g., a forum, a blog, a website, a college club. Something to seed the initiative that can stand on its own without needing thousands of active members. You can't gather interested people without something valuable to attract them.

In response to comment by kbog  (EA Profile) on Rational Politics Project
Comment author: Gleb_T  (EA Profile) 08 January 2017 09:04:31PM 0 points [-]

Broad social movement. We're aiming to focus on social media organizing at first, and then spread to local grassroots organizing later. There will be a lot of marketing and PR associated with it as well.

Comment author: kbog  (EA Profile) 09 January 2017 04:26:32AM *  1 point [-]

I don't know if social movements ever start from concerted efforts like this. For instance, EA started because one or two organizations and philosophers got a lot of interest from a few people. Other social movements start spontaneously when people are triggered into protest and action by major events. It seems good to have an identifiable 'core' to any kind of movement, like the idea I had: "a formal or semi-formal structure to aggregate and compare evidence from both sides." If you leverage swarm intelligence, prediction markets, argument mapping, or more basic online mechanisms, then you can start to build something impressive that stands on its own. Such a system would be harder to make successful if you tried to make it relevant to the broad population rather than just EAs, though. It's just one example.

Comment author: kbog  (EA Profile) 08 January 2017 07:16:05PM *  0 points [-]

I don't really get what it will do. Is it supposed to be a broad social movement? A new organization? Or is it just going to be a name over a bunch of articles?

We thus anticipate that RAP will draw some heat from conservatives, and do not want to risk any backlash on the EA movement as a whole.

You're probably going to get more heat from liberals when you advocate being rational about conservatives.

Comment author: kbog  (EA Profile) 03 January 2017 04:09:37AM *  1 point [-]

Here are stats for the EA subreddit.

https://imgur.com/a/QSoMo

In March and April the group was created and advertised/linked from elsewhere.
