Comment author: RobBensinger 30 October 2017 01:14:06AM * 2 points

The dichotomy I see the most at MIRI is 'one's inside-view model' v. 'one's belief', where the latter tries to take into account things like model uncertainty, outside-view debiasing for addressing things like the planning fallacy, and deference to epistemic peers. Nate draws this distinction a lot.

Comment author: Stefan_Schubert 30 October 2017 01:22:30AM * 3 points

I guess you could make a trichotomy:

a) Your inside-view model.

b) Your all-things-considered private signal, where you've added outside-view reasoning, taken model uncertainty into account, etc.

c) Your all-things-considered belief, which also takes the views of your epistemic peers into account.

Comment author: ClaireZabel 29 October 2017 10:43:21PM 17 points

Thanks so much for the clear and eloquent post. I think the issues related to lack of expertise and expert bias are stronger than you do, and I think it's both rare and not inordinately difficult to adjust for common biases, such that in certain cases a less-informed individual can beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and, in general, the nigh-intractable challenge of trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.
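To make the double-counting concrete, here is a minimal Python sketch of the arithmetic in the quoted example (the straight-averaging aggregation rule and the variable names are assumptions for illustration, not anything from the original post):

    # Private, by-my-lights credences from the quoted example.
    adam, beatrice, charlie = 0.8, 0.4, 0.4

    def aggregate(credences):
        # Assumed aggregation rule: a simple average over reported credences.
        return sum(credences) / len(credences)

    # What Charlie should conclude, given everyone's private credences:
    print(aggregate([adam, beatrice, charlie]))           # (0.8+0.4+0.4)/3 ~ 0.53

    # But Beatrice reports her already-modest credence instead...
    beatrice_reported = aggregate([adam, beatrice])       # (0.8+0.4)/2 = 0.6
    # ...so Charlie's aggregate counts Adam's credence one and a half times:
    print(aggregate([adam, beatrice_reported, charlie]))  # (0.8+0.6+0.4)/3 = 0.6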

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And because other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment author: Stefan_Schubert 29 October 2017 11:05:42PM * 5 points

I agree that this distinction is important and should be used more frequently. I also think good terminology is very important. Clunky terms are unlikely to be used.

Something along the lines of "impressions" or "seemings" may be good for "credence by my lights" (cf. optical illusions, where the way certain matters of fact seem or appear to you differs from your beliefs about them). Another possibility is "private signal".

I don't think inside vs. outside view is a good terminology. E.g., I may have a credence by my lights about X partly because I believe that X falls in a certain reference class. Such reasoning is normally called "outside-view" reasoning, yet it doesn't involve deference to epistemic peers.

Comment author: Ben_West 26 October 2017 03:49:25PM 26 points

I prefer to play the long game with my own investments in community building, and would rather for instance invest in someone reasonably sharp who has a track record of altruism and expresses interest in helping others most effectively than in someone even sharper who reasoned their way into EA and consumed all the jargon but has never really given anything up for other people.

I believe that Toby Ord has talked about how, in the early days of EA, he had thought that it would be really easy to take people who are already altruistic and encourage them to be more concerned about effectiveness, but hard to take effectiveness-minded people and convince them to do significant altruistic things. However, once he actually started talking to people, he found the opposite to be the case.

You mention "playing the long game" – are you suggesting that the "E first, A second" people are easier to get on board in the short run, but less dedicated and therefore in the long run "A first, E second" folks are more valuable? Or are you saying that my (possibly misremembered) quote from Toby is wrong entirely?

Comment author: Stefan_Schubert 27 October 2017 12:39:10AM * 12 points

Katja Grace gives a related [edited - said "the same" - see Katja's comment below] argument here:

https://meteuphoric.wordpress.com/2013/07/09/effectiveness-or-altruism/

"When I was younger, I thought altruism was about the most promising way to make the world better. There were extremely cheap figures around for the cost to save a human life, and people seemed to not care. So prima facie it seemed that the highly effective giving opportunities were well worked out, and the main problem was that people tended to give $2 to such causes occasionally, rather than giving every spare cent they had, that wasn’t already earmarked for something more valuable than human lives.

These days I am much more optimistic about improving effectiveness than altruism, and not just because I’m less naive about cost-effectiveness estimates."

She goes on to list several reasons, including greater past success and greater neglect.

14

Considering Considerateness: Why communities of do-gooders should be exceptionally considerate

The CEA research team just published a new paper - Considering Considerateness: Why communities of do-gooders should be exceptionally considerate (PDF version). The paper is co-authored by Stefan Schubert, Ben Garfinkel, and Owen Cotton-Barratt. Summary: When interacting with others you can be considerate of their preferences, for instance by... Read More
Comment author: Jay_Shooster 13 April 2017 09:30:31PM * 6 points

"I think this article would have been better if it had noted these issues."

Yes, it would have! Very glad you raised them. This is part of what I had in mind when mentioning "reputational risk" but I'm glad you fleshed it out more fully.

That being said, I think there is a low-cost way to reap the benefits I'm talking about with integrity. Perhaps we have different standards/expectations of what's misleading on a resume, and what kind of achievements should be required for certain accolades. Maybe a 20-minute presentation, preceded by a short application, should be required before doing this. I don't know. But I find it hard to believe that we couldn't be much more generous in bestowing accolades on dedicated members of the community without engaging in deception.

Maybe I can try to restate this in a way that would seem less deceptive...

I genuinely believe that there are tons of deserving candidates for accolades and speaking engagements in our community. I think that we can do more to provide opportunities for these people at a very low cost. I hope to help organize an event like this in NYC. I probably wouldn't leave it open to just anyone to participate, but I would guess (from my experience with the NYC community) that few people would volunteer to speak who didn't have an interesting and informed perspective to share in a 15-minute presentation. Perhaps I have an overly positive impression of the EA community, though.

(PS: I think your response is a model of polite and constructive criticism. Thanks for that!)

Comment author: Stefan_Schubert 13 April 2017 11:24:20PM * 0 points

I genuinely believe that there are tons of deserving candidates for accolades and speaking engagements in our community.

That's probably true, but I don't think it follows that the suggested strategy is unproblematic.

I guess the most plausible argument against your suggested strategy rests on the premise that there are tons of deserving candidates outside of our community as well, and that we have no reason to believe that EAs are, at present, on average under-credited. If that is right, then the aggregate effect of us systematically choosing EAs over non-EAs could, at least theoretically, be that EAs on average get more credit for their efforts than non-EAs.

I don't know how strong this effect would be, but I do think that this counter-argument should be addressed.

Comment author: Stefan_Schubert 30 March 2017 10:17:50AM * 1 point

Philosophy would attain to perfection when the mechanical labourers shall have philosophical heads, or the philosophers shall have mechanical hands.

Thomas Sprat, The History of the Royal Society of London

12

Effective altruism: an elucidation and a defence

By John Halstead, Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden. Cross-posted from the Centre for Effective Altruism blog. A direct link to the article can be found here. Abstract: In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism... Read More
15

Hard-to-reverse decisions destroy option value

This post is co-authored with Ben Garfinkel. It is cross-posted from the CEA blog. A PDF version can be found here. Summary: Some strategic decisions available to the effective altruism movement may be difficult to reverse. One example is making the movement’s brand explicitly political. Another is growing... Read More
12

Understanding cause-neutrality

I'm pleased to be able to share Understanding cause-neutrality, a new working paper produced by the research team at the Centre for Effective Altruism. (PDF version.) Executive summary: The term “cause-neutrality” has been used for at least four concepts. The first aim of this article is to... Read More
Comment author: the_jaded_one 19 February 2017 10:39:42AM * 15 points

Political organizing is a highly accessible way for many EAs to have a potentially high impact. Many of us are doing it already. We propose that as a community we recognize it more formally as a way to do good within an EA framework.

I agree that EAs should look much more broadly at ways to do good, but I feel like doing political stuff to do good is a trap, or at least is full of traps.

Why do humans have politics? Why don't we just fire all the politicians and have a professional civil service that just does what's good?

  • Because people have different goals or values, and if a powerful group ends up in control of the apparatus of the state and pushes its agenda very hard and pisses a lot of people off, it is better to have that group ousted in an election than in a civil war.

But the takeaway is that politics is the arena in which we discuss ideas where different people in our societies disagree on what counts as good, and as a result it is a somewhat toxic arena with relatively poor intellectual standards. It strongly resists good decision-making and good quality debate, and strongly encourages rhetoric. EA needs to take sides in this like I need more holes in my head.

I think it would be fruitful for EA to get involved in politics, but not by taking sides; I get the impression that the best thing EAs can do is to try to find Pareto improvements that help both sides, and to make political issues nonpolitical by de-ideologizing them and finding solutions that make everyone happy and make the world a better place.

Take a leaf out of Elon Musk's book. The right wing in the USA is engaging in some pretty crazy irrationality and science denial about global warming. Many people might see this as an opportunity to score points against the right, but global warming will not be solved by political hot air; it will be solved by making fossil fuels economically marginal or nonviable in most applications. In particular, we need to reduce car-related emissions to near zero. So Musk goes and builds fast, sexy, macho cars in factories in the USA which provide tens of thousands of manufacturing jobs for blue-collar US workers, and emphasizes them as innovative, forward-looking, and pro-US. Our new right-wing president is lapping it up. This is what effective altruism in politics looks like: the rhetoric ("look at these sexy, innovative US-made cars!") is in service of the goal (eliminating gasoline cars and therefore eventually CO2 emissions), not the other way around.

And if you want to see the opposite, go look at this. People are cancelling their Tesla orders because Musk is "acting as a conduit to the rise of white nationalism and fascism in the United States". Musk has an actual solution to a serious problem, and people on the political left want to destroy it because it doesn't conform perfectly to their political ideology. Did these people stop to think about whether this nascent boycott makes sense from a consequentialist perspective? As in, "let's delay the solution to a pressing global problem in order to mildly inconvenience our political enemy"?

Collaborating with existing social justice movements

I would personally like to see EA become more like Elon Musk and less like Buzzfeed. The Trump administration and movement is a bit like a screaming toddler; it's much easier to deal with by distracting it with its favorite toys ("Macho! Innovative! Made in the US!") than by trying to start an argument with it. How can we find ways to persuade the Trump administration - or any other popular right-wing regime - that doing good is in its interest and conforms to its ideology? How can we sound right-wing enough that the political right (who currently hold all the legislative power in the US) practically thinks they thought of our ideas themselves?

Comment author: Stefan_Schubert 22 February 2017 11:06:44AM 2 points

I agree with much of this. Prior to joining CEA, I worked a bit on the bipartisan issue of how to make politics more rational (1, 2, 3, 4). I still think this is a worthwhile area, though my main focus right now is on other areas.
