Considering Considerateness: Why communities of do-gooders should be exceptionally considerate

The CEA research team just published a new paper, Considering Considerateness: Why communities of do-gooders should be exceptionally considerate (PDF version). The paper is co-authored by Stefan Schubert, Ben Garfinkel, and Owen Cotton-Barratt. Summary: When interacting with others you can be considerate of their preferences, for instance by...
Comment author: Jay_Shooster 13 April 2017 09:30:31PM 6 points

"I think this article would have been better if it had noted these issues."

Yes, it would have! Very glad you raised them. This is part of what I had in mind when mentioning "reputational risk" but I'm glad you fleshed it out more fully.

That being said, I think there is a low-cost way to reap the benefits I'm talking about with integrity. Perhaps we have different standards/expectations of what's misleading on a resume, and what kind of achievements should be required for certain accolades. Maybe a short application should be required before giving a 20-minute presentation. I don't know. But I find it hard to believe that we couldn't be much more generous in bestowing accolades on dedicated members of the community without engaging in deception.

Maybe I can try to restate this in a way that would seem less deceptive...

I genuinely believe that there are tons of deserving candidates for accolades and speaking engagements in our community. I think that we can do more to provide opportunities for these people at a very low cost. I hope to help organize an event like this in NYC. I probably wouldn't leave it open to just anyone to participate, but I would guess (from my experience with the NYC community) that few people would volunteer to speak who didn't have an interesting and informed perspective to share in a 15-minute presentation. Perhaps I have an overly positive impression of the EA community, though.

(ps. I think your response is a model of polite and constructive criticism. thanks for that!)

Comment author: Stefan_Schubert 13 April 2017 11:24:20PM 0 points

I genuinely believe that there are tons of deserving candidates for accolades and speaking engagements in our community.

That's probably true, but I don't think it follows that the suggested strategy is unproblematic.

I guess the most plausible argument against your suggested strategy rests on the premise that there are tons of deserving candidates outside of our community as well, and that we have no reason to believe that EAs are, at present, on average under-credited. If that is right, then the aggregate effect of us systematically choosing EAs over non-EAs could, at least theoretically, be that EAs on average got more credit for their efforts than non-EAs.

I don't know how strong this effect would be, but I do think that this counter-argument should be addressed.

Comment author: Stefan_Schubert 30 March 2017 10:17:50AM 1 point

Philosophy would attain to perfection when the mechanical labourers shall have philosophical heads, or the philosophers shall have mechanical hands.

Thomas Sprat, The History of the Royal Society of London

Effective altruism: an elucidation and a defence

By John Halstead, Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden. Cross-posted from the Centre for Effective Altruism blog. A direct link to the article can be found here. Abstract: In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism...
Hard-to-reverse decisions destroy option value

This post is co-authored with Ben Garfinkel. It is cross-posted from the CEA blog. A PDF version can be found here. Summary: Some strategic decisions available to the effective altruism movement may be difficult to reverse. One example is making the movement’s brand explicitly political. Another is growing...
Understanding cause-neutrality

I'm pleased to be able to share Understanding cause-neutrality, a new working paper produced by the research team at the Centre for Effective Altruism. (PDF version.) Executive summary: The term “cause-neutrality” has been used for at least four concepts. The first aim of this article is to...
Comment author: the_jaded_one 19 February 2017 10:39:42AM 15 points

Political organizing is a highly accessible way for many EAs to have a potentially high impact. Many of us are doing it already. We propose that as a community we recognize it more formally as a way to do good within an EA framework.

I agree that EAs should look much more broadly at ways to do good, but I feel like doing political stuff to do good is a trap, or at least is full of traps.

Why do humans have politics? Why don't we just fire all the politicians and have a professional civil service that just does what's good?

  • Because people have different goals or values, and if a powerful group ends up in control of the apparatus of the state and pushes its agenda very hard and pisses a lot of people off, it is better to have that group ousted in an election than in a civil war.

But the takeaway is that politics is the arena in which we discuss ideas where different people in our societies disagree on what counts as good, and as a result it is a somewhat toxic arena with relatively poor intellectual standards. It strongly resists good decision-making and good quality debate, and strongly encourages rhetoric. EA needs to take sides in this like I need more holes in my head.

I think it would be fruitful for EA to get involved in politics, but not by taking sides; I get the impression that the best thing EAs can do is try to find Pareto improvements that help both sides, and to turn political issues into nonpolitical ones by de-ideologizing them and finding solutions that make everyone happy and make the world a better place.

Take a leaf out of Elon Musk's book. The right wing in the USA is engaging in some pretty crazy irrationality and science denial about global warming. Many people might see this as an opportunity to score points against the right, but global warming will not be solved by political hot air; it will be solved by making fossil fuels economically marginal or nonviable in most applications. In particular, we need to reduce car-related emissions to near zero. So Musk goes and builds fast, sexy, macho cars in factories in the USA which provide tens of thousands of manufacturing jobs for blue-collar US workers, and emphasizes them as innovative, forward-looking and pro-US. Our new right-wing president is lapping it up. This is what effective altruism in politics looks like: the rhetoric ("look at these sexy, innovative US-made cars!") is in service of the goal (eliminating gasoline cars and therefore eventually CO2 emissions), not the other way around.

And if you want to see the opposite, go look at this. People are cancelling their Tesla orders because Musk is "acting as a conduit to the rise of white nationalism and fascism in the United States". Musk has an actual solution to a serious problem, and people on the political left want to destroy it because it doesn't conform perfectly to their political ideology. Did these people stop to think about whether this nascent boycott makes sense from a consequentialist perspective? As in, "let's delay the solution to a pressing global problem in order to mildly inconvenience our political enemy"?

Collaborating with existing social justice movements

I would personally like to see EA become more like Elon Musk and less like Buzzfeed. The Trump administration and movement are a bit like a screaming toddler: it's much easier to deal with them by distracting them with their favorite toys ("Macho! Innovative! Made in the US!") than by trying to start an argument with them. How can we find ways to persuade the Trump administration - or any other popular right-wing regime - that doing good is in its interest and conforms to its ideology? How can we sound right-wing enough that the political right (who currently hold all the legislative power in the US) practically think they thought of our ideas themselves?

Comment author: Stefan_Schubert 22 February 2017 11:06:44AM 2 points

I agree with much of this. Prior to joining CEA, I worked a bit on the bipartisan issue of how to make politics more rational (1, 2, 3, 4). I still think this is a worthwhile area, though my main focus right now is on other areas.

Comment author: rohinmshah 21 December 2016 08:18:47AM 0 points

Re#6: The only object-level cause discussed is global poverty and health interventions. However, other object-level causes seem much more structurally similar to meta-level work.

This is definitely true for animal welfare, but in that case ACE takes it into account when making its recommendations, which defuses the trap. I'm not too familiar with X-risk organizations, so I don't know to what extent it is true there -- it seems plausible that it is also an issue for X-risk organizations.

Re #7: Much of the impact from current work on X-risk plausibly derives from getting more valuable actors involved. Since there are several players in the EA X-risk space, this means that it may be hard to estimate which EA X-risk org caused more valuable actors to get involved in X-risk, just like it may be hard to estimate which EA meta-org caused EA movement growth. Thus this problem doesn't seem to be unique to meta-orgs.

I would in fact count this as "meta" work -- it would fall under "promoting effective altruism in the abstract".

What you're discussing is whether someone's investment makes a difference, or whether what they're trying to do would have occurred anyway.

My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality. There is no uncertainty about the counterfactual -- that's the whole point of the control. (Yes, there are problems with generalizing to new situations, and there can be problems with methodology, but it is still very good evidence.)

On the other hand, when somebody takes the GWWC pledge, you have next to no idea how much and where they would have donated had they not taken the pledge.

In both cases you can have concerns about funding counterfactuals ("What if someone else had donated and my donation was useless?") but with meta work you often don't even know the counterfactuals for the actual intervention you are implementing.

It thus seems to me the situation regarding X-risk is quite analogous to that regarding meta-work on this score, too, but I am not sure I have understood your argument.

Given what you say about future investment into X-risk, it makes sense that the situation is analogous for X-risk. I wasn't aware of this.

To the contrary, meta-work can be a wise choice in face of uncertainty of what the best cause is.

If you've spent a long time thinking about what the best cause is, and still have a lot of uncertainty, then I agree. The case I worry about is that people start doing meta work instead of thinking about cause prioritization, because that's simply easier, and you get to avoid analysis paralysis. As an anecdatum, I think that's partly happened to me.

Given what we know about human overconfidence, I think there is more reason to be worried that people are overconfident about their estimates of the relative marginal expected value of object-level causes, than that they withhold judgement of what object-level cause is best for too long.

Maybe, I'm not sure. It feels more to me like we should be worried about overconfidence once someone makes a decision, but I haven't seriously thought about it.

Comment author: Stefan_Schubert 21 December 2016 12:50:04PM 1 point

I would in fact count this as "meta" work -- it would fall under "promoting effective altruism in the abstract".

I don't think that promoting X-risk work should be counted as "promoting effective altruism in the abstract".

My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality.

There are two kinds of issues here:

1) Does the intervention have the intended effect, or would that effect have occurred anyway?

2) Does the donation make the intervention occur, or would that intervention have occurred anyway (for replaceability reasons)?

Bednet RCTs help with the first question, but not with the second. For meta-work and X-risk both questions are very tricky.

Comment author: Stefan_Schubert 21 December 2016 01:46:28AM 2 points

Re#6: The only object-level cause discussed is global poverty and health interventions. However, other object-level causes seem much more structurally similar to meta-level work. For instance, this description would seem to hold true of much of the work on X-risk:

[They] typically have many distinct activities for the same goal. These activities can have very different cost-effectiveness. The marginal dollar will typically fund the activity with the lowest (estimated) cost-effectiveness, and so will likely be significantly less impactful than the average dollar

Hence insofar as this is an issue (though see Rob's and Ben's comments) it's not unique to meta-level work.

Re #7: Much of the impact from current work on X-risk plausibly derives from getting more valuable actors involved. Since there are several players in the EA X-risk space, this means that it may be hard to estimate which EA X-risk org caused more valuable actors to get involved in X-risk, just like it may be hard to estimate which EA meta-org caused EA movement growth. Thus this problem doesn't seem to be unique to meta-orgs. (Also, I agree with Ben that one would like to see detailed case arguing that these are actually problems, rather than just pointing out that they might be problems.)

This points to the fact that much of the work within object-level causes is "meta" in the sense that it concerns getting more people involved, rather than in doing direct work. However, it is not "meta" in the sense used in this post. (Ben discussed this distinction in his reply to Hurford - see his remark on 'second level meta'.)

Generally, I think that the discussion on "meta" vs "object-level" work would gain from more precise definitions and more conceptual clarity. I'm currently working on that.

Re #8:

I think that with most object-level causes this is less of an issue. When RCTs are conducted, they eliminate the problem, at least in theory (though you do run into problems when trying to generalize from RCTs to other environments).

I don't understand why global poverty and health RCTs (which I suppose is what you refer to) would make a difference. What you're discussing is whether someone's investment makes a difference, or whether what they're trying to do would have occurred anyway. For instance, whether their donating to AMF leads to fewer people dying from malaria. I think that's plausibly the case, but the question of RCTs vs other kinds of evidence - e.g. observational studies - seems orthogonal to that issue.

I think that this is a problem in far future areas (would the existential risk have happened, or would it have been solved anyway?), but people are aware of the problem and tackle it (research into the probabilities of various existential risks, looking for particularly neglected existential risks such as AI risk).

Current neglectedness of an existential risk is not necessarily a good guide to future neglectedness. Hence focussing on currently neglected risks does not guarantee that you have a large counterfactual impact.

I'm currently looking into the issue of future investment into X-risk and there doesn't seem to be that much research done on it, so it's not clear to me that people have tackled this problem. It's generally very difficult.

It thus seems to me the situation regarding X-risk is quite analogous to that regarding meta-work on this score, too, but I am not sure I have understood your argument.

Re#3 (which you support, though you don't comment on it further):

To the contrary, meta-work can be a wise choice in face of uncertainty of what the best cause is. Meta-work is supposed to give you resources which can be flexibly allocated across a range of causes. This means that if we're uncertain of what object-level cause is the best, meta-work might be our best choice (whereas being sure what the best cause is is a reason to work on that cause instead of doing meta-level work).

One of the more ingenious aspects of effective altruism is that it fits an uncertain world so well. If the world were easy to predict, there would be less of a need for a movement which can shift cause as we gather more evidence of what the top cause is. However, that is not the world we're living in, as, e.g., the literature on forecasting shows.

Given what we know about human overconfidence, I think there is more reason to be worried that people are overconfident about their estimates of the relative marginal expected value of object-level causes, than that they withhold judgement of what object-level cause is best for too long.

Comment author: Stefan_Schubert 08 December 2016 02:06:20PM 6 points

My impression is that the evidence provided by this article is poor. It quotes clearly unreliable sources such as Daily Express, Breitbart, and Sputnik News. To take just one example, the headline of the link quoting Polish experts above says:

Polish Experts: ‘Europe is at The End of its Existence. Western Europe is Practically Dead’

That is patently untrue.
