Comment author: HaydnBelfield 02 October 2018 05:20:10PM 3 points

If this research seems interesting to you, CSER is currently hiring!


CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk'

The Centre for the Study of Existential Risk's (CSER) special issue, 'Futures of Research in Catastrophic and Existential Risk', was recently published. CSER is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. The special...
Comment author: Kerry_Vaughan 18 August 2018 12:25:39AM 11 points

Thanks Sam! This is really helpful. I'd be interested in talking on Skype about this sometime soon (just emailed you about it). Some thoughts below:

Is longtermism a cause?

One idea I've been thinking about is whether it makes sense to treat longtermism/the long-term future as a cause.

Longtermism is the view that most of the value of our actions lies in what happens in the future. You can hold that view and also hold the view that we are so uncertain about what will happen in the future that doing things with clear positive short-term effects is the best thing to do. Peter Hurford explains this view nicely here.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement. Yet, I also think there are substantial and reasonable disagreements about what that means practically speaking. I'd be in favor of us working to ensure that people entering the community understand the details of that disagreement.

My guess is that while CEA is very positive on longtermism, we aren't anywhere near as positive on the cause/intervention combinations that longtermism typically suggests. For example, personally speaking, if it turned out that recruiting ML PhDs to do technical AI safety research didn't have a huge impact, I would be surprised but not very surprised.

Threading the needle

My feeling as I've been thinking about representativeness is that getting this right requires threading a very difficult needle, because we need to balance a large number of constraints and considerations. Some of the constraints include:

  • Cause areas shouldn't be tribes -- I think cause area allegiance is operating as a kind of tribal signal in the movement currently. You're in either the global poverty tribe, the X-risk tribe, or the animal welfare tribe, and people tend to defend the views of the tribe they happen to be associated with. I think this needs to stop if we want to build a community that can actually figure out how to do the most good and then do it. Focusing on cause areas as the unit of analysis for representativeness entrenches the tribal concern, but it's hard to get away from because it's an easy-to-understand unit of analysis.
  • We shouldn't entrench existing cause areas -- we should be aiming for an EA that has the ability to shift its consensus on the most pressing problems as we learn more. Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder.
  • Cause-impartiality can include having a view -- cause impartiality means that you do an impartial calculation of impact to determine what to work on. Such a calculation should lead to developing views on what causes are most important. Intellectual progress probably includes decreasing our uncertainty and having stronger views.
  • The views of CEA staff should inform, but not determine, our work -- I don't think it's realistic or plausible for CEA to take actions as if we have no view on the relative importance of different problems, but it's also the case that our views shouldn't substantially determine what happens.
  • CEA should sometimes exercise leadership in the community -- I don't think that social movements automatically become excellent. Excellence typically has to be achieved on purpose by dedicated, skilled actors. I think CEA will often do work that represents the community, but will sometimes want to lead the community on important issues. The allocation of resources across causes could be one such area for leadership although I'm not certain.

There are also some other considerations around methods of improving representativeness. For example, consulting established EA orgs on representativeness concerns has the effect of entrenching the current systems of power in a way that may be bad, but that gives you a sense of the consideration space.

CEA and cause-impartiality

Suggestion: CEA should actively champion cause impartiality

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial. Cause-impartiality means that you do an impartial calculation of the impact of the cause and act on the basis of that. This is certainly what we think we've done when coming to views on specific causes although there's obviously room for reasonable disagreement.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important. I think CEA should be aspiring to have detailed, nuanced views that take into account our wide uncertainty, not no views on the question.

Making people feel listened to

I broadly agree with your points here. Regularly talking to and listening to more people in the community is something that I'm personally committed to doing.

"Your section on representativeness feels like you are trying to pin down a way of finding an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing."

Just to clarify, I also don't think trying to find a number that defines representativeness is the right approach, but I also don't want this to be a purely philosophical conversation. I want it to drive action.

Comment author: HaydnBelfield 21 August 2018 08:25:27PM 1 point

"Cause areas shouldn't be tribes" "We shouldn't entrench existing cause areas" "Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder."

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

In response to Open Thread #39
Comment author: HaydnBelfield 02 November 2017 09:08:55PM 2 points

The recent quality of posts has been absolutely stellar*. Keep it up everyone!

*interesting, varied, informative, written to be helpful/useful, rigorous, etc

Comment author: HaydnBelfield 06 October 2017 06:18:20PM 2 points

Really glad to see you taking conflicts of interest so seriously!

Comment author: HaydnBelfield 03 April 2017 07:06:22PM 1 point

This is incredibly valuable (and even groundbreaking) work. Well done for doing it, and for writing it up so clearly and informatively!

Comment author: HaydnBelfield 03 April 2017 07:03:29PM 0 points

Thanks for this!

I personally agree that Democratic control of Congress, or even Congress and the Presidency, would be great. But I'm not sure how likely that is, or how certain I should be about that likelihood.

Even if I could be highly certain that it was likely, I probably still wouldn't take that option - the increased risk over four years is just too high. As Michael_S says, you get higher nuclear risk and higher pandemic risk. As I said in my post, I think Trump also raises the risks of increased global instability, increased international authoritarianism, climate change, and emerging technologies. Take climate change - we really don't have long to fix it! We need to make significant progress by 2030 - we can't afford to go backwards for four years.

[Writing in a personal capacity, my views are not my employer's]

Comment author: HaydnBelfield 02 March 2017 06:39:30PM 2 points

Whatever happened to EA Ventures?

In response to EA Funds Beta Launch
Comment author: HaydnBelfield 28 February 2017 06:30:30PM 11 points

This is a great idea and you've presented it fairly, clearly and persuasively. I've donated.

Comment author: TaraMacAulay 28 February 2017 07:23:40AM 13 points

We plan to send quarterly updates to all EA Funds donors detailing the total size of the fund and details of any grants made in the period. We will also publish grant reports on the EA Funds website and will keep an updated grant history on the fund description page, much in the same manner as Open Phil. We plan to publish a more detailed review of the project in 3 months, at which time we will reassess, and possibly make significant changes to the current iteration of the funds.

While the EA Giving Group DAF (EAGG) will continue to run, we suspect that many donors interested in the EAGG will prefer to donate to the EA Community fund or the Far Future fund. These funds will be easier to use, tax-deductible in both the UK and the US, and will not have a large minimum donation amount. We were actually inspired to create these funds, in part, by the success of the EAGG - we saw this as something like a super-MVP version of this idea.

Comment author: HaydnBelfield 28 February 2017 06:12:54PM 4 points

Peter's question was one I asked in the previous post as well. I'm pleased with this answer, thanks Tara.
