Hello Effective Altruism Forum,
I am Seth Baum and I will be here to answer your questions 3 March 2015, 7-9 PM US ET (New York time). You can post questions in this thread in the meantime. Here is some more background:
I am Executive Director of the Global Catastrophic Risk Institute (GCRI). I co-founded GCRI in 2011 with Tony Barrett. GCRI is an independent, nonprofit think tank studying major risks to the survival of human civilization. We develop practical, effective ways to reduce the risks.
There is often some confusion among effective altruists about how GCRI uses the term “global catastrophic risk”. The bottom line is that we focus on the risk of catastrophes that could cause major permanent harm. This is similar to some uses of the term “existential risk”. You can read more about that here.
GCRI just announced major changes to our identity and direction. We are focusing increasingly on in-house research oriented towards assessing the best ways of reducing the risks. This is at the heart of our new flagship integrated assessment project, which puts all the global catastrophic risks into one study in order to identify the best risk-reduction opportunities.
If you’d like to stay up to date on GCRI, you can sign up for our monthly email newsletter. You can also support GCRI by donating.
GCRI is not active on social media, but you can follow me on Twitter.
I am excited to have this chance to speak with the online effective altruism community. I was involved in the online utilitarianism community around 2006-2007 via my Felicifia blog. I’m really impressed with how the community has grown. A lot of people have put a lot of work into this. Thanks go in particular to Ryan Carey for setting up today’s AMA and for doing so much more.
There are also a few things I’m hoping to learn from you:
First, I am considering a research project on what motivates people to take on major global issues and/or to act on altruistic principles more generally. I would be interested in any resources you know of about this. It could be research on altruism/global issues in general or research on what motivates people to pursue effective altruism.
Second, I am interested in what you think are major open questions in gcr/xrisk. Are you facing decisions to get involved in gcr/xrisk, or to take certain actions to reduce the risks? For these decisions, is there information that would help you figure out what to do? Your answers here can help inform the directions GCRI pursues for its research. We aspire to help people make better decisions to more effectively reduce the risks.
Good questions!
The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that can offer permanent expected gains comparable in size to the permanent expected losses from GCRs. Nick Beckstead puts this well in his dissertation's discussion of far-future trajectories, as does the concept of "existential hope" from Owen Cotton-Barratt & Toby Ord. But in practical terms, the bulk of the opportunity appears to be in gcr/xrisk.
I contributed a small amount of content to this, along with one other GCRI affiliate, but the bulk of the credit goes to the lead authors Stuart Armstrong and Dennis Pamlin. There are synergies between this and GCRI's integrated assessment. We are in ongoing conversation about that. One core difference is that our integrated assessment focuses a lot more on interventions to reduce the risks.
I don't have data on person-hours. I am the only full-time GCRI staff member. We have some people doing paid part-time work, and a lot of 'volunteering', though much of the 'volunteering' comes from people who participate in GCRI as part of their 'day job' - for example, faculty members with related research interests.
What I'm proudest of is the high-level stakeholder engagement we've had, especially on nuclear weapons. This includes speaking at important DC think tanks, the United Nations, and more. Our research is good, but research isn't worth much unless the ideas actually go places. We're doing well with getting our ideas out to people who can really use them.
Then I guess you don't think it's plausible that we can't expect to make many permanent gains.
Why?