Hello Effective Altruism Forum,
I am Seth Baum, and I will be here to answer your questions on 3 March 2015, 7-9 PM US ET (New York time). You can post questions in this thread in the meantime. Here is some more background:
I am Executive Director of the Global Catastrophic Risk Institute (GCRI). I co-founded GCRI in 2011 with Tony Barrett. GCRI is an independent, nonprofit think tank studying major risks to the survival of human civilization. We develop practical, effective ways to reduce the risks.
There is often some confusion among effective altruists about how GCRI uses the term “global catastrophic risk”. The bottom line is that we focus on the risk of catastrophes that could cause major permanent harm. This is similar to some uses of “existential risk”. You can read more about that here.
GCRI just announced major changes to its identity and direction. We are focusing increasingly on in-house research aimed at assessing the best ways of reducing the risks. This is at the heart of our new flagship integrated assessment project, which puts all the GCRs into one study in order to identify the best risk reduction opportunities.
If you’d like to stay up to date on GCRI, you can sign up for our monthly email newsletter. You can also support GCRI by donating.
GCRI is not active on social media, but you can follow me on Twitter.
I am excited to have this chance to speak with the online effective altruism community. I was involved in the online utilitarianism community around 2006-2007 via my Felicifia blog. I’m really impressed with how the community has grown. A lot of people have put a lot of work into this. Thanks go in particular to Ryan Carey for setting up today’s AMA and for doing so much more.
There are also a few things I’m hoping to learn from you:
First, I am considering a research project on what motivates people to take on major global issues and/or to act on altruistic principles more generally. I would be interested in any resources you know of about this. It could be research on altruism/global issues in general or research on what motivates people to pursue effective altruism.
Second, I am interested in what you think are the major open questions in GCR/xrisk. Are you facing decisions about whether to get involved in GCR/xrisk, or about which actions to take to reduce the risks? For these decisions, is there information that would help you figure out what to do? Your answers here can help inform the directions GCRI pursues in its research. We aspire to help people make better decisions that more effectively reduce the risks.
Thanks Ryan! And thanks again for organizing.
This is a really, really important question. In a sense, it all comes down to this. Otherwise there's not much point in doing risk analysis.
First, there are risk analysis positions that inform decision making very directly. (I'm speaking here in terms of 'decisions' instead of 'policies', but you can use these words pretty interchangeably.) These exist in both government and the private sector. However, as a general rule, the risks in question are not GCRs; they are smaller risks.
For the GCRs it's trickier, because companies can't make money off them. I've had some funny conversations with people in the insurance industry trying to get them to cover GCRs; I'm pretty sure it just can't be done. Governments can be much friendlier toward GCR work, as they don't need to make it profitable.
My big advice is to get involved in the decision processes as much as possible. GCRI calls this 'stakeholder engagement'. It is a core part of our integrated assessment, and of our work in general. It means getting to know the people involved in the decisions, building relationships with them, understanding their motivations and their opportunities for doing things differently, and above all finding ways to build GCR reductions into their decisions in ways that are agreeable to them. I cannot emphasize enough how important it is to listen to the decision makers and try to understand things from their perspective.
For example, if you want to reduce AI risk, then get out there and meet some AI researchers and AI funders and anyone else playing a role in AI development. Then talk to them about what they can do to reduce AI risk, and listen to them about what they are or aren't willing or able to do.
GCRI has so far done the most stakeholder engagement on nuclear weapons. I've been spending time at the United Nations, getting to know the diplomats and activists involved and learning what the issues look like from their perspectives. I'm giving talks on nuclear war risk, but much of the best work happens in private conversations along the way.
At any rate, some of the best ways to reduce risks don't follow logically from the initial risk analysis; instead, what we learn from engagement feeds back into the next analysis. So it's a two-way conversation. Ultimately I think that's the best way to go for actually reducing risks.