Mar 3 2015

Hello Effective Altruism Forum,

I am Seth Baum and I will be here to answer your questions on 3 March 2015, 7-9 PM US ET (New York time). You can post questions in this thread in the meantime. Here is some more background:

I am Executive Director of the Global Catastrophic Risk Institute (GCRI). I co-founded GCRI in 2011 with Tony Barrett. GCRI is an independent, nonprofit think tank studying major risks to the survival of human civilization. We develop practical, effective ways to reduce the risks.

There is often some confusion among effective altruists about how GCRI uses the term “global catastrophic risk”. The bottom line is that we focus on the risk of catastrophes that could cause major permanent harm. This is similar to some uses of “existential risk”. You can read more about that here.

GCRI just announced major changes to its identity and direction. We are focusing increasingly on in-house research aimed at assessing the best ways of reducing the risks. This is at the heart of our new flagship integrated assessment project, which brings all of the gcrs into one study in order to identify the best risk reduction opportunities.

If you’d like to stay up to date on GCRI, you can sign up for our monthly email newsletter. You can also support GCRI by donating. GCRI is not active on social media, but you can follow me on Twitter.

I am excited to have this chance to speak with the online effective altruism community. I was involved in the online utilitarianism community around 2006-2007 via my Felicifia blog. I’m really impressed with how the community has grown. A lot of people have put a lot of work into this. Thanks go in particular to Ryan Carey for setting up today’s AMA and for doing so much more.

There are also a few things I’m hoping to learn from you:

First, I am considering a research project on what motivates people to take on major global issues and/or to act on altruistic principles more generally. I would be interested in any resources you know of about this. It could be research on altruism/global issues in general or research on what motivates people to pursue effective altruism.

Second, I am interested in what you think are major open questions in gcr/xrisk. Are you facing decisions to get involved in gcr/xrisk, or to take certain actions to reduce the risks? For these decisions, is there information that would help you figure out what to do? Your answers here can help inform the directions GCRI pursues for its research. We aspire to help people make better decisions to more effectively reduce the risks.


Well, I guess he did say you could ask him anything.

Here are four questions reposted from the announcement thread:

Bitton: Of all the arguments you've heard for de-prioritizing GCR reduction, which do you find most convincing?

Niel Bowerman: What is your assessment of the recent report by FHI and the Global Challenges Foundation? http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact-full-report-1.pdf How will your integrated assessment differ from this?

Niel Bowerman: How many man-hours per week are currently going into GCRI? How many paid staff do you have and who are they?

Niel Bowerman: What would you say is the single most impressive achievement that GCRI has achieved to date?

Good questions!

Of all the arguments you've heard for de-prioritizing GCR reduction, which do you find most convincing?

The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that offer permanent expected gains comparable in size to the permanent expected losses from GCRs. Nick Beckstead puts this well in his dissertation's discussion of far-future trajectories; see also the concept of "existential hope" from Owen Cotton-Barratt & Toby Ord. But in practical terms the bulk of the opportunity appears to be in gcr/xrisk.
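To make that comparison concrete, here is a minimal sketch of the expected-value logic. Every number is a made-up placeholder for illustration, not from any GCRI analysis:

```python
# Hypothetical comparison of permanent expected losses vs. permanent expected
# gains. All numbers are placeholders for illustration only.

FUTURE_VALUE = 1.0  # normalize the value of the long-term future to 1

# Option A: reduce a global catastrophic risk
risk_reduction = 0.01  # assumed absolute reduction in catastrophe probability
expected_loss_averted = risk_reduction * FUTURE_VALUE

# Option B: pursue a permanent gain (e.g., durably improving civilization's values)
p_gain_sticks = 0.005  # assumed probability the gain persists permanently
gain_size = 0.5        # assumed fraction of future value added if it does
expected_gain = p_gain_sticks * gain_size * FUTURE_VALUE

print(f"Expected permanent loss averted: {expected_loss_averted:.4f}")
print(f"Expected permanent gain:         {expected_gain:.4f}")
# With these placeholder numbers the GCR reduction wins, mirroring the claim
# above; different inputs could flip the comparison.
```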

Niel Bowerman: What is your assessment of the recent report by FHI and the Global Challenges Foundation? http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact-full-report-1.pdf How will your integrated assessment differ from this?

I contributed a small amount of content to this, along with one other GCRI affiliate, but the bulk of the credit goes to the lead authors Stuart Armstrong and Dennis Pamlin. There are synergies between this and GCRI's integrated assessment. We are in ongoing conversation about that. One core difference is that our integrated assessment focuses a lot more on interventions to reduce the risks.

How many man-hours per week are currently going into GCRI? How many paid staff do you have and who are they?

I don't have data on person-hours. I am the only full-time GCRI staff member. We have some people doing paid part-time work, and a lot of 'volunteering', though much of the 'volunteering' comes from people who participate in GCRI as part of their 'day job' - for example, faculty members with related research interests.

What would you say is the single most impressive achievement that GCRI has achieved to date?

What I'm proudest of is the high-level stakeholder engagement we've had, especially on nuclear weapons. This includes speaking at important DC think tanks, the United Nations, and more. Our research is good, but research isn't worth much unless the ideas actually go places. We're doing well with getting our ideas out to people who can really use them.

The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that can offer permanent expected gains that are comparably large as the permanent expected losses from GCRs.

Then I guess you don't think it's plausible that we can't expect to make many permanent gains.

Why?

Then I guess you don't think it's plausible that we can't expect to make many permanent gains. Why?

I'll have to look at that link later, but briefly, I do think it can be possible to make some permanent gains, but there seem to be significantly more opportunities to avoid permanent losses. That said, I do not wish to dismiss the possibility of permanent gains, and am very much willing to consider them as potentially of comparable significance.

What funding will GCRI require over the coming year to maintain these activities?

What funding will GCRI require over the coming year to maintain these activities?

GCRI has a small base of ongoing funding that keeps the doors open, so to speak, except that we don't have any actual doors. I will say, not having an office space really lowers costs!

The important thing is that GCRI is in an excellent place to convert additional funding into additional productivity, mainly by freeing up additional person-hours of work.

My last question for now: what do you think is the path from risk-analysis to policy? Some aspiring effective altruists have taken up a range of relevant jobs, for instance working for politicians, in think tanks, in defence and in international governance. Can they play a role in promoting risk-reducing policies? And more generally, how can researchers get their insights implemented?

The rest of the questions are up to other readers.

Thanks very much for all of the work that you've done reducing catastrophic risks. Thanks in particular for appearing here to interface with EAs regarding your plans, progress and ideas. GCRI seems like an extremely valuable institution, and it's great of you to give a window into how it all runs. I think it's one for all effective altruists to watch and support!

Thanks Ryan! And thanks again for organizing.

My last question for now: what do you think is the path from risk-analysis to policy? Some aspiring effective altruists have taken up a range of relevant jobs, for instance working for politicians, in think tanks, in defence and in international governance. Can they play a role in promoting risk-reducing policies? And more generally, how can researchers get their insights implemented?

This is a really, really important question. In a sense, it all comes down to this. Otherwise there's not much point in doing risk analysis.

First, there are risk analysis positions that inform decision making very directly. (I'm speaking here in terms of 'decisions' instead of 'policies' but you can use these words pretty interchangeably.) These exist in both government and the private sector. However, as a general rule the risks in question are not gcrs - they are smaller risks.

For the gcrs it's trickier because companies can't make money off it. I've had some funny conversations with people in the insurance industry trying to get them to cover gcrs. I'm pretty sure it just can't be done. Governments can be much friendlier for gcr, as they don't need to make it profitable.

My big advice is to get involved in the decision processes as much as possible. GCRI calls this 'stakeholder engagement'. That is a core part of our integrated assessment, and our work in general. It means getting to know the people involved in the decisions, building relations with them, understanding their motivations and their opportunities for doing things differently, and above all finding ways to build gcr reductions into their decisions in ways that are agreeable to them. I cannot emphasize enough how important it is to listen to the decision makers and try to understand things from their perspective.

For example, if you want to reduce AI risk, then get out there and meet some AI researchers and AI funders and anyone else playing a role in AI development. Then talk to them about what they can do to reduce AI risk, and listen to them about what they are or aren't willing or able to do.

GCRI has so far done the most stakeholder engagement on nuclear weapons. I've been spending time at the United Nations, getting to know the diplomats and activists involved in the issues, and what the issues are from their perspectives. I'm giving talks on nuclear war risk, but much of the best stuff is in private conversations along the way.

At any rate, some of the best ways to reduce risks aren't what logically follows from the initial risk analysis; instead, what you learn feeds back into the next analysis. So it's a two-way conversation. Ultimately I think that's the best way to go for actually reducing risks.

Hey Seth,

Are you coordinating with FLI and FHI to have some division of labor? What would you identify as GCRI's main comparative advantage?

Best, Ales

Hi Ales,

Are you coordinating with FLI and FHI to have some division of labor?

We are in regular contact with both FLI & FHI. FHI is more philosophical than GCRI. The most basic division of labor there is for FHI to develop fundamental theory and GCRI to make the ideas more applied. But this is a bit of a simplification, and the coordination there is informal. With FLI, I can't yet point to any conceptual division of labor, but we're certainly in touch. Actually I was just spending time with Max Tegmark over the weekend in NYC, and we had some nice conversations about that.

What would you identify as GCRI's main comparative advantage?

GCRI comes from the world of risk analysis. Tony Barrett and I (GCRI's co-founders) met at a Society for Risk Analysis conference. So at the core of GCRI's identity and skill set is rigorous risk analysis and risk management methodology. We're also good at synthesizing insights across disciplines and across risks, as in our integrated assessment, and at developing practical risk reduction interventions. Other people and other groups may also be good at some of this, but these are some of our strengths.

OK, I'm wrapping up for the evening. Thank you all for these great questions and discussion. And thanks again to Ryan Carey for organizing.

I'll check back in tomorrow morning and try to answer any new questions that show up.

Thanks very much for giving some of your time to discuss this important topic with all of us! It's great to build a stronger connection between effective altruists and GCRI and to get a better idea of how you're thinking about analysing and predicting risks. Good luck with GCRI and I look forward to hearing how GCRI comes along with its new, research-focussed direction.

Thanks again for your time, comments and being a nucleation point for conversation!

Here's another question: what kind of researchers do you think are needed most at GCRI? And do you expect the kinds of researchers that come to you are very different from the ones that are needed for catastrophic risk research in general, like at FHI, MIRI, FLI or CSER?

what kind of researchers do you think are needed most at GCRI?

Right now, I would say researchers who can do detailed risk analysis similar to what we did in our inadvertent nuclear war paper: http://sethbaum.com/ac/2013_NuclearWar.html. The ability to work across multiple risks is extremely helpful. Our big missing piece has been biosecurity risks. However, we have a new affiliate, Gary Ackerman, who is helping out with that, and I'm participating in a biosecurity fellowship program that will also help. But we could still use more on biosecurity. That includes natural pandemics, biological weapons, biotech lab accidents, etc.
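For a sense of what that kind of detailed risk analysis involves, here is a minimal fault-tree-style sketch in the spirit of (but far simpler than) the inadvertent nuclear war model; every parameter is a hypothetical placeholder, not a number from the paper:

```python
import math

# Toy fault-tree-style sketch of inadvertent nuclear war risk. This is NOT
# the model from the linked paper; all parameters are hypothetical.

false_alarms_per_year = 0.5    # assumed rate of serious false alarms
p_reaches_leaders = 0.1        # assumed P(alarm escalates to national leaders)
p_launch_given_alarm = 0.01    # assumed P(launch ordered | alarm reaches leaders)

# Effective annual rate of alarms that end in war, then the annual
# probability under a Poisson model of alarm arrivals.
war_rate = false_alarms_per_year * p_reaches_leaders * p_launch_given_alarm
annual_probability = 1 - math.exp(-war_rate)

print(f"Hypothetical annual probability of inadvertent war: {annual_probability:.2e}")
# A full analysis would replace these point estimates with probability
# distributions elicited from experts and propagate the uncertainty,
# e.g., via Monte Carlo sampling.
```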

The other really important thing is people who can develop risk-reducing interventions that bring significant risk reductions and make sense from the perspective of the people who would take these actions. There's a lot of important social science to be done in understanding the motivations of key actors, whether it is politicians, emerging researchers, or whoever else.

And do you expect the kinds of researchers that come to you are very different from the ones that are needed for catastrophic risk research in general, like at FHI, MIRI, FLI or CSER?

Definitely different from MIRI, as they're currently focused on technical AI research and we do not do that. Relative to us, FHI is more philosophical, but we still talk with them a lot. CSER is just getting started with their post-docs arriving later this year, but I see a lot of parallels between CSER's research approaches and GCRI's. And I'm not quite sure what in-house research FLI is doing, so it's hard for me to comment on that.

Overall, we tend to attract more social science and policy research, and more quantitative risk analysis, though that may be changing with CSER doing similar work. Regardless, we have excellent relations with each of these organizations, and collaborate with them where appropriate.

Hi Seth. I'm just finishing up work and am going to dump a bunch of questions here, then run home. Sorry for the firehose, and thank you for your time and work!


If I wanted to work at GCRI or a similar think-tank/institution, what skills would make me most valuable?

What are your suggestions for someone who's technically inclined and interested in directly working on existential risk issues?

I'm particularly worried about the risks of totalitarianism, potentially leading to what, IIRC, Bostrom calls a 'whimper': just a generally shitty future in which most people don't have a chance to achieve their potential. To me this seems as likely as AI risk, if not more so. What are your thoughts?

Over the twentieth century we sort of systematically deconstructed a lot of our grand narratives, like 'progress'. Throwing out the narratives that supported colonialism was probably a net win, but it seems like we're now at a point where we really need some new stories for thinking about the dangerous place we are in, and the actions that we might need to take. Do you have any thoughts on narratives as a tool for dealing with x-risks?

How can we make our societies generally resilient to threats? Once we have some idea of how to make ourselves more resilient, how can we enact these ideas?

I think that a really robust space program could be very important for x-risk mitigation. What are your thoughts? Do you see space-policy advocacy as an x-risk related activity?

thank you for your time and work!

You're welcome!

If I wanted to work at GCRI or a similar think-tank/institution, what skills would make me most valuable?

Well, I regret that GCRI doesn't have the funds to be hiring right now. Also, I can't speak for other think tanks. GCRI runs a fairly unique operation. But I can say a bit on what we look for in people we work with.

Some important things to have for GCRI include: (1) a general understanding of gcr/xrisk issues, for example by reading research from GCRI, FHI, and our colleagues; (2) deep familiarity with specific important gcrs, including research literature, expert communities, and practitioner communities; (3) capability with relevant methodologies in quantitative risk analysis such as risk modeling and expert elicitation; (4) demonstrated ability to publish in academic journals or significant popular media outlets, speak at professional conferences, or otherwise get your ideas heard; (5) ability to work across academic disciplines and professions, and to work with teams of similarly diverse backgrounds.
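As a small illustration of item (3), here is a toy sketch of one standard expert-elicitation step: pooling several experts' probability estimates for the same event. The numbers and pooling rules are my own illustration, not a GCRI method:

```python
import math

# Toy example: combine several experts' elicited probabilities for one event.
# The estimates below are hypothetical.
expert_estimates = [0.01, 0.05, 0.002]

# Linear pool: simple arithmetic mean of the probabilities.
linear_pool = sum(expert_estimates) / len(expert_estimates)

# Log-odds pool (geometric mean of odds), a common alternative that combines
# estimates multiplicatively rather than additively.
def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

log_odds_pool = inv_logit(sum(logit(p) for p in expert_estimates) / len(expert_estimates))

print(f"Linear pool:   {linear_pool:.4f}")
print(f"Log-odds pool: {log_odds_pool:.4f}")
```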

What are your suggestions for someone who's technically inclined and interested in directly working on existential risk issues?

It depends on what you mean by 'technically inclined'. Could you clarify?

I'm particularly worried about the risks of totalitarianism, potentially leading to what, IIRC, Bostrom calls a 'whimper': just a generally shitty future in which most people don't have a chance to achieve their potential. To me this seems as likely as AI risk, if not more so. What are your thoughts?

I don't have confident estimates on relative probabilities, but I agree that totalitarianism is important to have on our radar. It's also a very delicate risk to handle, as it points directly to the highest bastions of power. Interestingly, totalitarianism risk resonates well with certain political conservatives who might otherwise dismiss gcr talk as alarmist. At any rate, I would not discourage you from looking into totalitarianism risk further.

Over the twentieth century we sort of systematically deconstructed a lot of our grand narratives, like 'progress'. Throwing out the narratives that supported colonialism was probably a net win, but it seems like we're now at a point where we really need some new stories for thinking about the dangerous place we are in, and the actions that we might need to take. Do you have any thoughts on narratives as a tool for dealing with x-risks?

First, I commend you for thinking in terms of deconstructed narratives and narratives as tools. I'm curious as to your background. Most people I know who self-identify as 'technically inclined' cannot speak coherently about narrative construction.

This is something I think about a lot. One narrative I use comes from James Martin's book 'The Meaning of the 21st Century'. The title on its own offers a narrative, essentially the same as in Martin Rees's 'Our Final Century'. Within the book, Martin speaks of this era of human civilization as going through a period of turbulence, like in a river with rapids. I don't have the exact quote here but I think he uses the river metaphor. At any rate, the point is that global civilization is going through a turbulent period. If we can successfully navigate the turbulence, we have a great, beautiful future ahead of us. I've used this in a lot of talks with a lot of different audiences and it seems to resonate pretty well.

How can we make our societies generally resilient to threats? Once we have some idea of how to make ourselves more resilient, how can we enact these ideas?

One common proposal is to stockpile food and other resources, or even to build refuges. This could be very helpful. An especially promising idea from Dave Denkenberger of GCRI and Joshua Pearce of Michigan Tech is to grow food from fossil fuels, trees, and other biomass, so that even if the sun is blocked (as in e.g. nuclear winter) we can still feed ourselves. See http://www.appropedia.org/Feeding_Everyone_No_Matter_What. These are some technological solutions. It's also important to have social solutions: institutions that respond well to major disturbances, psychological practices, and more. We say a bit on this in http://sethbaum.com/ac/2013_AdaptationRecovery.html and http://gcrinstitute.org/aftermath, but this is an understudied area of gcr. However, there is a lot of great research on local-scale disaster vulnerability and resilience that can be leveraged for gcr.

I think that a really robust space program could be very important for x-risk mitigation. What are your thoughts? Do you see space-policy advocacy as an x-risk related activity?

It's certainly relevant. I used to think it was not promising due to the extremely high cost of space programs relative to activities on Earth. However, Jacob Haqq-Misra (http://haqqmisra.net) of GCRI and Blue Marble Space made the great point that space programs may be happening anyway for other reasons, in particular political, scientific, and economic reasons. It may be reasonably cost-effective to 'piggyback' gcr reduction into existing space programs. This relates back to an earlier comment I made about the importance of stakeholder engagement.

First, I commend you for thinking in terms of deconstructed narratives and narratives as tools. I'm curious as to your background. Most people I know who self-identify as 'technically inclined' cannot speak coherently about narrative construction.

I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.

I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.

Very interesting!


If you do not mind my taking a stab at this one... resiliency in complex adaptive systems is a function of their diversity. A more biologically diverse ecosystem is more resilient and less prone to collapse than one with fewer species and fewer genera. Similarly, a more diverse economy is less prone to sudden catastrophic failure. In general this pattern can be summarized as: monopolies and concentrations of dominance and power are inherently less resilient and more harmful. I have a paper expanding on these ideas here: theroadtopeace.blogspot.com. In terms of how to enact my ideas, if they are right, it seems the most effective first step is to push for real enforcement of our anti-trust laws.

I see the logic here, but I would hesitate to treat it as universally applicable. Under some circumstances, more centralized structures can outperform. For example, if China or Wal-Mart decides to reduce greenhouse gas emissions, then you can get a lot more than if the US or the corner store decides to, because the latter are more decentralized. That's for avoiding catastrophes. For surviving them, sometimes you can get similar effects. However, local self-sufficiency can be really important. We argued this in http://sethbaum.com/ac/2013_AdaptationRecovery.html. As for anti-trust, perhaps this could help, but it doesn't strike me as the right place to start. It seems like a difficult area to make progress in relative to the potential gains in terms of gcr reduction. But I could be wrong, as I've not looked into it in any detail.

Total mixed bag of questions, feel free to answer any/all. Apologies if you've already written on the subject elsewhere; feel free to just link if so.

What is your current marginal project(s)? How much will they cost, and what's the expected output (if they get funded)?

What is the biggest mistake you've made?

What is the biggest mistake you think others make?

What do you think about the costs and benefits of publishing in journals as strategy?

Do you think the world has become better or worse over time? How? Why?

Do you think the world has become more or less at risk over time? How? Why?

What do you think about Value Drift?

What do you think will be the impact of the Elon Musk money?

How do you think about weighing future value vs current value?

What do you think about population growth/stagnation?

Why did you found a new institute rather than joining an existing one?

Are there any GCRs you are worried about that would not involve a high death count?

What's your probability distribution for GCR timescale?

Personal question, feel free to disregard, but this is an AMA:

How has concern about GCRs affected your personal life, beyond the obvious? Has it affected your retirement savings? Do you plan to have / already have children?

Total mixed bag of questions, feel free to answer any/all. Apologies if you've already written on the subject elsewhere; feel free to just link if so.

No worries.

What is your current marginal project(s)? How much will they cost, and what's the expected output (if they get funded)?

We're currently fundraising in particular for the integrated assessment, http://gcrinstitute.org/integrated-assessment. Most institutional funders have programs on only one risk at a time. We're patching together integrated assessment work from other projects, but hope to get more dedicated integrated assessment funding. Something up to around $1M/yr would probably suit us well for now, but this is significantly higher than what we currently have, and every dollar helps.

What is the biggest mistake you've made?

This is actually an easy one, since we just finished shifting our focus. The biggest mistake we made was letting ourselves get caught up in an ad hoc, unfocused mix of projects, instead of prioritizing better. The integrated assessment is now our core means of prioritizing. See more at http://gcrinstitute.org/february-newsletter-new-directions-for-gcri.

What is the biggest mistake you think others make?

Well, most people make the mistake of not focusing mainly on gcr reduction. Within the gcr community, I think the biggest mistake is not focusing on how best to reduce the risks. Instead a lot of people focus on the risks themselves.

What do you think about the costs and benefits of publishing in journals as strategy?

We publish mainly in academic journals. It takes significant extra effort and introduces delays, but it almost always improves the quality of the final product, it attracts a wider audience, it can be used more widely, and it has significant reputation benefits. But we make heavy use of our academic careers and credentials. It's not for everyone, and that's OK.

Do you think the world has become better or worse over time? How? Why?

It's become better and worse. Population, per capita quality of life, and values seem to be improving. But risks are piling up.

Do you think the world has become more or less at risk over time? How? Why?

More, due mainly to technological and environmental change. Opportunities are also increasing. The opportunities are all around us (for example, the internet), but the risks can be so enormous.

What do you think about Value Drift?

Define?

What do you think will be the impact of the Elon Musk money?

It depends on what proposals they get, but I'm cautiously optimistic that this will really help develop a culture of responsibility and safety among AI researchers. More so because it's not just money - FLI and others are actively nurturing relationships.

How do you think about weighing future value vs current value?

All units of intrinsic value should be weighted equally regardless of location in time or space. (Intrinsic value: see http://sethbaum.com/ac/2012_Value-CBA.html.)
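As a toy illustration of what equal weighting means in practice (my own example, with made-up numbers), compare an undiscounted sum of value with one shrunk by a pure time discount rate:

```python
# Toy contrast between equal weighting across time and pure time discounting.
# The value stream and discount rate are hypothetical.

yearly_value = [100] * 200   # assume 100 units of intrinsic value per year for 200 years
discount_rate = 0.03         # a conventional pure time preference, for contrast

equal_weighting = sum(yearly_value)
discounted = sum(v / (1 + discount_rate) ** t for t, v in enumerate(yearly_value))

print(f"Equal weighting:     {equal_weighting:.0f}")   # 20000
print(f"3% pure discounting: {discounted:.0f}")        # roughly 3400
# Equal weighting makes the far future count fully, which is a big part of
# why permanent losses from GCRs carry so much weight in this view.
```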

What do you think about population growth/stagnation?

I don't get too worried about it.

Why did you found a new institute rather than joining an existing one?

Because Tony Barrett and I didn't see any existing institutes capable of working on gcr the way we thought it should be done, in particular working across all the risks with rigorous risk analysis & risk management methodology.

Are there any GCRs you are worried about that would not involve a high death count?

Totalitarianism is one. Another plausible one is toxic chemicals, but this might not be big enough to merit that level of concern. On toxics, see http://sethbaum.com/ac/2014_Rev-Grandjean.pdf.

What's your probability distribution for GCR timescale?

I'm not sure what you mean by that, but at any rate, I don't have confident estimates for specific probabilities.

Personal question, feel free to disregard, but this is an AMA: How has concern about GCRs affected your personal life, beyond the obvious? Has it affected your retirement savings? Do you plan to have / already have children?

It hasn't affected things like retirement or children. Maybe it should, but it hasn't. The bigger factor is not gcr per se but fanaticism towards helping others. I push myself pretty hard, but I would probably be doing the same if I were focusing on, say, global poverty or animal welfare instead of gcr.

What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.

Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you'll find somewhat different paths into effective altruism than other altruistic activities. Many folks I know now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded Qs about EA genesis stories in the census he and a few others conducted.

As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends also engage in vegetarianism (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to sign African Americans up to vote was whether they were part of a group that participated in this very activity.

Buzzwords here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.

Literature to look into:
- Sandy Pentland's "social physics" work: http://socialphysics.media.mit.edu/papers/
- Chapter 4 ("Social proof") of Cialdini's Influence: Science and Practice: http://www.amazon.com/Influence-Science-Practice-5th-Edition/dp/0205609996
- McKenzie-Mohr's book on Community-Based Social Marketing: http://www.cbsm.com/pages/guide/preface/

What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.

We have an active synbio project modeling the risk and characterizing risk reduction opportunities, sponsored by the US Dept of Homeland Security: http://gcrinstitute.org/dhs-emerging-technologies-project.

I agree that synbio is an under-invested-in area across the gcr community. Ditto for other bio risks. GCRI is working to correct that, as is CSER.

Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you'll find somewhat different paths into effective altruism than other altruistic activities. Many folks I know now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded Qs about EA genesis stories in the census he and a few others conducted.

Thanks! Very helpful.

As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends also engage in vegetarianism (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to sign African Americans up to vote was whether they were part of a group that participated in this very activity.

Thanks again! I recall seeing data indicating that health was the #1 reason for becoming vegetarian, but I haven't looked into this closely so I wouldn't dispute your findings.

Buzzwords here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.

Literature to look into:
- Sandy Pentland's "social physics" work: http://socialphysics.media.mit.edu/papers/
- Chapter 4 ("Social proof") of Cialdini's Influence: Science and Practice: http://www.amazon.com/Influence-Science-Practice-5th-Edition/dp/0205609996
- McKenzie-Mohr's book on Community-Based Social Marketing: http://www.cbsm.com/pages/guide/preface/

Thanks!

Here's one question: which risks are you most concerned about?

I shy away from ranking risks, for several reasons:

  • The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they're not really distinct risks.

  • Ultimately what's important to rank is not the risks themselves, but the actions we can take to reduce them. We may sometimes have better opportunities to reduce smaller risks. For example, maybe some astronomers should work on asteroid risks even though this is a relatively low-probability risk. (A toy sketch of this kind of comparison follows below.)

Also, the answer to this question varies by time period. For, say, the next 12 months, nuclear war and pandemics are probably the biggest risks. For the next 50-100 years, we need to worry about these plus a mix of environmental and technological risks.
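On ranking actions rather than risks, here is the toy sketch mentioned above. The interventions, risk reductions, and costs are all hypothetical, purely for illustration:

```python
# Toy ranking of interventions by expected absolute risk reduction per dollar.
# Every figure below is a hypothetical placeholder.

interventions = {
    "nuclear de-alerting":   {"risk_reduction": 1e-3, "cost_usd": 1e8},
    "pandemic surveillance": {"risk_reduction": 5e-4, "cost_usd": 5e7},
    "asteroid survey":       {"risk_reduction": 1e-8, "cost_usd": 1e7},
}

ranked = sorted(
    interventions.items(),
    key=lambda item: item[1]["risk_reduction"] / item[1]["cost_usd"],
    reverse=True,
)

for name, info in ranked:
    per_dollar = info["risk_reduction"] / info["cost_usd"]
    print(f"{name:22s} risk reduced per $: {per_dollar:.2e}")
# The ranking compares opportunities, not raw risk sizes; comparative advantage
# (e.g., astronomers working on asteroids) can also justify work on smaller risks.
```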

And who do you think has the power to reduce those risks?

There's the classic Margaret Mead quote: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it is the only thing that ever has." There's a lot of truth to this, and I think the EA community is well on its way to being another case in point. That is, as long as you don't slack off! :)

That said, I keep an eye on a mix of politicians, other government officials, researchers, activists, celebrities, journalists, philanthropists, entrepreneurs, and probably a few others. They all play significant roles and it's good to be able to work with all of them.

For what it's worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad.

My interest in x-risk comes from wanting to work on big/serious problems. I can't think of a bigger one than x-risk.

For what it's worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad.

I mainly eat veg foods too. It reduces environmental problems, which helps on gcr/xrisk. And it's good for livestock welfare, which is still a good thing to help on. And it lowers global food prices, which is good for global poverty. And apparently it's also healthy.

My interest in x-risk comes from wanting to work on big/serious problems. I can't think of a bigger one than x-risk.

Yeah, same here. I think the most difficult ethical issue with gcr/xrisk is the idea that other, smaller issues don't matter so much. It's like we don't care about the poor or something like that. What I say here is that no, it's precisely because we do care about the poor, and everyone else, that it's so important to reduce these risks. Because unless we avoid catastrophe, nothing else really matters. All that work on all those other issues would be for nothing.

Aren't we all impressed with how far effective altruists have come!

Here's one question: which risks are you most concerned about? And who do you think has the power to reduce those risks?

Oops, I think I answered this question up above. I think this is the link: http://effective-altruism.com/ea/fv/i_am_seth_baum_ama/2v9


One of the major obstacles to combating global warming at the governmental level in America is the large financial investment that the fossil fuel industry makes in politicians in return for tens of billions of dollars in government assistance every year (widely varied numbers depending on how one calculates the incentives and tax breaks and money for research and so on). There seems to me to be only one way to change the current corrupt money-for-control-of-politicians process, and that is to demand that all political donations be made anonymously, given to the government, which then deposits them in the political party's or candidate's account in a way that hides the identity of the donor from the recipient. This way the donor still has their "speech" and yet cannot wield undue influence on the politician. Most likely many such "donations" will stop as the corrupt people making them will understand that they can simply claim to have given and keep their money. What do you think of this idea? Why would it not work? How do we get it done?

One of the major obstacles to combating global warming at the governmental level in America is the large financial investment that the fossil fuel industry makes in politicians in return for tens of billions of dollars in government assistance every year (widely varied numbers depending on how one calculates the incentives and tax breaks and money for research and so on). There seems to me to be only one way to change the current corrupt money-for-control-of-politicians process, and that is to demand that all political donations be made anonymously, given to the government, which then deposits them in the political party's or candidate's account in a way that hides the identity of the donor from the recipient. This way the donor still has their "speech" and yet cannot wield undue influence on the politician. Most likely many such "donations" will stop as the corrupt people making them will understand that they can simply claim to have given and keep their money. What do you think of this idea? Why would it not work? How do we get it done?

First, I agree that a key to addressing global warming is to address the entrenched financial interests that have been opposing it. So you're zooming in on at least one of the most important parts of it.

Your idea makes sense, at least at first glance. I don't have a good sense for how politically feasible it is, but I'm afraid I'm skeptical. Any change to the structure of the political system that reduces large influences is likely to be fought by those influences. But I would not discourage you from looking into it further and giving it a try.

It looks like a good part of the conversation is starting to revolve around influencing policy. I think there's some big macro social/cultural forces that have been pushing people to be apolitical for a while now. The most interesting reform effort I've heard about lately is Lawrence Lessig's anti-PAC in the US.

How can we effectively level our political games up?

It looks like a good part of the conversation is starting to revolve around influencing policy. I think there's some big macro social/cultural forces that have been pushing people to be apolitical for a while now. The most interesting reform effort I've heard about lately is Lawrence Lessig's anti-PAC in the US. How can we effectively level our political games up?

I agree there are macro factors pushing people away from policy. However, that can actually increase the effectiveness of policy engagement: less competition.

A great way to level up in politics is to get involved in local politics. Local politics is seriously underrated. It is not terribly hard to actually change actual policies. And you make connections that can help you build towards higher levels.

For gcr, a good one is urban planning to reduce greenhouse gas emissions. I'm biased here, because I'm an urban planning junkie, but there's always loads of opportunity. Here in NYC I have my eye on a new zoning policy change. It's framed in terms of affordable housing, not global warming, but the effect is the same. See http://www.vox.com/2015/2/21/8080237/nyc-zoning-reform.