Comment author: marialma 16 October 2017 04:14:09AM *  0 points [-]

You'd have to get a ton of people with minimal overlap in professional interests/skills to work together in such an agency. And depending on the 'pet projects' of the people at the highest levels, you might get a disproportionate focus on one particular risk (similar to what you see in EA right now - it might be difficult to retain biologists working on antimicrobial resistance or gene drive safety). Then the politics of funding within such an agency would be another minefield entirely - "oh, why does the bioterrorism division get x% more funding than the nuclear non-proliferation division?".

How do you figure that x-risks are interdisciplinary in nature? AI safety has little crossover with, say, bioterrorism. Speaking to what I personally know, even within disciplines, antimicrobial resistance research has very little to do with outbreak detection/epidemic control, even if researchers are interested in both. You already have agencies that work on each of these problems, so what's the point in creating a new one? It seems like unnecessary bureaucracy to have to figure out what specifically falls under the X-risk Agency's domain vs DARPA vs the CDC vs DHS vs USDA vs NASA.

I think governments ARE interested in catastrophic risk; they're just not approaching it in the same ways as EA. They don't see threats to humanity's existence as all falling under one umbrella -- they are all separate issues that warrant separate approaches. Deciding which things are more or less deserving of funding is a political game motivated by different base assumptions. For one, the government is probably more likely to be concerned about the risk of a deadly pandemic than about AI security, because the government will naturally weight current humans as higher value than future humans. If the government assigns weights that the EA community doesn't agree with, how are you going to handle that? Wouldn't the relative levels of funding that causes currently get already reflect that, to some extent?

Also, what about x-risks that are associated with the government to begin with?

Comment author: Khorton 16 October 2017 04:43:22PM 0 points [-]

It sounds like your main concerns are creating needless bureaucracy and moving researchers/civil servants from areas where they have a natural fit (eg pandemic research in the Department of Health) to an interdisciplinary group where they can't easily draw on relevant expertise and might be unhappy over different funding levels.

The part of that I'm most concerned about is moving people from a relevant organisation to a less relevant one. It does make sense for pandemic preparedness to sit under Health.

The part of the current system that I'm most concerned about is the identification of new risks. In the policy world, things don't get done unless there's a clear person responsible for them. If no one is responsible for asking "What else could go wrong?", no one will be thinking about it. Alternatively, if people are only responsible for asking "What could go wrong?" within their own departments (Health, Defense, etc.), it could be easy to miss a risk that falls outside the current structure of government departments. Even if an outside expert spots a significant risk (think AI risk), if there's no clear department responsible (in the UK: Business, Energy and Industrial Strategy? Digital, Culture, Media and Sport?), then nothing will happen. If we had a clear place to go every time a concern comes up, where concerns could be assessed against each other and prioritised, we would be better at dealing with risks.

In the US, maybe Homeland Security fills this role? Canada has a Minister of Public Safety and Emergency Preparedness, so that's quite clear. The UK doesn't have anything as clear-cut, and that can be a problem when it comes to advocacy.

About your other points:

  • I don't like needless bureaucracy either, but it seems like bureaucracy is a major part of getting your issue on the agenda. It might be necessary, even if it doesn't seem incredibly efficient.

  • I actually think it would be really good to compare government funding for different catastrophic scenarios. I doubt it's based on anything so rational as weighting current people more highly than future people. :)

  • On government risks: hopefully if it's explicitly someone's job to consider risks, they can put forward policy ideas to mitigate those risks. I want those risks to be easy to notice and make policy around.

Comment author: marialma 14 October 2017 09:23:45PM 0 points [-]

I am not sure you are giving governments enough credit. With regard to things like gene drive safety, certain agencies are already working on them. I know some researchers who just got a DARPA grant to work on how to contain and manage gene drives. US military research also includes plenty of work on bioterrorism, both against agriculture and through human pathogens. Grantmaking efforts are relatively rapid ways to get this stuff done, I think?

X-risk is so broad and cuts across so many different fields that dedicating an entire agency to it seems difficult, especially if you consider effectiveness.

Comment author: Khorton 15 October 2017 11:22:53PM *  0 points [-]

I think the breadth and interdisciplinary nature of x-risks are the best arguments for a dedicated agency with a mandate to consider any plausible catastrophic risks. It's too easy to overlook risks without a natural "home" in a particular department right now.

Comment author: RyanCarey 11 October 2017 01:57:19AM 0 points [-]

Maybe you would start with a small part of the defense bureaucracy?

Comment author: Khorton 11 October 2017 07:37:59AM 0 points [-]

We'd have to think very carefully about how we frame it. The choice of framing is less obvious than it might appear at first, irreversible, and a major factor in how successful we are at improving government responses to risks overall.

People will expect it to address different issues if it's under Defense rather than Health and Human Services or Homeland Security. If we make it part of the defense bureaucracy, it's there forever, which has pros and cons. That would likely be a better approach somewhere like the US, where defense is relatively well funded, than somewhere like Canada, where the defense budget is regularly being cut. It's also a better approach if we're very concerned about nuclear war and bioterrorism and want to frame AGI as a hostile power. It's a worse option if we want to frame dangerous AGI as domestic enterprise gone wrong and focus on issues like pandemics and climate change. If we decide that creating government agencies is an important part of our long-term policy strategy, several people should think very hard about where those agencies should sit in each government we lobby.

Comment author: nexech 10 October 2017 02:49:03PM 3 points [-]

Do you know of any resources discussing the pros and cons of the introduction of new government agencies?

An idea worth discussing, regardless.

Comment author: Khorton 10 October 2017 11:12:08PM 0 points [-]

I saw the idea in passing and it caught my eye. I'll look out for this kind of information over the next week.

Comment author: MHarris 10 October 2017 04:10:56PM 0 points [-]

One issue to consider is whether catastrophic risk is a popular enough issue for an agency to sustain itself on. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.

Comment author: Khorton 10 October 2017 04:34:20PM *  0 points [-]

Both creating and sustaining a government agency will likely take more popular support than we currently have, but I still think it's an important long-term goal.

I'm under the impression that agencies are less dependent on the ebb and flow of public opinion than individual policy ideas. However, they would certainly still need some public support. On the other hand, having an agency for catastrophic risk prevention might give the issue legitimacy and actually make it more popular.

Changing the Government's Approach to Catastrophic Risks

Many Effective Altruists are concerned with catastrophic risks—any potential events that could harm a large portion of humanity. Although we know that these risks need different responses, we recognize that catastrophic risks have a lot in common. By considering several catastrophic risks, we can prioritize our responses based on how...
Comment author: Khorton 01 October 2017 08:46:55PM 6 points [-]

This makes a lot of sense to me - people usually give me a funny look if I mention AI risks. I'll try mentioning "AI accidents" to fellow public policy students and see if that phrase is more intuitive.

Comment author: Khorton 20 September 2017 09:18:03AM 1 point [-]

If possible, I'd reduce the reading age of the questions by using simpler words and shorter sentences. I consistently overestimate the reading ability of average citizens.

If these statements were really on a ballot, people would likely have seen advertisements or news clips about the proposals. Right now, people have never heard of these proposals. It's important that they understand what you're asking.

Comment author: Khorton 06 September 2017 07:46:15PM 1 point [-]

  • Seven Habits of Highly Effective People - Read it in high school and found it's influenced my thinking since, especially the part about keeping promises to yourself.

  • Persepolis - Read it when I was 18 and found it a useful fictional introduction to a culture very different from my own. Many other books could do a similar job.

  • Getting to Yes - Changed how I think about negotiation.

  • The Gospel of John - Changed how I think about everything.

  • Deep Work - I read this recently, and while I don't agree with everything (I think the author overreaches occasionally), I do think it would have been useful at 18.

Comment author: Khorton 24 June 2017 07:17:36PM 0 points [-]

I'm excited about the idea of new funds. As a prospective user, my preferences are:

  • Limited / well-organised choices. This is because I, like many people, get overwhelmed by too many choices. For example, perhaps I could choose between global poverty, animal welfare, and existential risks, and then choose between options within the category (eg "Low-Risk Global Poverty Fund" or "Food Security Research Fund").

  • Trustworthy fund managers / reasonable allocation of funds. There are many reasonable ways to vet new funds, but ultimately I'm using the service because I don't want to have to carefully vet them myself.
