Comment author: SimonBeard 22 August 2018 10:56:34PM 4 points [-]

Can I ask why you actually want to categorize ethics like this at all? I know it is traditional, and it can be very helpful when teaching ethics to set things out like this, since otherwise students often miss the profound differences between ethical theories. However, a lot of exciting work has never fallen into these categories, which are in any case basically post hoc classifications of the work of Aristotle, Plato and Mill. Hume's work, for instance, is pretty clearly 'none of the above', and a lot of good creative work has been done over the past century or more in trying to break down the barriers between these schools (Sidgwick and Parfit being the two biggest names here, but by no means the only ones). Personally, I think there are a lot of good tools and powerful arguments to be found across the ethical spectrum, and that so long as you appreciate the true breadth of diversity among ethical theories, breaking them down like this is no longer much help for anything really.

From an EA perspective, I think the one distinction that may be worth paying attention to, and that fits (imperfectly) into your 'consequentialism' vs 'deontology and virtue ethics' distinction, is between moral theories that can be incorporated into an 'expected moral value' framework and those that cannot. This is an important distinction because it places a limit on how far one can go in making precise judgements about what one ought to do in the face of uncertainty, which is something that may be of concern to all EAs. However, this distinction emerges from practice rather than being baked into moral theories at the level of first principles, and there are other approaches, such as the 'parliamentary model' for handling moral uncertainty, that seek to overcome it.
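To make the distinction concrete, here is a minimal sketch of what an 'expected moral value' calculation looks like. All theories, credences and values here are made up for illustration; the point is only the structure, which presupposes that the outputs of different moral theories are comparable on a single scale (precisely the assumption the parliamentary model avoids).

```python
# Hypothetical sketch of "expected moral value" under moral uncertainty.
# Credences and values are illustrative, not drawn from any real analysis.

# Credence assigned to each moral theory (these should sum to 1).
credences = {"utilitarian": 0.6, "deontological": 0.4}

# Moral value each theory assigns to some candidate action, assuming
# the theories' verdicts can be placed on one common scale -- the
# contested assumption this framework requires.
values = {"utilitarian": 10.0, "deontological": -2.0}

expected_moral_value = sum(credences[t] * values[t] for t in credences)
print(expected_moral_value)  # 0.6 * 10 + 0.4 * (-2) = 5.2
```

A theory that refuses to assign scalar values to actions (or assigns only incomparable rankings) simply cannot be slotted into this sum, which is the limit on precise judgement under uncertainty mentioned above.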

Comment author: rhys_lindmark 08 August 2018 01:15:05AM 2 points [-]

Hey Simon! Thanks for writing up this paper. The final 1/3 is exactly what I was looking for!

Could you give us a bit more texture on why you think it's "best not to put this kind of number on risks"?

Comment author: SimonBeard 09 August 2018 06:23:53PM 8 points [-]

Hey Rhys

Thanks for prompting me on this. I was hoping to find time for a fuller reply to you, but this will have to do; you only asked for the texture after all. My concerns are somewhat nebulous, so please don't take them as any cast-iron reason not to seek out estimates for the probability of different existential risks. However, I think they are important.

The first relates to the degree of uncertainty that surrounds any estimate of this kind and how it should be handled. There are actually several sources of this.

The first of these relates to the threshold for human extinction. We don't actually have very good models of how the human race might go extinct. Broadly speaking, human beings are highly adaptable, and we can of course survive across an extremely wide range of habitats, at least with sufficient technology and planning. So roughly, for human extinction to occur, a change must either be extremely profound (such as the destruction of the earth, our sun or the entire universe), very fast (such as a nuclear winter), something that can adapt to us (such as AGI or aliens), or something that we choose not to adapt to (such as climate change). However, personally, I have a hard time even thinking about just what the limits of survivability might be. Now, it is relatively easy to cover this with a few simplifying assumptions, for instance that 10 degrees of climate change either way would clearly represent an existential threat. However, these are only assumptions. Then there is the possibility that we will actually be more vulnerable to certain risks than it appears, for instance that certain environmental changes might cause an irrevocable collapse in human civilization (or in the human microbiome, if you are that way inclined). The Global Challenges Foundation used the concepts of 'infinite threshold' and 'infinite impact' to capture this kind of uncertainty, and I think they are useful concepts. However, they don't necessarily speak to our concern to know the probability of human extinction and x-risk, rather than that of potential x-risk triggers.

The other obvious source of uncertainty is uncertainty about what will happen. This is more mundane in many ways; however, when we are estimating the probability of an unprecedented event like this, I think it is easy to understate the uncertainty inherent in such estimates, because there is simply so little data to contradict our main assumptions, leading to overconfidence. The real issue with both of these, however, is not that uncertainty means we should not put numerical values on the likelihood of anything, but that we are just not very good at dealing with numerical figures that are highly uncertain, especially where these are stated and debated in a public forum. Even if uncertainty ranges are presented, and they accurately reflect the degree of certainty the assessor can justifiably claim to have, they quickly get cut out, with commentators preferring to focus on one simple figure, be it the mean, upper or lower bound, to the exclusion of all else. This happens, and we should not ignore the pitfalls it creates.

The second concern I have is about context. In your post you mention the famous figure from the Stern Review, and this is a great example of what I mean. Stern came up with that figure for one reason, and one alone. He wanted to argue for the highest discount rate he believed was ethically justified, in order to give maximum credence to his conclusions (or, if you are more cynical, perhaps 'he wanted to make it look like he was arguing for...'). However, since he also thought that most economic arguments for discounting were not justified, he was left with the conclusion that the only reason to prefer wellbeing today over wellbeing tomorrow was that there might be no tomorrow. His 0.1% chance of human extinction per year (note that this is supposedly the 'background' rate, by the way; it is definitely not the probability of a climate-induced extinction) was the highest figure he could propose that would not be taken as overly alarmist. If you think that sounds a bit off, then reflect on the fact that the mortality rate in the UK at present is around 0.8%, so Stern was saying that one could expect more than 10% of human mortality in the near future to result from human extinction. I think that is not at all unreasonable, but I can see why he didn't want to put the background extinction risk any higher. Anyway, the key point here is that none of these considerations was really about producing any kind of estimate of the likelihood of human extinction; it was just a guess that he felt would be reasonably acceptable from the point of view of trying to push up the time discount rate a bit. However, of course, once it was out there it got used, and continues to get used, as if it were something quite different.
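The arithmetic behind that comparison can be spelled out. Using only the two figures above (Stern's assumed 0.1% annual background extinction probability and an approximate 0.8% annual UK mortality rate):

```python
# Illustrative arithmetic behind the Stern comparison, using only the
# two figures quoted above.
extinction_rate = 0.001    # Stern's assumed annual "background" extinction probability
uk_mortality_rate = 0.008  # approximate annual UK mortality rate

# If extinction occurred at this background rate, the share of expected
# near-term mortality attributable to extinction would be roughly:
share = extinction_rate / uk_mortality_rate
print(f"{share:.1%}")  # 12.5%
```

So on Stern's own number, extinction events would account for roughly one in eight expected deaths, which is why a higher background figure would have looked alarmist.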

The third concern I have is that I think it can be at least somewhat problematic to break down existential risks by threat, which people generally need to do if they are to assign probability estimates to them. To be fair, you are here interested in the probability of human extinction as a whole, which does not face this particular problem. However, many of the estimates I have come across relate to specific threats. The issue here is that much of the damage from any particular threat comes from its systemic and cascading effects. For instance, when considering the existential threat from natural pandemics, I am quite unconcerned that a naturally occurring (or even most man-made) pathogens might literally wipe out all of humanity; the selection pressures against that would be huge. I am somewhat more concerned that such a pandemic might cause a general breakdown in global order, leading to massive global wars or the collapse of the global food supply. However, I am mostly concerned that a pandemic might cause a social collapse in a single state that possessed nuclear weapons, leading to those weapons becoming insecure. If I simply include this in the probability of either human extinction via pandemic or via nuclear war, that seems to me to be misleading. However, if it got counted in both, this could lead to double counting later on. Of course, with great care and attention this sort of problem can be dealt with. On the whole, however, when people make assessments of the probability of existential risks, they tend to pool together all the available information, much of which has been produced without any coordination, making such double counting, or zero counting, not unlikely.
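A toy calculation shows how the double counting arises. All probabilities here are made up purely for illustration; the point is only that filing a cascading pathway (pandemic leads to state collapse leads to nuclear war) under both headline threats inflates the pooled total, while filing it under neither deflates it.

```python
# Toy illustration of double counting cascading risk pathways.
# All numbers are invented for illustration only.
p_pandemic_direct = 0.0001      # pandemic directly causes extinction
p_nuclear_direct = 0.0002       # nuclear war (other triggers) causes extinction
p_pandemic_to_nuclear = 0.0003  # pandemic -> state collapse -> nuclear war -> extinction

# If the cascading pathway is filed under BOTH headline risks, then
# pooling the per-threat figures counts it twice:
naive_total = (p_pandemic_direct + p_pandemic_to_nuclear) \
            + (p_nuclear_direct + p_pandemic_to_nuclear)

# Counting each (assumed disjoint) pathway exactly once:
correct_total = p_pandemic_direct + p_nuclear_direct + p_pandemic_to_nuclear

print(naive_total, correct_total)  # the naive pooled figure overstates total risk
```

Zero counting is the mirror image: if each assessor assumes the pathway belongs to the other threat category, `p_pandemic_to_nuclear` drops out of the pooled total entirely.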

Please let me know if you would like me to try and write more about any of these issues (although, to be honest, I am currently quite stretched, so this may have to wait a while). You may also be interested in a piece I wrote with Peter Hurford and Catheryn Mercow (I won't lie, it was mostly they who wrote it) on how different EA organizations account for uncertainty, which has had quite a bit of impact on my thinking.

Also if you haven't already seen it you might be interested in this piece by Eliezer Yudkowsky

PS: Obviously these concerns have not yet led me to give up on working with these kinds of estimates, and indeed I would like them to be made better in the future. However, they still trouble me.

Comment author: Sean_o_h 07 August 2018 07:39:51AM 9 points [-]

Incidentally, CSER's Simon Beard has a working paper just up looking at sources of evidence for probability assessments of different X-risks and GCRs, and the underlying methodologies. It may be useful for people thinking about the topic of this post (I also imagine he'd be pleased to get comments, as this will go to a peer-reviewed publication in due course).

Comment author: SimonBeard 07 August 2018 10:49:10AM 5 points [-]

We are indeed keen to get comments and feedback. Also note that the final third or so of the paper is an extensive catalogue of assessments of the probability of different risks, in which we try to incorporate all the sources we could find (though we are very happy if others know of more of these).

I will say however that the overwhelming sense I got in doing this study is that it is sometimes best not to put this kind of number on risks.

Comment author: SimonBeard 16 October 2017 05:18:40PM *  7 points [-]

The public bureaucracy that I am most familiar with is that of the UK. Here, the government's approach to risk, including catastrophic risk, is handled within the Cabinet Office, and in particular its civil contingencies secretariat. From what little I know of the US bureaucracy, I think it is similar in that risk management is the responsibility, at least in part, of the White House.

Risks fall under the authority of different government departments, as you put it, in two different ways. Firstly, for the purpose of the annual National Risk Assessment, departments are assigned 'ownership' of risks for which they are perceived to have particular expertise. They are then tasked with producing a 'reasonable worst case scenario' for these risks, setting out different aspects of the potential impact. In the most recent assessment they were also asked to consider possible impacts beyond this reasonable worst case, which we might see as catastrophic.

These impacts are then collated and evaluated by the Cabinet Office for the purpose of producing the National Risk Assessment, which is classified, and the National Risk Register, which is available here

The second way in which departments are tasked with responding to risks is that each department is asked to consider what effect the reasonable worst case scenario of each risk might have on it, and to consider ways of mitigating this risk. Contrary to the suggestion here, however, this ownership is not one-to-one between departments and risks; rather, all departments are asked to submit plans for how to respond to all risks unless they can show that (1) the impact on them will be minimal, or (2) if things are so bad that the impact on them would be significant, then things have already become so bad that this is unlikely to be a significant priority. Obviously some departments will have a lot more to do to respond to some risks than others, but in fact each department plans for the entire breadth of risks that fall into the National Risk Assessment.

I think this process is preferable to having a separate department for catastrophic risks in several respects. Firstly, the central executive department is best placed to coordinate activity across other departments, which I think is something we would all agree is important. As other commentators have suggested, a separate department might well end up having budgetary and other disputes with existing departments, hampering such coordination. Secondly, this arrangement helps to highlight that risk mitigation should be right at the heart of government, as part of its central executive function. Contrast this, for instance, with climate change, which has its own department in many countries but is consequently a long way from most executive thinking, although there may be other reasons for this of course.

There are some downsides to this setup, of course. First amongst these is probably that it places catastrophic risks as just one part of the entire selection of risks faced by a country, and to the extent that these risks are global, it may give them less rather than more priority in central government thinking. If catastrophic risk were primarily under the Foreign Office / State Department, it is possible that global risks might get more priority, although it is also possible that exactly the same problem would re-emerge. Another problem is that, even with the introduction of some consideration of scenarios beyond the 'reasonable worst case', it does seem that departments prefer to limit their considerations to what is reasonably likely, rather than what is potentially most catastrophic, because it is easier to base such scenarios on scientific evidence and to make good contingency plans for them. A final issue is that the Cabinet Office seems to classify its information by default, whereas it would probably be better from a scientific and campaigning perspective if assessments of catastrophic risks, and of how we are mitigating them, were freely available.

Perhaps all of these problems together imply that it would be net positive if, within the existing risk assessment frameworks, there were a specific office for assessing potentially catastrophic risks that could provide additional input alongside that from individual departments, do so more openly, and potentially be situated between the Cabinet and Foreign Offices (White House and State Department). However, given the benefits of the existing system, I do not see much likelihood that an entire government department for existential and catastrophic risk would be a net improvement over the existing model, at least in the UK, even if such a thing were politically feasible.

  • This comment was informed by some recent meetings with civil servants; obviously it is possible that it reflects some of their biases in favour of the existing system, which, due to its nature, is hard to evaluate from an external perspective.