
Summary: An MTurk study of people in the United States (N=395) found median estimates of 1%, 5%, and 20% for the chance of human extinction in 50, 100, and 500 years, respectively. People were fairly confident in their answers and tended to think the government should prioritize preventing human extinction more than it currently does.

 

Table of Contents

 

Background

The aims of the study

Methods

Estimated risk of extinction

Confidence

How much the government does and should prioritize preventing human extinction

Participant comments

Discussion

References

 

Background

 

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is important if you are trying to reduce existential risk. Understanding people’s thoughts on existential risk may help explain why the issue has been neglected and suggest ways to change this, and it is crucial for movement building and for deciding how to approach the issue with key decision makers.

 

Some studies have been conducted to address the question of what people think about existential risk, but the results have been inconclusive. In 2017, Spencer Greenberg conducted several studies using Amazon Mechanical Turk and obtained median estimates for the likelihood of human extinction in the next 50 years ranging from .00001% to 1%, with means ranging from 11% to 18%. The percentage of people who estimated a 50% or greater likelihood of extinction within 50 years also ranged from 11% to 18%. Randle and Eckersley (2015) surveyed 2073 people across the US, UK, Canada, and Australia in 2013 and found that 24% rated the likelihood of human extinction in the next 100 years as greater than or equal to 50%.

 

Getting reliable answers for this type of question is notoriously difficult. The wording of the questions has differed across studies, potentially affecting the results. However, the results may vary even when the wording is held constant: Greenberg’s median estimates of 1% and .00001% were both obtained from samples of over 200 people on Mechanical Turk, and the wording of the question was the same in each case. You can view a comparison of the wording and results across his studies here. The way questions are presented adds another level of variability to the answers. Greenberg provided participants with a list of options, and six of the eighteen options were less than 1%. In the study by Randle and Eckersley (2015), on the other hand, 1% was the lowest option available. What’s more, they provided verbal descriptions with the odds, including “No chance, almost no chance” for 1 in 100 and “Slight possibility” for 2 in 10 (!). In the current study, participants were asked to type in a percentage, rather than being given options to choose from.

I hypothesized that the variability in results might be due to people not having strong opinions on the subject and being uncertain about their answers, so I asked people to indicate how confident they were in each of their answers. I also wanted to test whether phrasing the question in terms of going extinct (as Greenberg did) or being wiped out (as Randle and Eckersley did) would affect the responses. Finally, I wanted to start exploring how much of a priority people think preventing human extinction should be for government and whether that answer is correlated with their estimate of extinction risk.

 

The aims of the study were to:

 

  1. Replicate the results of previous studies on public perception of how likely it is that humanity will go extinct in the near future and broaden the range of time scales asked about

  2. Begin to explore how confident people are of these beliefs

  3. Check whether wording the question as “go extinct” versus “be wiped out” affects people’s estimates

  4. Gauge public opinion on the government’s role in preventing human extinction



Methods

 

The survey was administered via Positly, drawing from Mechanical Turk workers in the USA. Mechanical Turk was chosen for the ease and speed of getting responses, and because published work in psychology suggests that results obtained through Mechanical Turk are generally reliable (Paolacci et al. 2010, Buhrmester et al. 2011, Mason & Suri 2012). The survey was live on Positly on July 24, 2018, with the title “Risk to humanity” and the description “This short questionnaire is part of a study I am conducting as an intern at the Centre for Effective Altruism. Participation is voluntary, and you can quit at any time. The anonymized results will be published online.”

 

After excluding data from people who entered a number greater than 100 for any of the questions asking for a percent, there were 395 participants (247 F, 158 M), whose data is analyzed below. Participants ranged in age from 19 to 79, with a median of 36 and mean of 39. 55% of participants had completed a bachelor’s degree or higher, and the median self-reported income was $40,000 to $59,999.

 

The survey was created using GuidedTrack. Participants were asked about the likelihood of humans going extinct in 50, 100, and 500 years (presented in a random order). After each of these questions, they were asked how confident they were of their estimate. Then they were asked (in a random order) how much the government prioritizes preventing human extinction and how much it should prioritize it. More details about each question and its results are presented below.

 

Estimated risk of extinction

 

Participants entered their numeric answer into a text box embedded in a sentence. There was no set range, and if a participant entered a number greater than 100 for any of the questions, their responses to all of the questions were excluded from the analysis.
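The exclusion rule described above can be sketched in a few lines. This is a hypothetical illustration with invented data and field names, not the actual analysis code:

```python
# A participant is dropped entirely if ANY of their percentage answers
# exceeds 100. The responses below are invented for illustration.
raw_responses = [
    {"p50": 1, "p100": 5, "p500": 20},
    {"p50": 0, "p100": 2, "p500": 150},   # invalid: 150 > 100
    {"p50": 10, "p100": 25, "p500": 50},
]

# Keep only participants whose every answer is a valid percentage.
valid = [r for r in raw_responses
         if all(0 <= v <= 100 for v in r.values())]

print(len(valid))  # the second participant is excluded
```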

 

Example question:

How likely is it that humans will be wiped out in the next 50 years?

There is a ___ percent chance that humans will be wiped out in the next 50 years.

 

The estimated risk of extinction increased as the time scale increased, as can be seen in the table and box plot below. The variance of the responses was also much greater for the 500-year estimate. Responses were not significantly different between the groups receiving the “go extinct” and “be wiped out” versions of the questions, so I analyze the data from those groups together.
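The post does not name the statistical test used to compare the “go extinct” and “be wiped out” groups. One plausible choice for skewed percentage data like this is a Mann–Whitney U test; the sketch below uses invented responses:

```python
from scipy.stats import mannwhitneyu

# Invented example responses (percent chance of extinction in 50 years)
# for the two wording groups; not the actual study data.
go_extinct = [0, 1, 5, 0, 10, 2, 1, 0, 20, 1]
wiped_out = [1, 0, 3, 0, 15, 1, 2, 0, 25, 0]

# Nonparametric test of whether one group tends to give higher answers.
stat, p = mannwhitneyu(go_extinct, wiped_out, alternative="two-sided")
# A large p-value would be consistent with no detectable wording effect.
```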

 

Estimated risk of human extinction

|                                | 50 years           | 100 years         | 500 years          |
|--------------------------------|--------------------|-------------------|--------------------|
| Median                         | 1%                 | 5%                | 20%                |
| Mean                           | 8%                 | 16%               | 29%                |
| Mode                           | 0 (134 out of 395) | 0 (79 out of 395) | 10 (48 out of 395) |
| Standard deviation             | 16                 | 23                | 30                 |
| Proportion who estimated ≥ 50% | 6%                 | 13%               | 30%                |

In the box-and-whisker plots above, the thick lines represent the medians, and the boxes show the middle 50% of the data. The whiskers are drawn at a distance of 1.5 times the interquartile range beyond the quartiles (the edges of the boxes) or at the minimum and maximum, whichever is less extreme, and any more extreme values are plotted as circles.
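Assuming the plots follow the standard (R-style) convention of whiskers at most 1.5 × IQR beyond the quartiles, the whisker positions can be sketched as follows, with invented sample data:

```python
import numpy as np

# Invented, skewed sample of percentage estimates for illustration.
data = np.array([0, 0, 1, 1, 2, 5, 5, 10, 20, 80])

# Quartiles and interquartile range.
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Whiskers: 1.5 * IQR beyond the quartiles, or the data's min/max,
# whichever is less extreme. Points beyond these are plotted as circles.
lower_whisker = max(data.min(), q1 - 1.5 * iqr)
upper_whisker = min(data.max(), q3 + 1.5 * iqr)
```

With this sample, the value 80 falls beyond the upper whisker and would be drawn as an outlier circle.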

 

Confidence

 

After each of the questions about the likelihood of extinction, participants were asked to rate their confidence using a slider with values from 0 to 100. The 0 end of the scale was labeled “I have no idea” and the 100 end was labeled “I am totally sure”, but no explicit instructions were provided about how to interpret the scale. The question presented the estimate they had just given and used “go extinct”/“be wiped out” as before.

 

Example question:

How confident are you that there is a 1% chance that humans will go extinct in the next 100 years?

 

Confidence in estimate of extinction risk (max 100, min 0)

|        | 50 years | 100 years | 500 years |
|--------|----------|-----------|-----------|
| Median | 80       | 65        | 57        |
| Mean   | 67       | 58        | 54        |
| Mode   | 100      | 100       | 100       |

 

How much the government does and should prioritize preventing human extinction

 

Participants were asked “How much does the government prioritize preventing human extinction?” and “How much should the government prioritize preventing human extinction?” in a randomized order. Answers were provided using a slider that went from 0 to 10, with a precision of .1. The 0 end of the slider was labeled “The government does not devote any resources to preventing human extinction” and the 10 end was labeled “Preventing human extinction is the government's number one priority” (or “should not” instead of “does not” and “should be” instead of “is”). No explicit instructions were provided about how to interpret the scale. This rather vague format was chosen to abstract away from particular issues (e.g., whether extinction prevention should be prioritized more or less than education). The particular score on the ten-point scale does not mean much by itself and cannot tell us about support for or opposition to particular policies or budget decisions, but the idea was to get at people’s perception of how much the government prioritizes the issue versus how much it should.

 

 

|        | Does                                       | Should                                      |
|--------|--------------------------------------------|---------------------------------------------|
| Median | 2.5                                        | 6.2                                         |
| Mean   | 3.1                                        | 5.8                                         |
| Mode   | 0 (52 out of 395, but note that 9 said 10) | 10 (36 out of 395, but note that 20 said 0) |

 

There were weak correlations between participants’ estimates of the likelihood of extinction over each time frame and how much of a priority they said preventing human extinction should be for the government (r = .19, .18, and .18 for 50, 100, and 500 years respectively; p < .001 for all three).
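A minimal sketch of how such a correlation test could be run, using SciPy's `pearsonr` on invented data (the real responses are not reproduced here):

```python
from scipy.stats import pearsonr

# Invented example data: each participant's estimated chance of
# extinction within 50 years, and their 0-10 "should prioritize" rating.
risk_estimates_50yr = [0, 1, 1, 5, 10, 0, 2, 50, 1, 0]
should_prioritize = [3, 5, 6, 7, 8, 2, 6, 9, 5, 4]

# Pearson correlation coefficient and two-sided p-value.
r, p = pearsonr(risk_estimates_50yr, should_prioritize)
```

An r around .2, as reported above, would indicate a weak positive relationship: higher risk estimates go with somewhat higher priority ratings, but explain little of the variance.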

 

Participant comments

 

Positly asks participants for feedback after they complete the survey. In addition to choosing a generic category such as “interesting and enjoyable” or “annoying or aggravating” (only 7 picked the latter option), participants can enter free text responses. Approximately 94 people said that the survey was thought-provoking (or something similar), 70 commented that it was different from other surveys/MTurk work, and 49 said that they rarely or never think about the issue of human extinction.

 

Discussion

 

Returning to the aims of the study:

 

  1. Replicate the results of previous studies on public perception of how likely it is that humanity will go extinct in the near future and broaden the range of time scales asked about

 

The median estimate of the risk of extinction within 50 years (1%) was within the range Greenberg found in his Mechanical Turk studies (.00001% to 1%). It is worth pointing out the difference in format once again: Greenberg had participants choose from a list of options, whereas I asked them to type in a number. Few people gave answers between zero and one percent (5%, 4%, and 3% of people did so for the 50-, 100-, and 500-year questions, respectively). Many more people said there was a 0% chance of extinction over the specified time range (34%, 20%, and 7% for the 50-, 100-, and 500-year questions, respectively). The responses of 0% might represent a belief that there is no way we will go extinct in the specified time range, but it is likely that many of them resulted from people rounding their answer to the nearest whole percent: 1% was also a popular choice (19%, 12%, and 9% of respondents for the 50-, 100-, and 500-year questions, respectively).

 

The proportion of people who put the chance of extinction in 50 or 100 years at greater than or equal to 50% was smaller than in Greenberg’s or Randle and Eckersley’s studies (6% versus 11 to 18% for 50 years, and 13% versus 24% for 100 years). Nonetheless, it is worth noting the wide spread of the responses and the tail of people assigning large probabilities to human extinction in the near future.

 

  2. Begin to explore how confident people are of these beliefs

 

People were more confident about their 50-year estimates than their 100- or 500-year estimates, as we would expect.

 

I also expected that people’s confidence in their answers would be quite low, since I had conducted a few interviews in which I asked about the likelihood of human extinction in the next 100 years, and the interviewees typically expressed a great deal of uncertainty about their estimates. Contrary to my expectations, however, the survey respondents did not generally report great uncertainty about their answers, and many people reported complete confidence in their predictions. This is some evidence against my explanation that the variability in survey results might be due to people having no idea of the answer and just picking something out of the air. However, the large number of people who, without prompting, wrote in the comment section that they had rarely or never thought about the topic suggests that we should not discount that explanation too strongly.

 

Methodological issues may account for this finding: a slider with such fine precision may not have been an intuitive or informative way to ask about confidence. More sophisticated ways of asking about confidence, or testing how beliefs change when people are presented with new evidence, could produce more conclusive evidence on the strength of these beliefs.

 

  3. Check whether wording the question as “go extinct” versus “be wiped out” affects people’s estimates

 

No, this wording did not make a difference.

 

 

  4. Gauge public opinion on the government’s role in preventing human extinction

 

On the whole, people expressed that the government should prioritize preventing human extinction more than it currently does. Participants’ estimates of the likelihood of extinction were correlated with how much of a priority they think it should be for the government, but only weakly.

More research would be needed to determine what policies people would support and how they would rate preventing human extinction relative to other priorities. Future research could also attempt to disentangle the issues of budget and priority, which were confounded in this study because the lower end of the scale mentioned “resources” and the upper end said “priority”. People might think that preventing extinction should be a strategic priority but should not be given a large share of the budget; this came up in the interviews I mentioned above.

 

It is worth bringing up the comments section again: many people said the topic of the survey was thought-provoking, unusual, and/or something they don’t tend to think about. Combining these comments with the finding that people think the government should be doing more to prevent human extinction suggests that the issue might be neglected not because people think it’s unimportant but because they just don’t tend to think about it at all, or at least not seriously.

 

References

 

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5.

Greenberg, S. (2017). Social Science as Lens on Effective Charity: Results from four new studies. https://www.youtube.com/watch?v=tOSpj19eows

Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1), 1–23.

Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5, 411–419.

Randle, M., & Eckersley, R. (2015). Public perceptions of future threats to humanity and societal responses: A cross-national study. Futures, 72, 4–16. http://dx.doi.org/10.1016/j.futures.2015.06.004

 

 

 

Notes:

1. I did this project as part of a research internship at the Centre for Effective Altruism.

2. I will be without internet for several weeks after tomorrow and will respond to comments when I return.

 

 

 




Comments

One issue that I think hasn't been addressed (unless I missed it) is that if you were to ask them to what extent they believe the government does/should prioritize near-future issues, you may find similar results. Or any other specific issue (e.g. education or healthcare). I suspect people will want way more things "prioritized" than is realistically possible.

Unless a study is done with participants who are selected heavily for numeracy and fluency in probabilities, I would not interpret stated probabilities literally as a numerical representation of their beliefs, especially near the extremes of the scale. People are giving an answer that vaguely feels like it matches the degree of unlikeliness that they feel, but they don't have that clear a sense of what (e.g.) a probability of 1/100 means. That's why studies can get such drastically different answers depending on the response format, and why (I predict) effects like scope insensitivity are likely to show up.

I wouldn't expect the confidence question to pick up on this. e.g., Suppose that experts think that something has a 1 in a million chance and a person basically agrees with the experts' viewpoint but hasn't heard/remembered that number. So they indicate "that's very unlikely" by entering "1%" which feels like it's basically the bottom of the scale. Then on the confidence question they say that they're very confident of that answer because they feel sure that it's very unlikely.

Cool study! I wish there were more people who went out and just tested assumptions like this. One high level question:

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is highly important if you are trying to reduce existential risk.

Why is this question highly important for reducing extinction risks? This doesn't strike me as obvious. What kind of practical implications does it have if the general public either assigns existential risks either a very high or very low probability?

You could make an argument that this could inform recruiting/funding efforts. Presumably you can do more recruiting and receive more funding for reducing existential risks if there are more people who are concerned about extinction risks.

But I would assume the percentage of people who consider reducing existential risks to be very important to be much more relevant for recruiting and funding than the opinion of the 'general public'.

Though the opinion of those groups has a good chance of being positively correlated, this particular argument doesn't convince me that the opinion of the general public matters that much.

Public opinion would likely matter for government funding in democracies.

I find it interesting that there's apparently more proportional risk in the second half of this century than in this half and the following centuries. I'm guessing that's just a byproduct of numeric heuristics, but I'd be interested in whether there's anything more on that. Discussions of climate change seem to center on the 50-100 year time horizon, which seems like a somewhat arbitrary choice by scientists, but I could see it influencing public perceptions.

Do you think that asking the same respondents about 50 years, 100 years, and 500 years caused them to scale their answers so that they would be reasonable in relation to each other? Put another way, do you think you would have gotten significantly different answers if you had asked 395 people about 50 years, 395 people about 100 years, and 395 people about 500 years (c.f. scope insensitivity)?

That can be tested on these data, just by looking at the first of the 3 questions that each participant got, since the post says that "Participants were asked about the likelihood of humans going extinct in 50, 100, and 500 years (presented in a random order)."

I expect that there was a fair amount of scope insensitivity. e.g., That people who got the "probability of extinction within 50 years" question first gave larger answers to the other questions than people who got the "probability of extinction within 500 years" question first.

Do you know if this platform allows participants to go back? (I assumed it did, which is why I thought a separate study would be necessary.)