
Summary

Descriptive ethics is the empirical study of people's values and ethical views, e.g. via a survey or questionnaire. This overview focuses on beliefs about population ethics and exchange rates between goods (e.g. happiness) and bads (e.g. suffering). Two variables seem particularly important and action-guiding in this context, especially when trying to make informed choices about how to best shape the long-term future: 1) one’s normative bads-to-goods ratio (N-ratio) and 2) one’s expected goods-to-bads ratio (E-ratio). I elaborate on how a framework consisting of these two variables could inform our decision-making with respect to shaping the long-term future, as well as facilitate cooperation among differing value systems and further moral reflection. I then present concrete ideas for further research in this area and investigate associated challenges. The last section lists resources that discuss further methodological and theoretical issues beyond the scope of the present text.

Descriptive ethics and long-term future prioritization

Recently, some debate has emerged on whether reducing extinction risk is the ideal course of action for shaping the long-term future. For instance, in the Global Priorities Institute (GPI) research agenda, Greaves & MacAskill (2017, p.13) ask “[...] whether it might be more important to ensure that future civilisation is good, assuming we don’t go extinct, than to ensure that future civilisation happens at all.” We could further ask to what extent we should focus our efforts on reducing risks of astronomical suffering (s-risks). Again, Greaves & MacAskill: “Should we be more concerned about avoiding the worst possible outcomes for the future than we are for ensuring the very best outcomes occur [...]?” Given the enormous stakes, these are arguably some of the most important questions facing those who prioritize shaping the long-term future.1

Some interventions increase both the quality of future civilization and its probability. Promoting international cooperation, for instance, likely reduces extinction risks as well as s-risks. However, it seems implausible that a single intervention would be optimally cost-effective at accomplishing both types of objectives at the same time. To the extent that there is a tradeoff between different goals relating to shaping the long-term future, we should make a well-considered choice about how to prioritize among them.

Normative and expected ratios (aka exchange rates and future optimism)

I suggest that this choice can be informed by two important variables: One’s normative bads-to-goods ratio2 (N-ratio) and one’s empirically expected goods-to-bads ratio (E-ratio). Taken together, these variables can serve as a framework for choosing between different options to shape the long-term future.

(For utilitarians, N- and E-ratios amount to their normative / expected suffering-to-happiness ratios. But for most humans, there are bads besides suffering, e.g. injustice, and goods other than happiness, e.g. love, knowledge, or art. More on this below.)

I will elaborate in greater detail below on how to best interpret and measure these two ratios. For now, a few examples should suffice to illustrate the general concept. Someone with a high N-ratio of, say, 100:1 believes that reducing bads is one hundred times as important as increasing goods, whereas someone with an N-ratio of 1:1 thinks that increasing goods and reducing bads are of equal importance.3 Similarly, someone with an E-ratio of, say, 1000:1 thinks that there will be one thousand times as much good as bad in the future in expectation, whereas someone with a lower E-ratio is more pessimistic about the future.4

Note that I don’t assume there is an objective way to measure goods and bads, so a statement like “reducing suffering is x times more important than promoting happiness” is imprecise unless one specifies what exactly is being compared. (See also the section "The measurability of happiness and suffering".)

In short, the more one's E-ratio exceeds one's N-ratio, the higher one’s expected value of the future, and the more one favors interventions that primarily reduce extinction risks.5 In contrast, the more one's N-ratio exceeds one's E-ratio, the more appealing become interventions that primarily reduce s-risks or otherwise improve the quality of the future without affecting its probability. The graphic below summarizes the discussion so far.
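
To complement the graphic, here is a minimal numerical sketch in Python. It is purely illustrative: it assumes (strongly, as noted above) that goods and bads can be placed on a single common scale, and all numbers are made up.

```python
# Minimal sketch: how N- and E-ratios jointly determine the sign of the
# weighted expected value of the future. All numbers are illustrative and
# presuppose a single common scale for goods and bads.

def weighted_ev_of_future(n_ratio: float, e_ratio: float,
                          expected_bads: float = 1.0) -> float:
    """n_ratio: one unit of bad matters n_ratio times as much as one unit
    of good. e_ratio: the future contains e_ratio units of good per unit
    of bad, in expectation. Returns expected_bads * (e_ratio - n_ratio)."""
    expected_goods = e_ratio * expected_bads
    return expected_goods - n_ratio * expected_bads

# N-ratio 100:1, E-ratio 1000:1 -> net-positive future in expectation,
# favoring extinction risk reduction, ceteris paribus.
print(weighted_ev_of_future(n_ratio=100, e_ratio=1000))    # 900.0
# N-ratio 10000:1, same E-ratio -> net-negative future in expectation,
# favoring s-risk reduction.
print(weighted_ev_of_future(n_ratio=10000, e_ratio=1000))  # -9000.0
```

On this toy model, the future is net positive in expectation exactly when the E-ratio numerically exceeds the N-ratio.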

Of course, this reasoning is rather simplistic. In practice, considerations from comparative advantages, tractability, neglectedness, option value, moral trade, et cetera need to be factored in.6 See also Cause prioritization for downside-focused value systems for a more in-depth analysis.7

Interpreting and measuring N-ratios

The rest of this section elaborates on the meaning of N-ratios and explains one approach to measuring, or at least approximating, them. In short, I propose to approximate an individual’s N-ratio by measuring their response tendencies to various ethical thought experiments (e.g. as part of a questionnaire or survey) and comparing them to those of other individuals. These questions could be of (roughly) the following kind:

Imagine you could create a new world inhabited by X humans living in a utopian civilization free of involuntary suffering, and where everyone is extremely kind, intelligent, and compassionate. In this world, however, there also exist 100 humans who experience extreme suffering. 

What’s the smallest value of X for which you would want to create this world?

In short, people who respond with higher equivalence numbers X to such thought experiments should have higher N-ratios, on average.
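
As a hedged illustration of the intended mapping, an answer X can be converted into an implied N-ratio as follows. This is only a toy model: it assumes each utopian life contributes one unit of good and each extremely suffering life one unit of bad, which the caveats below complicate considerably.

```python
# Toy conversion from an equivalence answer X to an implied N-ratio,
# assuming one unit of good per utopian life and one unit of bad per
# extremely suffering life. Purely illustrative.

def implied_n_ratio(x_utopian: float, n_suffering: float = 100) -> float:
    """If x_utopian utopian lives just barely offset n_suffering extremely
    suffering lives, one unit of bad outweighs x/n units of good."""
    return x_utopian / n_suffering

print(implied_n_ratio(100))           # 1.0   -> N-ratio of 1:1
print(implied_n_ratio(10_000))        # 100.0 -> N-ratio of 100:1
print(implied_n_ratio(float("inf")))  # inf   -> lexical priority for reducing suffering
```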

Some words of caution are in order here. First, the final formulations of such questions should obviously contain more detailed information and, for example, specify how the inhabitants of the utopian society live, precisely what form of suffering the humans experience, et cetera. (See also the document “Preliminary Formulations of Ethical Thought Experiments”, which contains much longer formulations.)

Second, an individual’s equivalence number X will depend on what form of ethical dilemma is used and its precise wording. For example, asking people to make intrapersonal instead of interpersonal trade-offs, or writing “preserving” instead of “creating”, will likely influence the responses.

Third, subjects’ equivalence numbers will depend on which type of bad or good is depicted. Hedonistic utilitarians, for instance, regard pleasure as the single most important good and would place great value on, say, computer programs experiencing extremely blissful states. Many other value systems would consider such programs to be of no positive value whatsoever. Fortunately, many if not most value systems regard suffering8 as one of the most important bads and also place substantial positive value on flourishing societies inhabited by humans experiencing eudaimonia – i.e. “human flourishing” or happiness plus various other goods, such as virtue and friendship.9 In conclusion, although N-ratios (as well as E-ratios) are generally agent-relative, well-chosen “suffering-to-eudaimonia ratios” will likely allow for more meaningful and robust interindividual comparisons while still being sufficiently natural and informative. (See also the section "N-ratios and E-ratios are agent-relative" of the appendix for a further discussion of this issue.)

However, even if we limit our discussion to various forms of suffering and eudaimonia, judgments might diverge substantially. For example, Anna might only be willing to trade one minute of physical torture in exchange for many years of eudaimonia, while she would trade one week of depression for just one hour of eudaimonia. Others might make different or even opposite choices. If we had asked Anna only the first question, we could have concluded that her N-ratio is high, but her stance on the second question suggests that the picture is more complicated.

Consequently, one might say that even different forms of suffering and happiness/eudaimonia comprise “axiologically distinct” categories and that, instead of generic “suffering-to-eudaimonia ratios” – let alone “bads-to-goods ratios” – we need more fine-grained ratios, e.g. “suffering_typeY-to-eudaimonia_typeZ ratios”.10

See also “Towards a Systematic Framework for Descriptive (Population) Ethics” for a more extensive overview of the relevant dimensions along which ethical thought experiments can and should vary. “Descriptive Ethics – Methodology and Literature Review” provides an in-depth discussion of various methodological and theoretical questions, such as how to prevent anchoring or framing effects, control for scope insensitivity, increase internal consistency, and so on.

The need for a survey (of effective altruists)

Do these considerations suggest that research in descriptive ethics is simply not feasible? This seems unlikely to me, but it’s at least worth investigating further.

For illustration, imagine that a few hundred effective altruists completed a survey consisting of thirty different ethical thought experiments that vary along a certain number of dimensions, such as the form and intensity of suffering or happiness, its duration, or the number of beings involved.

We could now assign a percentile rank to every participant for each ethical thought experiment. If the concept of a general N-ratio is viable, we should observe that the percentile ranks of a given participant correlate across different dilemmas. That is, if someone gave very high equivalence numbers to the first, say, fifteen dilemmas, it should be more likely that this person also gave high equivalence numbers to the remaining dilemmas. Investigating whether there is such a correlation, how high it is, and how much it depends on the type or wording of each ethical thought experiment, could itself lead to interesting insights.
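
For concreteness, here is a rough sketch of this consistency analysis in Python, run on simulated data (no real survey data exist yet; the data-generating process below simply builds in the latent “general N-ratio” that the survey would test for):

```python
# Sketch of the proposed percentile-rank and consistency analysis on
# simulated survey data. Requires numpy and scipy; all numbers are made up.
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(0)
n_participants, n_dilemmas = 300, 30

# Simulate log equivalence numbers as a latent "general N-ratio" per
# participant plus dilemma-specific noise.
latent = rng.normal(size=(n_participants, 1))
noise = rng.normal(size=(n_participants, n_dilemmas))
answers = np.exp(latent + noise)

# Percentile rank of every participant within each dilemma.
percentile_ranks = np.apply_along_axis(
    lambda col: rankdata(col) / len(col) * 100, 0, answers)
print("Participant 0, first five percentile ranks:", percentile_ranks[0, :5].round(1))

# Average inter-dilemma (Spearman) correlation: if a general N-ratio is
# a viable construct, this should be clearly positive.
rho, _ = spearmanr(answers)  # 30x30 correlation matrix, columns = dilemmas
off_diagonal = rho[~np.eye(n_dilemmas, dtype=bool)]
print(f"Mean inter-dilemma correlation: {off_diagonal.mean():.2f}")
```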

What could we learn from such a survey?

Important and action-guiding conclusions could be inferred from such a survey, both on an individual and on a group level.

First, consider the individual level. Imagine a participant answered with “infinite” in twenty dilemmas. Further assume that the average equivalence number of this participant in the remaining ten dilemmas was also extremely high, say, one trillion. Unless this person has an unreasonably high E-ratio (i.e. is unreasonably optimistic about the future), this person should, ceteris paribus, prioritize interventions that reduce s-risks over, say, interventions that primarily reduce risks of extinction but which might also increase s-risks (such as, perhaps, building disaster shelters11); especially so if they learn that most respondents with lower average equivalence numbers do the same.12

Second, let’s turn to the group level. It could be very useful to know how equivalence numbers among effective altruists are distributed. For example, central tendencies such as the median or average equivalence number could inform allocation decisions within the effective altruism movement as a whole. They could also serve as a starting point for finding compromise solutions or moral trades between varying groups within the EA movement – e.g. between groups with more upside-focused value systems and those with more downside-focused value systems. Lastly, engaging with the actual thought experiments of the survey, as well as its results and potential implications, could increase the moral reflection and sophistication of the participants, allowing them to make decisions more in line with their idealized preferences.
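
One practical wrinkle for such group-level statistics: some respondents will answer “infinite” (reflecting lexical views), which breaks the mean but not the median. A minimal illustration, with made-up numbers:

```python
# Why the median is a natural central tendency for equivalence numbers:
# a few "infinite" (lexical) answers dominate the mean but leave the
# median usable as a compromise point. Numbers are purely illustrative.
import statistics

answers = [5, 20, 100, 1_000, 1_000_000, float("inf"), float("inf")]

print(statistics.mean(answers))    # inf  -- dominated by the lexical answers
print(statistics.median(answers))  # 1000 -- still informative
```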

Descriptive ethics and its importance for multiverse-wide superrationality

Readers unfamiliar with the idea of multiverse-wide superrationality (MSR) are strongly encouraged to first read the paper “Multiverse-wide Cooperation via Correlated Decision Making” (Oesterheld, 2017) or the post “Multiverse-wide cooperation in a nutshell”. Readers unconvinced by or uninterested in MSR are welcome to skip this section.

To briefly summarize, MSR is the idea that if we take the values of superrationalists located elsewhere in the multiverse into account, it becomes more likely that they do the same for us. In order for MSR to work, it is essential to have at least some knowledge about how the values of superrationalists elsewhere in the multiverse are distributed. Surveying the values of (superrational) humans13 is one promising way of gaining such knowledge.14

Obtaining a better estimate of the average N-ratio of superrationalists in the multiverse seems especially action-guiding. For illustration, imagine we knew that most superrationalists in the multiverse have a very high N-ratio. All else equal and ignoring considerations from neglectedness, tractability, etc., this implies that superrationalists elsewhere in the multiverse would probably want us to prioritize the reduction of s-risks over the reduction of extinction risks.15 In contrast, if we knew that the average N-ratio among superrationalists in the multiverse is very low, reducing extinction risks would become more promising.

Another important question is to what extent and in what respects superrationalists discriminate between their native species and species located elsewhere in the multiverse.16

The problem of biased, unreliable, and unstable judgments

Another challenge facing research in descriptive ethics is that at least some answers are likely to be driven by more or less superficial system 1 heuristics generating a variety of biases – e.g. empathy gap, duration neglect, scope insensitivity, and framing effects, to name just a few. While there are ways to facilitate the engagement of more controlled cognitive processes17 that make reflective judgments more likely, not every possible bias or confounder can be eliminated.

All in all, the skeptic has a point when she distrusts the results of such surveys because she assumes that most subjects merely pulled their equivalence numbers out of thin air. Ultimately, however, I think that reflecting on various ethical thought experiments in a systematic fashion, pulling equivalence numbers out of thin air and then using these numbers to make more informed decisions about how to best shape the long-term future is often better – in the sense of dragging in fewer biases and distorting intuitions – than pulling one’s entire decision out of thin air.18

A further problem is that the N-ratios of many subjects will likely fluctuate over the course of years or even weeks.19 Nonetheless, knowing one’s N-ratios will be informative and potentially action-guiding for some subjects – e.g. for those who have already engaged in substantial amounts of moral reflection (and are thus likely to have more stable N-ratios), or for subjects who have particularly high N-ratios such that their priorities would only shift if their N-ratios changed dramatically. Studying the stability of N-ratios is also an interesting research project in itself. (See also the section “moral uncertainty” of another document for more notes on this topic.)

Further resources

The Google Docs listed below discuss further methodological, practical, and theoretical questions which were beyond the scope of the present text. As I might deprioritize the project for several months, I decided to publish my thinking at its current stage to enable others to access it in the meantime.

1) Descriptive Ethics – Methodology and Literature Review.
This document is motivated by the question of what we can learn from the existing literature – particularly in health economics and experimental philosophy – on how to best elicit normative ratios. It also contains a lengthy critique of the two most relevant academic studies about population ethical views and examines how to best measure and control for various biases (such as scope insensitivity, framing effects, and so on).

2) Towards a Systematic Framework for Descriptive (Population) Ethics.
This document develops a systematic framework for descriptive ethics and provides a classification of dimensions along which ethical thought experiments can (and should) vary.

3) Preliminary Formulations of Ethical Thought Experiments.
This document contains preliminary formulations of ethical thought experiments. Note that the formulations are designed such that they can be presented to the general population and might be suboptimal for effective altruists.

4) Descriptive ethics – Ordinal Questions (incl. MSR) & Psychological Measures.
This document discusses the usefulness of existing psychological instruments (such as the Moral Foundations Questionnaire, the Cognitive Reflection Test, etc.). The document also includes tentative suggestions for how to assess other constructs such as moral reflection, happiness, and so on.

Opportunity to give feedback or collaborate

If you're interested in collaborating on the survey, feel free to email me directly at david.althaus[at]foundational-research.org. Please note that the above documents, as well as the project as a whole, are very much a work in progress, so I ask you to understand that much of the material hasn't been polished and, in some cases, does not even accurately reflect my most recent thinking. This also means that there is a significant opportunity for collaborators to contribute their own ideas rather than just execute an already settled plan. In any case, comments in the Google documents or under this text are highly appreciated, whether you're interested in becoming more involved in the project or not.

Acknowledgments

I want to thank Max Daniel, Caspar Oesterheld, Johannes Treutlein, Tobias Pulver, Jonas Vollmer, Tobias Baumann, Lucius Caviola, and Lukas Gloor for their extremely valuable inputs and comments. Thanks also to Nate Liu, Simon Knutsson, Brian Tomasik, Adrian Rorheim, Jan Brauner, Ewelina Tur, Jennifer Waldmann, and Ruairi Donnelly for their comments.

Appendix

N-ratios and E-ratios are agent-relative

Assuming moral anti-realism is true, there are no universal or “objective” goods and bads. Consequently, if we want to avoid confusion, E-ratios and N-ratios should ultimately refer to the values of a specific agent, or, to be more precise, a specific set of goods and bads.

For illustration, consider two hypothetical agents: Agent_1 has an N-ratio of 1:1 and an E-ratio of 1000:1, while agent_2 has an N-ratio of 1:1 and an E-ratio of 1:10. Do these agents share similar values but have radically different conceptions about how the future will likely unfold? Not necessarily. Agent_1 might be a total hedonistic utilitarian and agent_2 an AI that wants to maximize paperclips and minimize spam emails. Both might agree that the future will, in expectation, contain 1000 times as much pleasure as suffering but 10 times as many spam emails as paperclips.

Of course, the sets of bads and goods of humans will often overlap, at least to some extent. Consequently, if we learn that human_1 has a much lower E-ratio than human_2, this tells us that human_1 is probably more pessimistic than human_2 and that the two likely disagree about how the future is going to unfold.

In this context, it also seems worth noting that there might be more overlap with regards to bads than with regards to goods. For illustration, consider the number of macroscopically distinct futures whose net value is extremely negative according to at least 99.5% of all humans. It seems plausible that this number is (much) greater than the number of macroscopically distinct futures whose net value is extremely positive according to at least 99.5% of all humans. In fact, those of us who are more pessimistic about the prospect of wide agreement on values might worry that the latter number is (close to) zero, especially if one doesn’t allow for long periods of moral reflection.

The measurability of happiness and suffering

In my view, there are probably no “objective” units of happiness or suffering. Thus, it can be misleading to talk about the absolute magnitude of N-ratios without specifying the concrete instantiations of bads and goods that were traded against each other.

For more details on the measurability of happiness and suffering (or lack thereof), I recommend the essays “Measuring Happiness and Suffering” and “What Is the Difference Between Weak Negative and Non-Negative Ethical Views?” by Simon Knutsson, especially this section and the description of the views of Brian Tomasik, whose approach I share.


Footnotes

[1] For more considerations along these lines, I especially recommend the section “The value of the future” of the GPI research agenda (pp. 12-14).

[2] The term “exchange rate” is more common.

[3] Views with N-ratios greater than 1:1 have also been referred to as “negative-leaning”. Prominent examples include negative consequentialism and negative(-leaning) utilitarianism. However, the distinction between negative and “traditional” consequentialism is non-obvious, see e.g. What Is the Difference Between Weak Negative and Non-Negative Ethical Views? (Knutsson, 2016).

[4] Of course, if one person has an E-ratio that is unusually low or unusually high, this presents grounds for concern, as it could indicate a bias or lack of updating towards other people's judgment. Diverging N-ratios could also be subject to the same consideration, but because N-ratios concern normative disagreements, it is less clear to what extent updating towards other people's moral intuitions is demanded by epistemic rationality.

[5] Note that, even for a person with a very high E-ratio and a low N-ratio, interventions that primarily reduce extinction risks are not necessarily optimal – they might, for instance, be less cost-effective than interventions that primarily increase the probability of the very best futures (such as certain forms of advocacy).

[6] I’m also ignoring interventions which don’t primarily affect extinction risks or s-risks but e.g. increase the probability of the very best futures.

[7] Particularly the sections “Downside-focused views prioritize s-risk reduction over utopia creation” and “Extinction risk reduction: Unlikely to be positive according to downside-focused views”.

[8] Particularly the involuntary, gratuitous suffering of innocent humans.

[9] See also section 4 of the Stanford Encyclopedia of Philosophy entry on “Well-Being”.

[10] In this context, see also the appendix for further discussion.

[11] It could be argued that many interventions in this area don’t actually increase s-risks because they will only affect how recovery will happen rather than whether it will happen.

[12] However, it does not necessarily follow that this person should actually pursue interventions which primarily reduce s-risks. For example, depending on the specifics of her values and other considerations such as neglectedness, tractability, et cetera, interventions that increase the quality of the long-term future while not primarily affecting s-risks might be even more promising.

[13] Cf. Oesterheld (2017, p. 66): “Because the sample size [of superrationalists] is so small, we may also look at humans in general, under the assumption that the values of superrationalists resemble the values of their native civilization. It may be that the values of superrationalists differ from those of other agents in systematic and predictable ways. General human values may thus yield some useful insights about the values of superrationalists.”

[14] We should not draw overly strong conclusions from surveying current superrationalists because they might be atypical in various ways and thus not representative (cf. Oesterheld, 2017, p.71).

[15] One might retort that superrationalists should always focus on staying around so they can help to actualize the values of superrationalists elsewhere in the multiverse. But assuming the average N-ratio of superrationalists is sufficiently high, the possible upside from ensuring good futures in which superrationalists can actualize goods valued by superrationalists elsewhere in the multiverse is likely smaller than the possible downside from failing to prevent bad futures full of suffering or other forms of disvalue. Of course, one’s ultimate decision also has to be informed by one’s E-ratio and by other considerations such as tractability, neglectedness or one’s comparative advantage.

[16] For example, many people dislike the civilization of “ems” depicted in Hanson’s Age of Em although most ems are happy and presumably more similar to humans than the average alien. Generally, it seems that many humans wish that the eventual descendants of humanity retain a lot of their idiosyncratic values and customs. And given the rather unenthusiastic reactions of many readers to the utopian civilization of the “super happy people” (a race of extraterrestrials depicted in Eliezer Yudkowsky’s short story Three Worlds Collide), it seems not too implausible to conclude that many humans just don’t care much for the existence of alien civilizations elsewhere in the multiverse – however flourishing and utopian from the perspective of their inhabitants. If one further assumes that superrationalists (here and elsewhere in the multiverse) share these sentiments to some degree but also care at least somewhat about the prevention of suffering (even if experienced by aliens), this suggests that alien superrationalists would want us to prioritize avoiding the worst possible futures over ensuring the existence of a utopian (post-)human civilization. In contrast, the less superrationalists discriminate between the well-being of humans and aliens, the less substantive this line of argumentation becomes. Of course, this whole line of reasoning is very speculative to begin with, and should be taken with a (big) grain of salt.

[17] See e.g. the sections “How to test and control for various biases” and “Increasing validity and internal consistency” of the document “Descriptive Ethics – Methodology and Literature Review” for further relevant methodological considerations.

[18] Just as pulling other numerical values (probabilities, cost estimates, etc.) out of thin air and then using these numbers to inform one’s decision is often better than pulling the decision out of thin air (in this context, see e.g. How to Measure Anything by D. Hubbard).

[19] For example, due to random fluctuations in mood or because subjects further reflected on their values.

Comments

Thanks for the interesting post! I just wanted to ask if there are any updates on these research projects? I think work along these lines could be pretty promising. One potential partner for cooperation could be clearerthinking.org. They already have a survey tool for intrinsic values, and this seems to point in a similar direction.

Thanks! David Moss and others from Rethink Priorities have done some excellent work in this area. 

Lucius Caviola, Geoffrey Goodwin, Andreas Mogensen, and I have been working on an academic paper in this area.

One potential partner for cooperation could be clearerthinking.org

Agree, good idea!

[anonymous]

This project seems to be a bit similar to an idea that I have. I start with a population ethical view of variable critical level utilitarianism: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/ So everyone can choose his or her own preferred critical level of utility. Most people seem to aggregate around two values: 1) the totalists prefer a critical level of 0, which corresponds with total utilitarianism (the totalist view), and 2) the personalists or negativists prefer a conditionally maximum critical level (for example the utility of the most preferred state), which is close to negative utilitarianism and the person-affecting view. (I will not go into the conditionality part here.)

When we create new people, they can be either totalists or personalists (or something else, but that seems to be a minority). Or they can be in a morally uncertain, undecided superposition between totalists and personalists, but then we are allowed to choose their critical levels for them. If we make a choice for a situation where a totalist with a positive utility (well-being) is created, that positive utility counts as a benefit or gratitude regarding our choice. If we caused the existence of a personalist (or negativist), we did not create a benefit. And if that personalist complains about our choice because it prefers another situation, we actually harmed that person. Now we have to add up all benefits and harms (all gratitudes and complaints) for everyone who will exist given the choice that we make.

Concerning the far future and existential risks, we need to know how many totalists and personalists there will be in the future. Studying the current distribution of totalists and personalists can give us a good estimate. This might be related to the N-ratios of people: totalists have low N-ratios, personalists/negativists have high N-ratios.

Hi Stijn. You mention that people tend to fall into these two categories mostly - totalist view and person-affecting view. Can you elaborate on how you obtained this impression? Did you already run a survey of some kind, or is the impression based on conversations with people, or from the comments on your blog? Does it reflect the intuitions of primarily EAs, or philosophy students, or the general population?

Interesting, yeah, thx for the pointer!

First, consider the individual level. Imagine a participant answered with “infinite” in twenty dilemmas. Further assume that the average equivalence number of this participant in the remaining ten dilemmas was also extremely high, say, one trillion. Unless this person has an unreasonably high E-ratio (i.e. is unreasonably optimistic about the future), this person should, ceteris paribus, prioritize interventions that reduce s-risks over, say, interventions that primarily reduce risks of extinction but which might also increase s-risks (such as, perhaps, building disaster shelters11); especially so if they learn that most respondents with lower average equivalence numbers do the same.

That's not descriptive ethics though, that's regular moral philosophy.

For the 2nd point, moral compromise on a movement level makes sense but not in any unique way for population ethics. It's no more or less true than it is for other moral issues relevant to cause prioritization.

That's not descriptive ethics though, that's regular moral philosophy.

Fair enough. I was trying to express the following point: One of the advantages of descriptive ethics, especially if done via a well-designed questionnaire/survey, is that participants will engage in some moral reflection/philosophy, potentially illuminating their ethical views and their implications for cause prioritization.

For the 2nd point, moral compromise on a movement level makes sense but not in any unique way for population ethics. It's no more or less true than it is for other moral issues relevant to cause prioritization.

I agree that there are other issues, including moral ones, besides views on population ethics (one’s N-ratios and E-ratios, specifically) that are relevant for cause prioritization. It seems to me, however, that the latter are comparatively important and worth reflecting on, at least for people who have so far spent only a very limited amount of time doing so.

Imagine you could create a new world inhabited by X humans living in a utopian civilization free of involuntary suffering, and where everyone is extremely kind, intelligent, and compassionate. In this world, however, there also exist 100 humans who experience extreme suffering. What’s the smallest value of X for which you would want to create this world?

The 100 suffering humans could, at a stretch, describe a particularly unfortunate group of early Homo sapiens (say, 100,000 years ago) stuck in an inhospitable place and niche. If this new world also contains other species of animals and plants, then even if X is zero, I would think it good to create such a world. The existence of the other species would make it worthwhile.