The following is an excerpt from some comments I wrote to Will MacAskill about a pre-publication draft of What We Owe the Future. It is in response to the chapter on population ethics.

Chapter 8 presented some interesting ideas and did so clearly; I learned a lot from it.

That said, I couldn’t shake the feeling that there was something bizarre about the entire enterprise of trying to rate and rank different worlds and populations. I wonder if the attempt is misguided, and if that’s where some of the paradoxes come from.

When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.

First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds. Second, if I consider realistic, analogous scenarios, there are always major considerations that guide my choices other than an abstract, top-down decision about overall world-values.

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

Similarly, if I were to consider whether I should make the lives of some people worse, in order to make the lives of some less-well-off people better, my first thought is: by what means, and what right do I have to do so? If it were by force or conquest, I would reject the idea, not necessarily because of the end, but because I don’t believe that the ends justify the means.

There seems to be an implicit framework to a lot of this along the lines of: “in order to figure out what to do, we need to first decide which worlds are better than which other worlds, and then we can work towards better worlds or avoiding worse worlds.”

This is fairly abstract, centralized, and top-down. World-states are assigned value without considering: to whom, and for what? The world-states are presumed to be universal, the same for everyone. And it provides no guidance about what means are acceptable to work towards those world-states.

An approach that makes more sense to me is something like: “The goal of ethics is to guide action. But actions are taken by individuals, who are ultimately sovereign entities. Further, they have differing goals and even unique perspectives and preferences. Ethics should help individuals decide what goals they want to pursue, and should give guidance for how they do so, including principles for how they interact with others in society. This can ultimately include concepts of what kind of society and world we want to live in, but these world-level values must be built bottom-up, grounded in the values and preferences of individuals. Ultimately, world-states must be understood as an emergent property of individuals pursuing their own life-courses, rather than something that we can always evaluate top-down.”

I wonder if, in that framework, a lot of the paradoxes in the book would dissolve. (Although perhaps, of course, new ones would be created!) Rather than asking whether a world-state is desirable or not, we would consider the path by which it came about. Was it the result of a population of individuals pursuing good (if not convergent) goals, according to good principles (like honesty and integrity), in the context of good laws and institutions that respect rights and prohibit oppression? If so, then how can anyone say that a different world-state would have been better, especially without explaining how it might have come about?

I’m not sure that this alternate framework is compatible with EA—indeed, it seems perhaps not even compatible with altruism as such. It’s more of an individualist / enlightened-egoism framework, and I admit that it represents my personal biases and background. It also may be full of holes and problems itself—but I hope it’s useful for you to consider it, if only to throw light on some implicit assumptions.

Incidentally, aside from all this, my intuition about the Repugnant Conclusion is that Non-Anti-Egalitarianism is wrong. The very reason that the Conclusion is repugnant is the idea that there’s some nonlinearity to happiness: a single thriving, flourishing life is better than the same amount of happiness spread thin over many lives. But if that’s the case, then it’s wrong to average out a more-happy population with a less-happy population. I suppose this makes me an anti-egalitarian, which is OK with me. (But again, I prefer to analyze this in terms of the path to the outcome and how it relates to the choices and preferences of the individuals involved.)

Comments

It sounds like your story is similar to the one that Bernard Williams would tell.

Williams was in critical dialog with Peter Singer and Derek Parfit for much of his career.

This led to a book: Philosophy as a Humanistic Discipline.

If you're curious:

"When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.

First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds."

You're a Senator. A policy analyst points out that a new proposed tax reform will boost birth rates: good or bad?

You're an advice columnist; people write to you with questions about starting a family. All else equal, do you encourage them?

You're a pastor. A church member asks you: "Are children a blessing?"

You're a redditor. On AITA, someone asks: "Is it wrong to ask my children when they plan on starting a family?"

These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.

  • Senator case: I think social engineering through the tax code is a bad idea, and I wouldn't do it. I would not decide on the tax reform based on its effect on birth rates. (If I had to decide separately whether such effects would be good, I would ask: what is the nature of the extra births? Is the tax reform going to make hospitals and daycare cheaper, or is it going to make contraception and abortion more expensive? Those are very different things.)
  • Advice columnist: I would advise people to start a family if they want kids and can afford them. I might encourage it in general, but only because I think parenting is great, not because I think the world is better with more people in it.
  • Pastor: I would realize that I'm in the wrong profession as an atheist, and quit. Modulo that, this is the same as the advice columnist.
  • Redditor: I don't think people should put pressure on their kids, or anyone else, to have children, because it's a very personal decision.

All of this is about the personal decision of the parents (and whether they can reasonably afford and take care of children). None of it is about general world-states or the abstract/impersonal value of extra people.

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

Would you take into account the wellbeing of the child you are choosing to have? 

For example, if you knew the child was going to have a devastating illness such that they would live a life full of intense suffering, would you take that into account and perhaps rethink having that child (for the child's sake)? If you think this is a relevant consideration you're essentially engaging in population ethics.

I agree with Eric Neyman and ColdButtonIssues here: EA likes finding levers to pull, and it doesn't seem implausible that it could find some pretty large ones around demographics (e.g., increased immigration, policies to accelerate or decelerate the demographic transition in countries which may undergo it, etc.).

Yes, but I don't see why we have to evaluate any of those things on the basis of arguments or thinking like the population ethics thought experiments.

Increased immigration is good because it gives people freedom to improve their lives, increasing their agency.

The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction. If in the future fertility rates rise because people become wealthy enough to make child-rearing less of a burden, that would also be good. In each case people have more information and ability to make choices for themselves and create the life they want. That is what is good, not the number of people or whether the world is better in some impersonal sense with or without them.

Policies to accelerate or decelerate the demographic transition could be good or bad depending on how they operate. If they increase agency, they could be good; if they decrease it, they are bad (e.g., China's “one child” policy; or bans on abortion or contraception).

We don't need the premises or the framework of population ethics to address these questions.

FWIW, to me it does seem that you are using some notion of aggregate welfare across a population when considering these cases, rather than purely deontological reasoning.

I'm not using purely deontological reasoning, that is true. I have issues with deontological ethics as well.

Policies to accelerate or decelerate the demographic transition could be good or bad depending on how they operate. If they increase agency, they could be good; if they decrease it, they are bad (e.g., China's “one child” policy; or bans on abortion or contraception).

Seems underspecified. E.g., not sure how you would judge a ban or nudge against cousin marriage.

The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction.

I've also seen the explanation that as child mortality dwindles, people choose to invest more of their resources into fewer children.

It is hard for me to imagine how you'd derive ethics in the way you describe. I can't imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

For me, that is just describing egoism. Of course many people de facto think about their preferences when making a decision and often give that a lot of weight, but I see ethics as standing outside of that, and the 'altruism' in 'Effective altruism' can be partially explained by EAs generally having a low level of egoism, which leads to a more selfless, generalizable ethic.

I can't imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.

Preferable to whom? Obviously you could think about whether they are preferable to yourself. I'm against the notion that there is such a thing as “preferable” to no one in particular.

Of course many people de facto think about their preferences when making a decision and they often give that a lot of weight, but I see ethics as standing outside of that…

Hmm, I don't. I see egoism as an alternative ethical framework, rather than as non-ethical.

Preferable to whom? Obviously you could think about whether they are preferable to yourself. I'm against the notion that there is such a thing as “preferable” to no one in particular.

Preferable to people in general. I don't think “no one in particular” means no one. When people set speed limits on roads, they are for no one in particular, but it seems reasonable to assume people don't want to die in car accidents and legislate accordingly.

Hmm, I don't. I see egoism as an alternative ethical framework, rather than as non-ethical.

I know that egoism is technically an ethical framework, but I don't see how meaningful rules could ever come out of it that we'd agree we want as a society. It would be hard to even come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self-interest. But I don't know much about egoism, so I am probably missing something here.

I wouldn't say speed limits are for no one in particular; I'd say they are for everyone in general, because they are a case where a preference (not dying in car accidents) is universal. But many preferences are not universal.

I know that egoism is technically an ethical framework, but I don't see how meaningful rules could ever come out of it that we'd agree we want as a society. It would be hard to even come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self-interest.

Thanks… I would like to write more about this sometime. As a starting point, think through in vivid detail what would actually happen to you and your life if you committed murder. Would things go well for you after that? Does it seem like a path to happiness and success in life? Would you advise a friend to do it? If not, then I think you have egoistic reasons against murder.

I think my crux with this argument is "actions are taken by individuals". This is true, strictly speaking; but when e.g. a member of U.S. Congress votes on a bill, they're taking an action on behalf of their constituents, and affecting the whole U.S. (and often world) population. I like to ground morality in questions of a political philosophy flavor, such as: "What is the algorithm that we would like legislators to use to decide which legislation to support?". And as I see it, there's no way around answering questions like this one, when decisions have significant trade-offs in terms of which people benefit.

And often these trade-offs need to deal with population ethics. Imagine, as a simplified example, that China is about to deploy an AI that has a 50% chance of killing everyone and a 50% chance of creating a flourishing future of many lives like the one many longtermists like to imagine. The U.S. is considering deploying its own "conservative" AI, which we're pretty confident is safe, and which will prevent any other AGIs from being built but won't do much else (so humans might be destined for a future that looks like a moderately improved version of the present). Should the U.S. deploy this AI? It seems like we need to grapple with population ethics to answer this question.

(And so I also disagree with "I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds", insofar as you'll have an effect on what we choose, either by voting or more directly than that.)

Maybe you'd dispute that this is a plausible scenario? I think that's a reasonable position, though my example is meant to point at a cluster of scenarios involving AI development. (Abortion policy is a less fanciful example: I think any opinion on the question built on consequentialist grounds needs to either make an empirical claim about counterfactual worlds with different abortion laws, or else wrestle with difficult questions of population ethics.)

“What is the algorithm that we would like legislators to use to decide which legislation to support?”

I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.

Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn't make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival.

Abortion policy is a good example. I don't see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.

Hey, you might enjoy this post ("Population Ethics Without Axiology") I published just two weeks ago – I think it has some similar themes.

What you describe sounds like a more radical version of my post where, in your account, ethics is all about individuals pursuing their personal life goals while being civil towards one another. I think a part of ethics is about that, but those of us who are motivated to dedicate our lives to helping others can still ask "What would it entail to do what's best for others?" – that's where consequentialism ("care morality") comes into play.

I agree with you that ranking populations according to the value they contain, in a sense that's meant to be independent of the preferences of people within that population (how they want the future to go), seems quite strange. Admittedly, I think there's a reason many people, and effective altruists in particular, are interested in coming up with such rankings. Namely, if we're motivated to go beyond "don't be a jerk" and want to dedicate our lives to altruism or even the ambitious goal of "doing the most moral/altruistic thing," we need to form views on welfare tradeoffs and things like whether it's good to bring people into existence who would be grateful to be alive. That said, I think such rankings (at least for population-ethical contexts where the number of people/beings isn't fixed or where it isn't fixed what types of interests/goals a new mind is going to have) always contain a subjective element. I think it's misguided to assume that there's a correct world ranking, an "objective axiology," for population ethics. Even so, individual people may want to form informed views on the matter because there's a sense in which we can't avoid forming opinions on this. (Not forming an opinion just means "anything goes" / "this whole topic isn't important" – which doesn't seem true, either.)

Anyway, I recommend my post for more thoughts!

I think such rankings (at least for population-ethical contexts where the number of people/beings isn't fixed or where it isn't fixed what types of interests/goals a new mind is going to have) always contain a subjective element. I think it's misguided to assume that there's a correct world ranking, an "objective axiology," for population ethics.

Several days per week, I find myself thinking that fully impartial axiology isn't really a thing.

Interesting to hear you're also into this view—thanks for sharing. I'll read your post sometime soon.

Is this an argument against consequentialism or population ethics?

Not sure, maybe both? I am at least somewhat sympathetic to consequentialism, though.
