
[H]uman brains [...] had become such copious and irresponsible generators of suggestions as to what might be done with life, that they made acting for the benefit of future generations seem one of many arbitrary games which might be played by narrow enthusiasts—like poker or polo or the bond market, or the writing of science-fiction novels.

― Kurt Vonnegut, Galápagos (1985)

Outline

In this post, I introduce a framework for thinking about ethics. The framework revolves around the notion of life goals, which capture what matters to people by their own lights.

First, I’ll introduce life goals and justify their relevance. Then, I’ll argue that life goals differ between people.

This post works well as a standalone piece. That said, the preceding sequence posts are helpful in proactively addressing objections to the life-goals framework. Specifically, one criticism is that people’s life goals are beside the point and that we should be thinking about the life goals we ought to have if moral realism is true. I’ve argued against different variants of moral realism in my previous posts.

Life goals

Definitions

I want to introduce a concept (“life goals”) that describes what matters to a person, subjectively, in the fashion endorsed by the person.[1]

Life goal: A terminal objective toward which someone has successfully adopted an optimizing mindset.

An “objective” can be anything someone wants to achieve. Typically, objectives are about affecting the world outside one’s thoughts or conforming to a specific role or ideal.

An objective is “terminal” if someone wants to achieve it for its own sake, not as a means to an end.

By “optimizing mindset,” I mean:

  • Caring about preventing changes to one’s objective (or the intention to pursue it)
  • Caring about the objective with a global scope of action (as opposed to, e.g., caring about it only during work hours and within the constraints of a narrow role or context)
  • Looking out for opportunities to pursue the objective more efficiently, e.g., working on improving one’s skills or remodeling aspects of one’s psychology[2]

In the definition, I use the word “successfully” (“has successfully adopted an optimizing mindset”) because humans are not automatically strategic and because of the possibility of self-deception (Kurzban, 2010; Hanson & Simler, 2018). If someone’s cognition is too prone to self-serving rationalizations, then instead of having a life goal, the better description of their behavior is “optimizing to preserve the belief of caring about some objective, while subconsciously pursuing basic needs.”[3]

Life goals vs. life plans

We can compare life goals to what I’ll call life plans. Life plans are objectives people pursue without an optimizing mindset, and they’re usually instrumental in fulfilling particular human needs. (E.g., the life plan of having a successful career in academia may satisfy someone’s needs for status and intellectual curiosity.) People can achieve impressive feats in the context of their life plans, but the caring happens in a limited or not very strategic fashion, without a global scope of action. (For instance, a doctor may care a great deal about saving lives within their role at a hospital but not engage in long-term planning to optimize for saving lives throughout their career or situations outside their hospital role, such as donating to global poverty reduction.) Alternatively, people may care about their life plans superficially (e.g., primarily liking the thought of caring about the objective in question, or being seen as someone who does, but falling short of genuinely trying).

When someone changes their life plans, it is easier to interpret this change as a purposeful progression in their personal-development journey. By contrast, changes to people’s life goals likely leave ruptures in their biography and self-understanding (their identity was tied up in the life goal). For instance, later on, people may describe these periods with statements like, “Thinking back, my past self feels like a different person.”[4]

Replacing an old life goal with a new one is possible (i.e., it does happen), but only in ways that constitute a failure at goal preservation from the previous goal’s perspective.

For an objective to become a life goal, it takes two things:

First, the person needs to understand (at least on an intuitive, implicit level) what’s entailed by “being strategic in one’s thinking about objectives.” This type of mindset is conveyed (in a theoretical, explicit way) in Eliezer Yudkowsky’s LessWrong sequences,[5] but people can also understand it intuitively and implicitly (“street smarts over book smarts”).

Second, the person has to decide that the objective is worth orienting their life around – such that we can no longer view it as merely instrumental in satisfying needs. (This may not feel like a “decision” in the typical sense. Instead, someone may feel unable to contemplate choosing/acting differently.)

For instance, on the most idealistic interpretations of marriage (“marriage as a life-long bond”), marrying aims to turn a less committed relationship into a joint life goal.[6] Also, many people have adopted some form of altruism or “doing good” as their life goal. (It’s possible to have more than one life goal.)

Psychological implementation

The difference between life goals and life plans isn’t clear-cut because we implement both via the same psychological mechanisms. Life goals are held more strongly than life plans, but even the most dedicated pursuers may abandon their life goals under sufficiently adverse circumstances.[7]

One of many takeaways I got from reading Kaj Sotala’s multi-agent models of mind sequence (as well as comments by him) is that we can model people as pursuers of deep-seated needs. In particular, we have subsystems (or “subagents”) in our minds devoted to various needs-meeting strategies. The subsystems contribute behavioral strategies and responses to help maneuver us toward states where our brain predicts our needs will be satisfied. We can view many of our beliefs, emotional reactions, and even our self-concept/identity as part of this set of strategies. Like life plans, life goals are “merely” components of people’s needs-meeting machinery.[8]
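
For readers who like computational caricatures, here is a minimal Python sketch of this needs-meeting picture. It is purely illustrative and not Sotala’s actual model – the subagents, the random “prediction” stub, and the additive voting rule are all hypothetical simplifications – but it shows the basic structure: subsystems steering behavior toward whichever state is predicted to best satisfy their needs.

    import random
    from dataclasses import dataclass

    @dataclass
    class Subagent:
        """A toy subsystem devoted to one needs-meeting strategy."""
        need: str

        def predicted_satisfaction(self, action: str) -> float:
            # Hypothetical stand-in for the brain's prediction of how well
            # this action would satisfy this subagent's need.
            return random.random()

    def choose_action(subagents: list, actions: list) -> str:
        # Each subagent "votes" with its predicted need satisfaction;
        # behavior is steered toward the state predicted to meet needs best.
        return max(actions, key=lambda a: sum(s.predicted_satisfaction(a) for s in subagents))

    agents = [Subagent("status"), Subagent("curiosity"), Subagent("comfort")]
    print(choose_action(agents, ["pursue career", "stay home", "go skiing"]))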

Still, as far as components of needs-meeting machinery go, life goals are pretty unusual. Having a life goal means caring about an objective enough to (do one’s best to) disentangle success on it from the reasons we adopted the objective in the first place. The objective takes on a life of its own, and the two aims (meeting one’s needs vs. progressing toward the objective) come apart. Having a life goal means having a particular kind of mental organization so that “we” – particularly the rational, planning parts of our brain – come to identify with the goal more than with our human needs.[9]

To form a life goal, an objective needs to resonate with someone’s self-concept and activate (or get tied to) mental concepts like instrumental rationality and consequentialism. Some life goals may appeal to a person’s systematizing tendencies and intuitions for consistency. Scrupulosity or sacredness intuitions may also play a role, overriding the felt sense that other drives or desires (objectives other than the life goal) are of comparable importance.

Whether someone forms a life goal may also depend on whether the life-goal identity is reinforced (at least initially) around the time of first adoption, or when the person first contemplates what it would be like to adopt the life goal. If assuming a given identity were instantly detrimental to our needs, we’d be less likely to power up the mental machinery that makes it stable / protects it from goal drift.

Types of life goals

Among life goals, we can make helpful distinctions, such as the following:

  • Self-oriented vs. other-regarding. Self-oriented life goals concern objectives such as optimizing one’s well-being or achievements. By contrast, other-regarding life goals are about doing things for others.
    • Morality-inspired life goals: Of particular interest among other-regarding life goals are life goals inspired by morality. Building on the motivation to act morally, they are about doing what’s good for others from an “impartial point of view.”
  • Directly specified vs. indirectly specified. Directly specified life goals target an objective we already understand, whereas indirectly specified life goals give a recipe for figuring out the objective. For instance, an indirectly specified life goal might be “Act by what you’d conclude after thinking at length (under circumstances ideal for philosophical reflection) about your goals.” Indirectly specified life goals are placeholders; they are instrumental strategies to approximate what one believes to be a “more informed and authoritative” life-goal objective than anything one could come up with on the spot. (See the toy sketch after this list.)
    • Deferring to moral reflection: Of particular interest among indirectly specified life goals is deferring to moral reflection. (See my next post on this topic.)[10]
  • Outcome-focused vs. trajectory-based. Having an outcome-focused life goal means caring about maximizing desired (or minimizing undesired) “outcomes” (measured in, e.g., days of happiness or suffering). Life goals don’t have to be outcome-focused. I’m introducing the term “trajectory-based life goals” for an alternative way of deeply caring. Their defining feature is that they are (at least partly) about the journey (“trajectory”).[11]
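
To make the “directly specified vs. indirectly specified” distinction concrete, here is a minimal Python sketch (purely illustrative; the names and the stand-in “reflection procedure” are hypothetical). A directly specified life goal fixes its objective up front, whereas an indirectly specified one stores only a recipe and yields its objective once the reflection procedure is actually run:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class LifeGoal:
        description: str

    # Directly specified: the objective is fixed ("locked in") up front.
    direct_goal = LifeGoal("maximize the number of my happy days")

    def reflect_under_ideal_conditions() -> LifeGoal:
        # Hypothetical stand-in for "thinking at length, under circumstances
        # ideal for philosophical reflection, about one's goals." In reality,
        # this procedure is itself under-defined (see the sections on moral
        # reflection below).
        return LifeGoal("whatever my idealized values turn out to endorse")

    @dataclass
    class IndirectLifeGoal:
        # Indirectly specified: a placeholder storing a recipe, not an objective.
        reflection_procedure: Callable[[], LifeGoal]

        def resolve(self) -> LifeGoal:
            return self.reflection_procedure()

    indirect_goal = IndirectLifeGoal(reflect_under_ideal_conditions)
    print(indirect_goal.resolve().description)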

Trajectory-based life goals

(Disclaimer: I’m not entirely satisfied with my account of trajectory-based life goals. I think I’m gesturing in a meaningful direction, but I don’t feel like I’ve yet managed to distill the idea particularly well.)

Adopting an optimizing mindset toward outcomes inevitably leads to a kind of instrumentalization of everything “near term.” For example, suppose your life goal is to maximize the number of your happy days. The rational way to go about your life probably implies treating the next decades as “instrumental only.” To a first approximation, the only thing that matters is optimizing the chances of obtaining indefinite life extension (potentially leading to more happy days). Through adopting an outcome-focused optimizing mindset, seemingly self-oriented concerns such as wanting to maximize the number of happy moments turn into an almost “other-regarding” endeavor. After all, only one’s far-away future selves get to enjoy the benefits – which can feel essentially like living for someone else.[12]

Trajectory-based life goals provide an alternative (at least, they constitute my attempt at describing one). With trajectory-based life goals, the optimizing mindset targets maintaining a state from which we derive personal meaning. Perhaps that state could be described as character cultivation – adhering to a particular role or ideal.

For example, the Greek hero Achilles arguably had “being the bravest warrior” as a trajectory-based life goal. Instead of explicitly planning which type of fighting he should engage in to shape his legacy, Achilles would jump into any battle without hesitation. If he had an outcome-focused optimizing mindset, that behavior wouldn’t make sense. To optimize the chances of acquiring fame, Achilles would have to be reasonably confident of surviving enough battles to make a name for himself. While there’s something to be gained from taking extraordinary risks, he’d at least want to think about it for a minute or two! However, suppose we model Achilles as having in his mind an image or role model of “the bravest warrior” whose conduct he’s trying to approximate. In that case, it becomes clear why “contemplate whether a given fight is worth the risk” isn’t something he’d do – it’s not part of the “bravest warrior” archetype.

Other examples of trajectory-based life goals include being a good partner or a good parent. While these contain outcome-focused elements like proactively benefiting particular individuals, we cannot express them in terms of scoring the maximum number of points on some outcome metric. Instead, it’s more about imagining a judge of character (or a judge for one’s role) and then wanting to act in ways that the evaluator approves of, day-to-day.[13] So the evaluator may look at outcome metrics, but they also evaluate reasoning procedures and character attributes. (And maybe the options for judgment are closer to “pass vs. fail” than to a continuous scale?)

Presumably, we wouldn’t want to say that someone holds a trajectory-based life goal if the ideal they try their best to fulfill is “a lazy person with akrasia.” To qualify as a life goal, the ideal behind the goal needs to be sufficiently ambitious. So, perhaps the role model must care about real-world objectives (objectives outside the role model’s thoughts).

Indirectly specified life goals

We can view indirectly specified life goals as placeholders. For instance, as an indirectly specified life goal, “deferring to moral reflection” isn’t itself a terminal value, at least not in the typical sense. Instead, people who defer to moral reflection mean to value it instrumentally, as a strategy for getting at their “idealized values.” (“Idealized” in the sense of being better than what we could come up with on the spot, in virtue of having undergone a suitable moral reflection procedure.)

However, there’s a sense in which “deferring to moral reflection” acts as a terminal value. Namely, someone who values moral reflection, as opposed to having a directly specified life goal, “locks in” a particular way of reasoning about their values. They approach thinking about their goals in the sense of what Joe Carlsmith has called idealizing subjectivism – the view that one’s better-informed self has authority over what to value. Arguably, this metaethical commitment is not just an empirical belief but a normative stance in itself. Deferring fully to moral reflection involves always acting as though values are there for us to discover, not something we have to (also) create.

The idea of “discovery” is the following. “Idealized values” might exist in a fixed, well-specified sense, in the features of the option space that are salient to me (“the space of all possible life goals,” or even “the space of all approaches for thinking about one’s values – the life-goals framework being one of them”). In other words, the idea is that if we had a clear grasp of the option space (and made sure to avoid mistakes or biases), we could discover these “idealized values” – they would stand out as that which we’d feel comfortable choosing, that which feels right.[14]

However, note that we can’t know in advance how the option space will appear to us inside whatever suitable moral reflection procedure we have in mind. It is perfectly conceivable that we’ll continue to feel as though there are a lot of options for values to potentially adopt, and that none of those options will clearly “stand out.” In that case, someone’s reflection outcome (their “idealized values”) would remain under-defined and somewhat arbitrary – subtle differences in the initial conditions could change the result.

In my next post, The “Moral Uncertainty” Rabbit Hole, Fully Excavated, I discuss these possibilities at greater length and note that it isn’t necessarily bad if one’s idealized values turn out to be under-defined. Still, knowing about the possibility of an under-defined reflection outcome could change the way someone approaches moral reasoning.

Life goals ≠ value lock-ins

Effective altruist discourse regards well-specified and fixed objectives (“value lock-ins”) as potentially problematic.

It is important to note that life goals don’t necessarily equate to value lock-ins, at least not in the typical sense. Only directly specified life goals correspond to value lock-ins – they are “fixed” in the strong sense of the word. Indirectly specified life goals are “fixed” only in a weaker sense (they include a well-specified recipe/methodology/reflection procedure for constructing the objective). The weaker sense in which they, too, are “fixed” is unavoidable. If we refuse to “lock in” anything, this doesn’t somehow give us more option value and objectivity. Instead, it would render our aims (or our reasoning about aims) under-defined.

To summarize, the stance “value lock-ins are problematic” only pushes against adopting directly specified life goals; it doesn’t discourage indirectly specified ones, such as valuing moral reflection. In the context of this post, I’m only presenting a framework. I’m not yet saying anything about the merits or drawbacks of directly specified vs. indirectly specified life goals.[15]

Ill-inspired life goals

We can conceive of clearly ill-inspired life goals – for instance, life goals held because of wrong beliefs or flawed philosophical arguments. In some situations, we might want to say that a life goal is ill-inspired, but it might be hard to draw the lines (on what's probably a continuum) and explain precisely why.

Consider the case of a person with agoraphobia who adopted a life goal centered around spending happy moments in the proximity of her house. (The life goal in this example doesn’t have to be expressed in these oddly specific terms, but it could have those particular practical implications.) We may want to say that this person appears (quite literally) “stuck” in an unhealthily limiting identity, which they adopted as a coping strategy, and not because it is what they truly want. (Their life goal looks suspiciously like an instrumental rather than a terminal objective.) However, explaining what it means precisely for someone’s identity to be “unhealthily limiting” seems tricky. Is the Bride’s identity in the movie Kill Bill – her obsession with killing her ex, Bill (stylistically, insisting on open combat with ancient weapons) – “unhealthily limiting?” Arranging one’s life around revenge certainly seems extreme and unhealthy, so maybe the answer is “yes.” Still, Bill had betrayed her, shot her in the face, and killed her loved ones!

Ultimately, the difference between instrumental and terminal objectives can depend on how we view them. For instance, a personal hedonist would say they seek a meaningful relationship because of its happiness benefits. In contrast, a non-hedonist could see themselves as caring terminally about the specific relationship.[16]

What about the identity of people who are unbothered by the possibility of dying a natural death sometime this century? Is their identity “unhealthily limiting,” and should they be pursuing extreme longevity instead?

Whether to respect specific life goals less than maximally – for the sake of the person behind the goal (not enabling self-limiting identities or ill-inspired pursuits and unhealthy obsessions) – makes for an important topic to reflect on. I tend to lean heavily toward always respecting people’s life goals. Still, suppose someone hasn’t thoroughly considered their alternatives and couldn’t pass anything close to an Ideological Turing Test on why others may caution against their views. In that case, I find myself questioning whether to take a person’s life goals at face value, especially when the particular goal causes distress to the person or prevents them (and others) from flourishing.[17]

From a societal point of view, it also pays off to discourage the adoption of socially unusual or extreme life goals and to encourage valuing moral reflection instead. Goal uniformity and overlap between life goals facilitate better societal coordination and larger gains from trade.

Why have life goals?

Life goals can benefit the life satisfaction of the bearer, but they add responsibility, introduce the possibility of failure, and make things more stressful day-to-day.

Outcome-focused life goals – which natural selection didn’t shape human minds to pursue – may alienate people from their needs and desires.[18] On a societal level, the optimizing mindset behind them can raise the stakes and make it more challenging to find and attain gains from trade in cases where life goals differ between people. Outcome-focused life goals can even lead to extremism when people consider themselves justified in overriding others’ life goals.[19]

On the positive side, life goals create meaning. They add depth on top of all the shallowness that otherwise makes up human behavior. Without people with life goals, the world would roll in the direction specified by crude incentives – there’d be no actual “steering.” Morality-inspired life goals, in particular, add something inspiring and noble. (For further discussion, see this post by my friend David Althaus.)

Why life goals differ between people

If life goals were the same across all people, it would mean that individual differences in people’s psychology, their formative experiences, or the social connections they have built couldn’t matter for how best to lead one’s life. This seems wrong. Intuitively, these things should matter. We are not only “type: human,” but also our lived experiences (and genetics).[20]

Also, when we consider how people form life goals, it is no surprise that we find a diverse range of them. After all, life plans differ significantly between people, which illustrates that people find meaning and motivation in different places. Life goals are too similar to life plans for there to be a fundamental difference here. (See also the subsection “Psychological implementation” above.)

Planning mode

To elaborate on the last point from the paragraph above (that life plans and life goals are formed the same way), I’ll now present an introspection-based account of how people make time-allocation decisions of various degrees of significance. I call it “planning mode,” and I’ll eventually argue that life goals, too, are a product of it.

Activity planning

When I engage in activity planning, I consider only a limited set of options – the ones that happen to be salient to me. In deciding, I may go through the pros and cons of these activities, but there’s no official list of criteria or objective scoring board. Instead, I make my decisions based on criteria I make up on the spot, or I decide in an intuition-driven, holistic way.

Suppose it’s winter, and I want to decide between spending the weekend skiing vs. spending it cozily at home. To decide, I’d visualize two possible futures. I’d try to imagine how I’d feel at various times and how satisfied I’d expect to be with each choice overall. For the option “stay at home,” I might envision the warmth indoors and how carefree and easy the weekend would be. I’d also weigh the fear of missing out – the worry that I’m not active enough. For the option “go skiing,” I’d picture all the fun on the slopes, but also the prospect of being cold and waiting in queues (and perhaps the awful smell of sweat and cheese in overpacked restaurants). Lastly, I’d try to factor in that the skiing trip might give me lasting happy memories or contribute to my identity as someone who leads a non-boring life.

There’s a normative component to something as mundane as choosing leisure activities. In the weekend example, I’m not just trying to answer empirical questions like “Which activity would contain fewer seconds of suffering (or more of happiness)?” or “Which activity would provide me with lasting happy memories?” I probably already know the answer to those questions. What’s difficult about deciding is that some of my internal motivations conflict. For example, is it more important to be comfortable, or do I want to lead an active life? When I make up my mind in these dilemma situations, I tend to reframe my options until the decision seems straightforward. I know I’ve found the right decision when there’s no lingering fear that the currently favored option isn’t truly mine – no fear that I’m caving to social pressures or acting (too much) out of akrasia, impulsivity, or some other perceived weakness of character.[21]

We tend to have a lot of freedom in how we frame our decision options. We use this freedom, this reframing capacity, to become comfortable with the choices we are about to make. In case skiing wins out, then “warm and cozy” becomes “lazy and boring,” and “cold and tired” becomes “an opportunity to train resilience / apply Stoicism.” This reframing ability is a double-edged sword: it enables rationalizing, but it also allows us to stick to our beliefs and values when we’re facing temptations and other difficulties.

Whether a given motivational pull – such as the need for adventure, or (e.g.,) the desire to have children – is a bias or a fundamental value is not set in stone; it depends on our other motivational pulls and the overarching self-concept we’ve formed.

Career planning

Deciding on one’s career arguably works similarly to mundane activity planning for the weekend, though the decision has a much larger scope. Consider lifestyle/career choices like the following:

  • Earning an adequate living with as few work hours as possible to maximally enjoy hobbies or quality time with one’s family
  • Focusing on maximizing earnings to (eventually) afford a luxurious lifestyle (perhaps deriving more satisfaction from having cool things than from actually making use of them)
  • Pursuing a career focused on personal satisfaction: e.g., researching mysteries of the cosmos (or animal kingdom), writing a novel, or working at a charity with an inspiring mission
  • Pursuing a career focused on some perceived moral good (e.g., effective altruism)

Which career path people end up pursuing also depends on which options they make salient to themselves. (And on the options that are open to them – sadly, many people don’t have the luxury even to consider lots of options. If you’re struggling to make ends meet, it’s extremely costly to invest in the self-actualization stage in Maslow’s needs hierarchy.) Besides, people will choose different options based on differences in their psychology, life situation (e.g., whether they have a family or are in a relationship), etc. Just consider the differences in character traits and general outlooks between people attracted to the following careers: Novelist, social worker, Navy SEAL.

All of the above seems obvious enough: there’s no “objectively correct career or lifestyle.” Only once we determine the evaluation criteria can we reason objectively about careers. (E.g., 80,000 Hours seeks to research and objectively reason about which careers have the most impact, according to some specified impact metric and general philosophy.)

Visualizing the future with one life goal vs. another

Lastly, we also use “planning mode” to choose between life goals. A life goal is a part of our identity – just like one’s career or lifestyle (but it’s even more serious).

We can frame choosing between life goals as choosing between “My future with life goal A” and “My future with life goal B” (or “My future without a life goal”). (Note how this is relevantly similar to “My future on career path A” and “My future on career path B.”)

Consider morality-inspired life goals. For moral reflection to move from an abstract hobby to something that guides us, we have to move beyond contemplating how strangers should behave in thought experiments. At some point, we also have to envision ourselves adopting an identity of “wanting to do good.”

It’s important to note that choosing a life goal doesn’t necessarily mean that we predict ourselves to have the highest life satisfaction (let alone the highest moment-to-moment well-being) with that life goal in the future. Instead, it means that we feel most satisfied about the particular decision (to adopt the life goal) in the present, when we commit to the given plan while thinking about our future. Life goals inspired by moral considerations (e.g., altruism inspired by Peter Singer’s drowning child argument) are appealing despite their demandingness – they can provide a sense of purpose and responsibility.

Under-defined attractors

People often base their goals on categories like “good from a self-oriented point of view” or “altruism/doing good impartially.” (For instance, figuring out how to operationalize these ideas could be a goal behind someone’s interest in reflecting further on their values.) However, I’ll argue that within any such attractor concept, there are multiple interpretations for us to potentially adopt. In other words, those notions are under-defined.

Under-definedness doesn’t mean that there are no wrong answers (clearly, “altruism/doing good impartially” has little to do with sorting pebbles or putting cheese on the moon). Instead, it means that there’s more than one “defensibly correct” answer.[22]

“Good from a self-oriented point of view” is under-defined

In the narrow contexts of everyday life (people living a natural lifespan on earth, pursuing conventional careers and lifestyles), it hardly comes up that “good from a self-oriented point of view” is under-defined. However, in contexts relevant to longtermism and effective altruism, the under-definedness becomes more salient.[23]

For illustration, consider the following choices:

  • Should I optimize my life to increase the chance of surviving for a billion years (at the cost of near-term conveniences)? Or should I make the most of enjoying my life as it goes, not being too concerned about the prospect of dying this century?
  • Should I enter the experience machine (e.g., in the thought experiment described here) or stay outside?

When making up our minds on choices like those above, it’s not enough to go through abstract thought experiments. To adopt life goals in practice (including taking action on their implications), we also have to envision what it would be like to live our lives with one goal vs. another. In the same way different people feel the most satisfied with different lifestyles or careers, people’s intuitions may differ concerning how they’d feel with the type of identity (or mindset) implied by a given life goal.

Using two examples from above, here's how one could reason:[24]

  • For the objective “valuing longevity,” it’s worth noting how life-altering it would be to adopt the corresponding optimizing mindset. Instead of trusting your gut about how well life is going, you’d have to regularly remind yourself that perceived happiness over the next decades is entirely irrelevant in the grand scheme of things. What matters most is that you do your best to optimize your probability of survival. People with naturally high degrees of foresight and agency (or those with somewhat of a “prepper mentality”) may actively enjoy that type of mindset – even though it conflicts with common sense notions of living a fulfilled life. By contrast, the people who are happiest when they enjoy their lives moment-by-moment may find the future-focused optimizing mindset off-putting.
  • For the objective “personal hedonism,” it could be worth contemplating how a hedonist identity relates to one’s thinking about interpersonal relationships. It implies that any relationships the person currently has are placeholders (“until something better comes along”). That conclusion may be fine for many, but it isn’t for everyone.

Earlier on, I wrote the following about how we choose leisure activities:

[...] [W]e tend to have a lot of freedom in how we frame our decision options. We use this freedom, this reframing capacity, to become comfortable with the choices we are about to make. In case skiing wins out, then “warm and cozy” becomes “lazy and boring,” and “cold and tired” becomes “an opportunity to train resilience / apply Stoicism.” This reframing ability is a double-edged sword: it enables rationalizing, but it also allows us to stick to our beliefs and values when we’re facing temptations and other difficulties.

The same applies to how we choose self-oriented life goals. On one side, there’s the appeal of the potential life-goal objective (e.g., “how good it would be to live forever” or “how meaningful it would be to have children”). On the other side, there are all the ways the corresponding optimizing mindset would make our lives more complicated and demanding. Human psychology seems somewhat dynamic here because the reflective equilibrium can end up on opposite sides depending on each side’s momentum. Option one: by committing to the life goal in question, “complicated and demanding” can become “difficult but meaningful.” Alternatively, there’s option two: by deciding that we don’t care about the particular life-goal objective, we can focus on how much we value the freedom that comes with letting it go. In turn, that freedom can become part of our terminal values. (For example, adopting a Buddhist/Epicurean stance toward personal death can feel liberating, and the same goes for some other major life choices, such as not wanting children.)

“Altruism/doing good impartially” is under-defined

Peter Railton (1986) gave what I consider the best argument for why there might be an unambiguously correct way to systematize “altruism/doing good impartially.” In his paper “Moral Realism,” Railton starts with a concession: he concedes that “goodness” for a given person is subjective – it’s up to each individual. Then, Railton points out that we can think of “the moral point of view,” or “doing good impartially,” as an extension of subjective goodness. According to Railton, morality asks us to impartially consider the aggregate of all that’s subjectively good for individuals.

In this sequence’s first post, I explained why I consider Railton’s moral realism “too weak to qualify.” I objected that what Railton calls “a moral point of view that is impartial” has under-defined implications. I’ll now expand on that objection.

Since “good from a self-oriented point of view” is under-defined, and “altruism/doing good impartially” consists of aggregated self-oriented goodness, it’s easy to see why “altruism/doing good impartially” is under-defined as well. In other words, because people subjectively value different types of lives, there’s no uniquely correct way to make the world good for everyone.

Railton (or someone with a similar view) could give one possible reply here. He could reply that it doesn’t matter if self-oriented goodness consists of “different types of lives.” Instead, what matters is that people get what they value. Taking this route, Railton could advocate what I’d call “preference utilitarianism as moral realism.” Preference utilitarianism can incorporate the idea that subjective goodness (here: fulfilled preferences) looks different from person to person.[25]

Preference utilitarianism is a highly useful and arguably underrated[26] framework. Still, I see several reasons why it doesn’t provide a well-specified account of “altruism/doing good impartially”:

1. “What counts as a preference” is under-defined

2. Preference utilitarianism doesn’t touch on population ethics

3. Preference utilitarianism is arguably about cooperation/coordination, not care/altruism

1. “What counts as a preference” is under-defined

Because humans aren’t unified agents, the sense in which we have “preferences” will be open to interpretation. A well-specified account of preference utilitarianism has to state how we can determine or weigh different types of preferences (e.g., for humans, it has to say something about what to count as someone’s preference if they don’t have a well-defined life goal).[27] While it is conceivable, in theory, that philosophically sophisticated reasoners under ideal reasoning conditions would come to agree on what constitutes a “preference,” we have no particular reason to expect such convergence.

2. Preference utilitarianism doesn’t touch on population ethics

Preference utilitarianism follows from the principle of “equal consideration of interests” (Singer, 1993 [1979]). It is the obvious solution for altruistically dealing with a fixed set of individuals with fixed preferences. When it comes to population ethics, however, preference utilitarianism appears under-defined.[28] Philosophers have advanced different population-ethically “complete” versions of preference utilitarianism.[29]

There’s no consensus on the right approach. Moreover, there’s no reason to expect there to be a “right” approach. Instead, it seems that “equal consideration of interests” leaves under-defined how to treat cases of varying population size or cases where it’s up to existing people/minds which “preference bundles” to bring into existence (e.g., is it permissible to create minds that prefer constant misery over never having been born?).

3. Preference utilitarianism is arguably about cooperation/coordination, not care/altruism

Self-oriented life goals describe what’s good for an individual. If everyone’s life goals were self-oriented, fulfilling life goals would be equivalent to helping people flourish in the ways they prefer for themselves. However, because people can have other-regarding life goals (e.g., consider effective altruism as someone’s sole life goal), we can’t interpret “fulfilling someone’s preferences (or life goals)” as “helping that person flourish.” Therefore, I’d find it slightly strained to interpret “preference utilitarianism” as “altruism/doing good impartially.”

I’m not saying that adopting preference utilitarianism as a morality-inspired life goal is indefensible. It certainly seems like “altruism/doing good impartially” of sorts. However, I find there to be another defensible interpretation.

I think of preference utilitarianism as an answer (or “answer attempt”) to problems of cooperation/coordination among people with different life goals.[30] On this interpretation, preference utilitarianism resembles contractualism more than its utilitarian cousins, such as classical hedonistic utilitarianism or negative utilitarianism. (Contractualism also seems like a proposal for within-group coordination, but – depending on the formulation – it may place weaker demands on people than preference utilitarianism.)

To summarize, “preference utilitarianism as moral realism” seems unpersuasive to me because a person’s preferences may not say much about what benefits that person in terms of well-being. (And for my understanding of “morality,” it’s essential that a “moral view” tells me how to benefit morally relevant others.)

“Moral reflection” is under-defined

Lastly, even the notion of moral reflection is under-defined. We have to determine what counts as a suitable reflection procedure and what sort of answers we’d accept as a legitimate reflection outcome (see my next and final post). As I will argue, differences in the initial moral-reflection setup may change where a person’s views end up.[31] More fundamentally, “Bob’s moral reflection” could end up in a completely different place than Alice’s, even if Bob and Alice favor the same reflection procedure. Because the target (something like systematizing “altruism/doing good impartially”) is under-defined,[32] Alice and Bob won’t be answering the same question. Both will have to make judgment calls that rely on personal intuitions: Alice will accept solutions that meet her evaluation criteria, while Bob will look for ones that meet his.

Justifying the life-goals framework

I have given descriptive accounts of how people reason about personal goals. Someone may ask, “Where are the normative arguments in this post?”

In short, my reply is that “what’s good for someone” has to appeal to a person by their own lights. In previous posts, I’ve rejected other ways morality could function, so this (life goals) is what we’re left with, in my view.

There aren’t any direct normative arguments in this post. Instead, I try to outline what sort of considerations might be relevant to people when they think about what matters to them.

I subscribe to the Wittgensteinian view of philosophy (summarized in the Stanford Encyclopedia of Philosophy):

[...] that philosophers do not—or should not—supply a theory, neither do they provide explanations. “Philosophy just puts everything before us, and neither explains nor deduces anything. Since everything lies open to view there is nothing to explain (PI 126).”

From this perspective, I see the aim of moral philosophy as describing our option space accurately and usefully – the different questions worth asking and how we can reason about them.

I favor the life-goal framework because it is relevant, complete, and clear.

  • Relevant: Life goals matter to us by definition.
  • Complete: The life-goals framework allows us to ask any (prescriptive)[33] ethics-related questions we might be interested in, as long as these questions are clear/meaningful. (In the appendix, I’ll sketch what this could look like for a broad range of questions. Of course, there’s a class of questions that don’t fit into the framework. As I have argued in previous posts, questions about irreducible normativity don’t seem meaningful.)
  • Clear: The life-goals framework doesn’t contain confused terminology. Some features may still be vague or left under-defined, but the questions and thoughts we can express within the framework are (so I hope) intelligible.

I expect most of the objections against the life-goals framework to be of the form “The framework isn’t complete because it fails to capture [insert: some facet of moral realism].”

I expect other objections against the life-goals framework to be of the form “Your description misses [insert: some appeal or consideration that singles out a specific subset of life goals].”

Usually, my response to such objections will be “That looks like a valuable addition to the list of relevant considerations! Still, it seems to me that your point can be incorporated into the life-goals framework. It sounds like you’re saying that, when we reason about our life goals, it makes sense to pay attention to considerations x and y. To the degree that people find these considerations appealing, they will do as you suggest. When we consult people about moral reasoning, we should mention these points so people are aware.”

Please see my previous posts and my next (and final) post for further arguments against such objections.

Summary

I hope to have convinced the reader that the life-goals framework is useful, complete (in terms of what it allows us to ask and think about, not in terms of the limited set of issues I’ve discussed in this post), and clear. The way I intend it, the framework enables us to make progress on moral matters by replacing under-defined questions and concepts with an illuminated option space.

Summary:

  • Life goals are defined to be what matters most to someone – “a terminal objective toward which someone has successfully adopted an optimizing mindset.”
  • Having life goals is optional; people without them only have life plans.
    • Life goals can be psychologically demanding.
    • On the plus side, they add meaning and depth to our lives. In their absence, people will act in ways that primarily benefit their short-term needs satisfaction.
  • Both life goals and life plans are objectives people adopt as part of some needs-meeting strategy. Life goals become disentangled from that strategy when the part of ourselves responsible for rational planning comes to identify more with the long-term goal than with the underlying needs-meeting machinery.
  • Adopting life goals isn’t particularly natural for us (they require high degrees of commitment, foresight, or strategic thinking).
  • Under sufficiently adverse circumstances, even the most ardent pursuers of life goals may suffer from failures of goal preservation.
  • Life goals come in different types and can differ on various dimensions, such as:
    • self-oriented vs. other-regarding
    • directly specified vs. indirectly specified
    • outcome-focused vs. trajectory-based
  • Morality-inspired life goals are particularly relevant to effective altruism (as a subtype of other-regarding life goals). Relatedly, indirectly specified life goals about deferring to moral reflection incorporate metaethical uncertainty and the desire to better understand the moral option space (see the detailed discussion in my next post).
  • Life goals are not the same as value lock-ins; in particular, people can value further reflection on their goals’ explicit contents (deferring authority to their “idealized values”).
  • Just as life plans (obviously) differ between people, so do life goals.
  • We arguably choose life goals the same way we choose lifestyles or careers: we contemplate possible futures side-by-side in “planning mode” and decide by adjusting our framings and evaluative criteria on the go, seeking reflective equilibrium.
  • Life goals may fall into certain attractors (“self-oriented goodness,” “altruism/doing good impartially,” etc.), but these attractor concepts tend to be under-defined.
  • Under-definedness doesn’t mean “anything goes” – instead, it means we have to make judgment calls to specify meanings.
  • I consider the life-goals framework superior to the standard ways philosophers reason about morality because it’s relevant, complete, and clear.
    • See this post’s appendix for how I’d use the life-goals framework to reason about concepts in standard moral philosophy while still adopting/incorporating as much as possible from the array of arguments and insights in the normative ethics tradition.
  • Life goals don’t have to be outcome-focused; instead, it’s possible to pursue trajectory-based life goals (example: flourishing and carrying responsibility in one’s interpersonal relationships).
  • People can adopt ill-inspired life goals.
  • There are cases where it’s tricky to draw the boundaries between ill-inspired life goals and merely unusual ones.

Appendix: Ethical reasoning with the life-goals framework

In this appendix, I want to illustrate with a rough sketch how (and to what degree) we can translate standard concepts and insights from moral philosophy into the life-goals framework.

Standard moral philosophy concept:

Moral realism based on irreducible normativity

Translation into the life-goals framework:

Expressing irreducibly normative concepts within the life-goals framework is impossible. In previous posts (here and here), I explained why I think this doesn’t count against the framework.

We can, however, formulate a wager for acting according to irreducible normativity. See my post, Why the Irreducible Normativity Wager (Mostly) Fails, which argues that the wager only works for individuals who have “pursue irreducible normativity” locked in as their life goal. (In the subsequent post, Metaethical Fanaticism, I describe why I caution against this stance.)

***

Standard moral philosophy concept:

Naturalist moral realism

Translation into the life-goals framework:

In the post What Is Moral Realism? I explained how some descriptions of naturalist moral realism don’t have action-guiding implications for effective altruists. Therefore, I don’t count all of them as moral realism worthy of the name.

According to my terminology, naturalist moral realism is true if one of the following conditions applies:

  1. Philosophically sophisticated reasoners contemplating their life goals under ideal conditions would converge on the same answer.
  2. Philosophically sophisticated reasoners asked to decide (under ideal conditions) on a life goal that systematizes “altruism/doing good impartially” would converge on the same, well-specified answer. (As opposed to, e.g., rejecting the question or deeming several incompatible solutions to be equally defensible.)

In scenario (1), moral judgments are always motivating, whereas they aren’t in scenario (2). (See the SEP entry on “moral motivation” for this distinction, but note that I don’t consider the difference particularly important.) Note that I’d acknowledge a position as “moral realism” even if no philosophically sophisticated reasoners were motivated to act on it. What matters is that they recognize the position as the moral option (“that which a person would choose if they were morally motivated”).

In short, naturalist moral realism claims that there are salient features in the moral option space (features that would appeal to all philosophically sophisticated and perfectly informed reasoners). As such, the concept is relevant to people’s life goals (if they are motivated by moral considerations). In particular, the possibility that naturalist moral realism might be true speaks in favor of valuing moral reflection instead of (perhaps prematurely) locking in some directly specified life goal.

***

Standard moral philosophy concept:

Normative ethics (“What’s the correct moral theory?”)

Translation into the life-goals framework:

The folk concept “morality” encompasses distinct clusters. We can find wisdom in several of these clusters, depending on what interests us. Once we specify the questions we’re asking (our “evaluation criteria”), we can find answers tailored to those questions. (Luke Muehlhauser coined the term “Pluralistic Moral Reductionism” for the approach I’m describing.)

Using the life-goals framework, here’s how I’d describe two main clusters in moral folk discourse:

  1. Partly, morality is about the content of one’s life goals. (Whether they’re self-oriented or other-regarding, and which specific formulations one adopts.)
  2. Then, morality is also about how to react to other people’s life goals differing from one’s own.

(I’m not claiming that the above distinction captures everything noteworthy about moral discourse!)

(1)

Moral philosophy from an anti-realist perspective resembles existentialism. We can ask ourselves questions like, “What’s my life goal? Why do I want to get up in the morning?”

We may ask whether (and to what degree) we want to pursue self-oriented life goals vs. something “greater than ourselves” – perhaps something “altruistic/other-regarding/impartially good.” We may then go through various thought experiments and arguments for “moral theories,” “axiologies,” etc.

We may seek to understand the option space, formulate questions that feel important to us, and see if a particular answer resonates deeply with us or if we have residual uncertainty or feel indecisive. (As I will argue in my next post, uncertainty and indecisiveness aren’t always distinguishable.)

Relatedly, we may contemplate whether life goals are chosen or discovered. If they are “discovered,” that would mean that there’s something I can be wrong about. I could be wrong about what I value “deep down” or “if I were perfectly informed.” (The philosophical terminology for this view is idealizing subjectivism – see this post for an excellent discussion of it.) By contrast, if we choose our life goals, I can only be wrong about my life goals in the particular instances where my life-goal-informing beliefs are contradictory or nonsensical. (In my next post, I will argue that a significant degree of “choosing” goes into life goals, but that there are also possible elements of “discovery.”)

(2)

People don’t all share the same life goals. So to what degree (and under which circumstances) should we try to benefit others’ life goals? We can view Kantian moral philosophy, preference utilitarianism, and contractualism as proposed answers to questions of societal coordination or cooperation among people with different life goals.

Moral realists who confidently endorse a particular ethical theory (I’ve argued elsewhere that this is a requirement for justified belief in moral realism) might come to believe that the true morality warrants overriding others’ life goals. By contrast, for moral anti-realists, a decision to thwart others’ life goals (by violating the ethics of cooperation/coordination) is always a further question, separate from “What are my life goals?” Under moral anti-realism, morality-inspired life goals (including, e.g., views on population ethics) aren’t meant to apply to everyone, so there’s no logical link from having a certain life goal to considering oneself justified in overriding others’ life goals.

To give an analogy, just because someone politically self-identifies as a Democrat doesn’t mean that they endorse poisoning the teacups of Republican voters. It’s perfectly possible – and indeed the decent thing – for people to respect the overarching political process (despite caring a great deal about their preferred policies).

Likewise, it’s perfectly possible to hold a personal life goal and respect the ethics of cooperation/coordination. Thwarting others’ life goals is an uncooperative, anti-social stance precisely because moral anti-realism is true.

***

Standard moral philosophy concept:

Moral uncertainty

Translation into the life-goals framework:

In my previous post, Moral Uncertainty and Moral Realism Are in Tension, I argued that there’s something under-defined or incongruent about moral uncertainty as a concept.

To fit it into the life-goals framework and to acknowledge the possibility of moral anti-realism, I propose to replace “moral uncertainty” with three related but more precisely explained concepts:

  • Deferring to moral reflection (and uncertainty over one’s “idealized values”)
  • Having under-defined values (deliberately or by accident)
  • Metaethical uncertainty (and wagering on moral realism)

My next post is all about this topic, so I won’t go into further detail here.

Acknowledgments

Many thanks to Adriano Mannino and Lydia Ward for their detailed comments on this post.

References

Hanson, R. and K. Simler. (2018). The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford: Oxford University Press.

Kurzban, R. (2010). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton: Princeton University Press.

Orwell, G. (1949). Nineteen Eighty-Four. London: Secker & Warburg.

Railton, P. (1986). Moral Realism. The Philosophical Review, 95(2):163-207.

Singer, P. (1993 [1979]). Practical Ethics (2nd edition). Cambridge: Cambridge University Press.

Sotala, K. (2016). Defining Human Values for Value Learners. AAAI-16 AI, Society and Ethics (workshop).

Vonnegut, K. (1985). Galápagos. New York: Delacorte Press.

Williams, B. (1973). A Critique of Utilitarianism. In J.J.C. Smart and B. Williams (eds.), Utilitarianism: For and Against. Cambridge: Cambridge University Press.


  1. I chose to introduce a new definition because terms like “preferences” or “idealized values” are already philosophically loaded. The primary way life goals differ from these related concepts is that not everyone has life goals. ↩︎

  2. Whether someone has a life goal lies on a spectrum. For instance, it seems common for people to initially lack the self-efficacy or ambition to pursue their goals effectively. The intent behind having a goal already counts as a distinguishing feature, even if someone lacks the willpower to act efficiently. See the post, Your agency failures do not imply that your ideals are fake. ↩︎

  3. We can also label this as a situation with inauthentic objectives. That said, if there’s some sense in which the person notices and regrets their failings, as opposed to being inauthentic throughout, we may want to honor the intention (wanting a life goal), even if the person struggles with the optimizing mindset or with self-deception. Which features to weigh or pay attention to will depend on the reasons we are interested in determining someone’s life goals. ↩︎

  4. See also the literature on transformative experiences. ↩︎

  5. The LessWrong sequences present a view of rationality where conforming to the VNM axioms is essential. I don’t want to claim that “being strategic in one’s thinking about objectives” entails conforming to the VNM axioms. I’d have to think longer about specific examples, but judging from my current state of thinking, I could imagine that there are life-goal objectives for which these axioms don’t (obviously) hold. ↩︎

  6. Note that the optimizing mindset behind life goals need not be applied fanatically to a crude objective such as “never giving up on the relationship.” If one’s significant other is 99.99% likely to have died in a plane crash, a life goal about the relationship doesn’t necessarily imply spending the rest of one’s life searching islands for castaways. Instead, we can think of life-goal objectives in nuanced and pragmatic ways, with fallback goals like “living the rest of one’s life to make one’s memory of the other person proud.” See also the notion of “trajectory-based life goals,” which I’ll introduce further below. ↩︎

  7. For instance, people may abandon their life goals if they suffer burnout from neglecting their needs for prolonged periods. Similarly, they may abandon life goals under torture. To quote from Orwell’s Nineteen Eighty-Four:
    [...] for everyone there is something unendurable – something that cannot be contemplated. Courage and cowardice are not involved. If you are falling from a height it is not cowardly to clutch at a rope. If you have come up from deep water it is not cowardly to fill your lungs with air. It is merely an instinct which cannot be destroyed. [...]
    (Orwell, 1949) ↩︎

  8. Quoting from Kaj Sotala’s (2016) paper, Defining Human Values for Value Learners:
    I suggest that human values are concepts which abstract over situations in which we’ve previously received rewards, making those concepts and the situations associated with them valued for their own sake. ↩︎

  9. A person’s needs may still be part of their life goal – for instance, the person could aim to pursue their needs in a systematic, forward-looking fashion. In that case, the person would have a self-oriented life goal. ↩︎

  10. Whether “deferring to moral reflection” is an intrinsic objective, or merely a means to an end (a strategy for discovering one’s “idealized values”) is something I intend to discuss at length in my next post. In short, the answer is that it’s generally intended as an instrumental strategy, but it can act as a terminal value of sorts. See also my discussion in the section “Indirectly specified life goals” below. ↩︎

  11. Consider the cliché, “What matters most is the friends we made along the way.” ↩︎

  12. This points at another line of argument (in addition to the ones I gave in my previous post) to show why hedonist axiology isn’t universally compelling:
    To be a good hedonist, someone has to disentangle the part of their brain that cares about short-term pleasure from the part of them that does long-term planning. In doing so, they prove they’re capable of caring about something other than their pleasure. It is now an open question whether they use this disentanglement capability for maximizing pleasure or for something else that motivates them to act on long-term plans. ↩︎

  13. References to religious beliefs come to mind. That’s not unintentional – wanting one’s character to be favorably evaluated seems to be a deep-seated need. ↩︎

  14. If the same elements were salient to everyone, that could mean those salient features are part of a moral reality in the moral realist sense; if instead different features stand out for different people, individuals could at least learn/discover something relevant about their personal values. ↩︎

  15. E.g., it would be compatible with my framework to think that directly specified life goals tend to be “epistemically immature” and therefore something the effective altruism community should discourage. That said, I will argue in my next post that the situation is more nuanced. ↩︎

  16. In practice, these two situations could be very similar, but there’s a difference in how the hedonist and non-hedonist interpret their attitude about the relationship when they think about life goals. ↩︎

  17. Someone might say that the term “flourishing” introduces a moral dimension separate from life goals. I’m sympathetic to that objection, and insofar as I still hold some credence in naturalist moral realism worthy of the name, this is one place where I want to do more thinking. Ultimately, I doubt it’s possible to draw a clear-cut distinction between well-formed life goals and ill-conceived ones. Still, it would be highly relevant for our moral reasoning approaches if we could say more about what constitutes adequate conditions for forming life goals. ↩︎

  18. In detective movies, a common trope is that the best police officers have some of the worst private lives. At the expense of a balanced life, they obsess about actually solving the crime, as opposed to merely playing the part of a police officer (“pretending to try instead of actually trying”). ↩︎

  19. For instance, in the TV series The Americans, two Russian spies “pose” as an American married couple and go all the way to having children and acting their part perfectly, except that they also go on spy missions. For the sake of their Communist convictions, they become killers and have to lie to and deceive their children and the people they get close to. ↩︎

  20. Bernard Williams (1973) describes a related intuition in his essay, A Critique of Utilitarianism. On the far-reaching demands utilitarianism places on how individuals ought to live their lives, Williams wrote: “It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his actions and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.” ↩︎

  21. Something like goal factoring seems essential here – my discussion of skiing vs. coziness at home is somewhat simplistic in that regard. ↩︎

  22. Richard Ngo makes similar points in his post Arguments for moral indefinability. ↩︎

  23. That the concept’s boundaries were hardly challenged in the ancestral environment may explain why human self-orientedness is under-defined. Modern life offers a wide range of life paths; for the first time (minus a few historical exceptions, probably), people are reflecting on humanity’s long-term future and contemplating plans on unprecedented timescales. Evolution doesn’t seem to have equipped people with fixed, consciously accessible goals. Instead, our values seem malleable to some degree, and they can function as placeholders. For example, many people seem to desire to be satisfied with themselves and their identity, wanting to “be good” or “do good,” when they reflect on how they’re doing in life. But the criteria for success, especially with regard to one’s highest ideals, appear to be malleable – they will depend on how a given person construes their identity. ↩︎

  24. For yet another example, similar considerations apply to the decision to have children (whether that’s a self-oriented or other-regarding choice is up for discussion). ↩︎

  25. Arguably, I centered my discussion here too much on utilitarianism. For a thorough analysis, we should also discuss contractualist or Kantian accounts. Insofar as there isn’t an obvious pick among those theories, and insofar as we don’t expect all these theories to make the same claims, moral realism appears less likely. ↩︎

  26. Many people with credentials in philosophy may be skeptical about this, but I for one would not object to viewing utilitarian aggregation as the “probably correct way to do interpersonal comparisons of utility.” See this post for a discussion. ↩︎

  27. People with life goals seem to “care more” in some arguably relevant sense. Should we take people’s life goals at face value but reinterpret life plans or more weakly held convictions in some other way? Someone without life goals will have life plans that are heavily influenced by short-term needs satisfaction – it’s unclear how to extract systematic long-term “preferences” from that. Relatedly, for beings that are incapable of forming life goals (e.g., nonhuman animals), what are their “preferences” in the morally relevant sense, and do they count at all? ↩︎

  28. We can see this in the writings of preference utilitarians. Back when Peter Singer endorsed preference utilitarianism, he described a “prior existence view,” according to which the creation of new happy people is neutral (Singer, 1993(1979), pp. 103–105). Singer himself pointed out that this leads to various problems or paradoxes, but he acknowledged he didn’t have a better proposal. ↩︎

  29. For instance: a) In Practical Ethics, Peter Singer (1993(1979)) also considered a “moral-ledger model” of preference utilitarianism. b) Roger Chao suggested NAPU – negative average preference utilitarianism. c) Carl Shulman explored the implications of giving full weight to potential people. ↩︎

  30. Preference utilitarianism isn’t the only contender that looks suitable here. The moral-parliament approach with variance normalization arguably appears superior in some contexts (see the illustrative sketch after these footnotes). In any case, the general theme with moral anti-realism is that “the best way to do x” is usually under-defined, especially if “x” is some folk concept rather than a concept stipulated by philosophers. ↩︎

  31. Outcomes of people’s moral reflection could be sensitive to ordering effects in the arguments people encounter. They may also depend on social features of the reflection procedure (e.g., whether the reflection happens in discussion groups, is embedded in social context, or is done solitarily) and on the persuasiveness settings of AI assistants used within the procedure. Lastly, there’s a risk that unusual features of the reflection procedure, such as spending inordinate amounts of time doing philosophy in virtual reality, could cause one’s thinking to go off the rails. ↩︎

  32. We arguably have to make judgment calls at the first step already, between utilitarianism and other theories. Kantians or contractualists could argue that the utilitarian way of taking an “impartial stance” is misguided because it leads to situations where people receive zero care/concern if they aren’t among the group of individuals where helping is most effective. ↩︎

  33. The life-goals framework isn’t meant as a descriptive explanation for how people use moral terminology. I make no claims about that. ↩︎
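To make the “variance normalization” mentioned in footnote 30 a little more concrete, here is a minimal sketch of the normalization step. Everything in it – the theory names, credences, and utility numbers – is made up for illustration, and a genuine moral parliament would involve bargaining among delegates rather than a simple credence-weighted sum; the sketch only shows how each theory’s utilities over a fixed set of options can be rescaled to equal variance before aggregation.

```python
# Toy sketch of variance normalization for moral uncertainty.
# All theories, options, credences, and utility numbers are invented
# for illustration; only the normalization idea matters here.

import statistics

def variance_normalize(utilities):
    """Rescale one theory's utilities over the options to mean 0, variance 1."""
    mean = statistics.mean(utilities)
    stdev = statistics.pstdev(utilities)  # population standard deviation
    if stdev == 0:
        # A theory indifferent between all options contributes nothing.
        return [0.0 for _ in utilities]
    return [(u - mean) / stdev for u in utilities]

# Utilities three hypothetical theories assign to the same three options.
theory_utilities = {
    "totalism": [0.0, 50.0, 100.0],
    "prior_existence": [10.0, 20.0, 15.0],
    "contractualism": [5.0, 5.0, 8.0],
}
credences = {"totalism": 0.5, "prior_existence": 0.3, "contractualism": 0.2}

# Credence-weighted sum of each theory's normalized utilities per option.
scores = [0.0, 0.0, 0.0]
for theory, utils in theory_utilities.items():
    for i, u in enumerate(variance_normalize(utils)):
        scores[i] += credences[theory] * u

print(scores)  # pick the option with the highest aggregate score
```

The point of the rescaling is that no theory dominates the aggregate merely because it happens to assign numerically larger stakes to the options; after normalization, a theory’s influence is driven by one’s credence in it and by how sharply it differentiates between the options.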

Comments

My impression is that this is already happening – lots of people in the EA movement self-identify as moral anti-realists.

Even in the very beginnings of this movement, realism wasn't necessarily the default. Admittedly, the Oxford-based origins of EA are influenced by moral realism (e.g., I think Toby Ord was a moral realist or at least was convinced that acting as though moral realism is true is the prudent thing to do, and may still think so, for all I know). However, Peter Singer, at the time he wrote Practical Ethics, used to be a moral anti-realist. (He wrote a great essay on the triviality of the is-ought distinction, and his chapter on "Why act morally" in Practical Ethics doesn't rely on moral realism.) Similarly, Holden Karnofsky, who co-founded GiveWell, isn't a moral realist (in a post published on LessWrong today, he calls himself a "moral quasi-realist," which sounds pretty similar to what I think of as moral anti-realism ["quasi-realism" also has a technical meaning in metaethics, but that's not what Holden meant, as I understand it]). Eliezer Yudkowsky and Luke Muehlhauser wrote entire sequences on metaethics with anti-realist takes. All those people were important in establishing the early effective altruism movement.

For what it's worth, I agree with you about the importance of a strong core. But I don't see why anti-realists can't be incredibly dedicated. I already mentioned examples of highly dedicated anti-realists above, and there are many more. Brian Tomasik is an anti-realist – I've yet to see anyone suggest that his contributions to EA are at risk of watering down the movement. Richard Ngo is an anti-realist (here and here), and Joe Carlsmith too (or at least has strong sympathies?), by the looks of his posts. Paul Christiano, in the context of his AI alignment research, wrote two accounts of normativity/human values/human judgment that illustrate how "AI makes philosophy honest" – both strike me as decidedly anti-realist in their approach.

To summarize, I think you're going off a mistaken impression of EA demographics.

Perhaps you were primarily commenting on all-out utilitarianism (in the sense of particularly high levels of altruistic dedication) vs. something closer to "EA on the side." I think that's a spectrum, and we have to find a good balance. Julia Wise has a couple of great posts (e.g., here or here) arguing against too much fanaticism and self-sacrificing life goals. I've written a similar post, so I think these sorts of posts were steering things in a good/needed direction on the margin.

To summarize, I think you're going off a mistaken impression of EA demographics.

To the degree that these are common beliefs, it may suggest that there's something problematic with how some people communicate about effective altruism. After all, as I'm arguing in this sequence, I think moral realism (worthy of the name) is almost certainly wrong. If that's true, we wouldn't want people to believe that effective altruists are predominantly committed to moral realism.

As a non-moral-realist, my reaction to this post is "ahhh that's the good stuff, keep it coming!"


I'm curious why "Caring about preventing changes to one’s objective (or the intention to pursue it)" is a necessary component of life goals in your view. 

You didn't have to mention it at all, since it's a convergent instrumental goal and thus follows from conditions 2 and 3. Yet you chose to emphasize it, and even said it first!

I by contrast would have done the exact opposite: Instead of having a clause about how you must care about preventing changes to one's objective, I'd have a clause about how contra convergent instrumental goals, it's OK for you to not care about certain kinds of changes to one's objective. That is, you still count as having a life goal even if you are totally fine with the possibility of e.g. falling in love with someone who then has lots of non-manipulative, honest, interesting conversations with you that result in you changing your goal.
 

Thanks for the comment!

I by contrast would have done the exact opposite: Instead of having a clause about how you must care about preventing changes to one's objective, I'd have a clause about how contra convergent instrumental goals, it's OK for you to not care about certain kinds of changes to one's objective.

That makes sense to me. The book Human Compatible has a good phrasing: "Which preference-change processes do you endorse?"

I had in mind two main ways that changes to life-goal objectives can come about without constituting a "failure of goal preservation" in an irrationality-implying sense:

  • An indirectly specified life goal around valuing reflection (I think of this as a widespread type!). Note that people with indirectly specified life goals would express non-total confidence in their best-guess formulation of what they want in life. So this probably isn't quite what you were talking about.
  • A life goal with a stable core and "optional/flexible" parts. For instance, imagine a person who's fanatically utilitarian in their life goals. They could have a stance that says "if I ever fall in love, it's okay to start caring partly about something other than utilitarianism."

On the second bullet point, I guess that setup also increases the risk of a failure of goal preservation for the utilitarian part of their goal. So you're right that it would go against convergent drives.

In a reply to Michael below, you point out that this (e.g., something like what I describe in my second bullet point) seems like a "clunky workaround." I can see what you mean. I think having a life goal that includes a clause like "I'm okay with particular changes to my life goal, but only brought about in common-sense reasonable ways" would still constitute a life goal. You could think of it as a barrier against (too much) fanaticism, perhaps.

Side note: It's interesting how, in discussions on "value drift" on the EA forum, you can see people at both extremes of the spectrum. Some consider value drift to be typically bad, while others caution that people may have truer values as they get more experienced.

You could endorse changing your mind only under certain circumstances (subjectively chosen, not necessarily ahead of time) as a specific, potentially overriding life goal. EDIT: Or otherwise indirectly specified and flexible life goals that allow you to change your mind about some things, as discussed in the post, e.g., wanting to act according to the ethical views you'd endorse if more informed.

Sure. But I think my point/question still stands. I think most people who have life goals – or rather, what we'd intuitively think of as life goals, and indeed what we'd intuitively think of as having an optimizing mindset towards – wouldn't mind if further reflection of various benign kinds caused them to change said goals, and while we COULD say that this is because there is an extremely widespread meta-life-goal of being the kind of person who deliberates and reflects and changes their goals sometimes... it seems like a clunky workaround, an unnatural way of describing the situation.

Maybe we should just allow some slack/flexibility in life goals. From a footnote:

Note that the optimizing mindset behind life goals need not be applied fanatically to a crude objective such as “never giving up on the relationship.” If one’s significant other is 99.99% likely to have died in a plane crash, a life goal about the relationship doesn’t necessarily imply spending the rest of one’s life searching islands for castaways. Instead, we can think of life-goal objectives in nuanced and pragmatic ways, with fallback goals like “living the rest of one’s life to make one’s memory of the other person proud.” See also the notion of “trajectory-based life goals,” which I’ll introduce further below.

You might want life goals to implicitly have conditions for when it's appropriate to abandon, change, or replace them, e.g., reflection. Some conditions can turn life goals into unambitious whims that are no longer really terminal objectives at all, and hence not life goals, e.g., "Pursue X until I don't feel like it anymore." That being said, I expect it to be difficult to draw sharp lines.

Maybe adding these conditions in the specific life goals themselves is also clunky, and as you suggest, it's the definition of life goal that needs to be a bit more flexible? When can we say that we still value something "terminally", if we're allowing whether we value it at all to change under some circumstances?

I'm not sure only caring about indirectly specified life goals or trying to reformulate each directly specified life goal in indirect terms will do what you want. Even "being the bravest warrior" for Achilles is trajectory-based and indirectly specified, but what if Achilles decided it was no longer a worthy goal, either because it was "misguided", or because he found something else far more important?

Thanks for writing this! I have some thoughts/questions:

I think the framing as "goals" to "achieve" suggests that the individual who holds them has to be involved in achieving them, possibly with help. Is this intended? If so, doesn't this make all life goals at least a little self-oriented, in a sense, by requiring personal involvement and "optimizing for achievement"? On the other hand, we may prefer others to be better off even if we're not personally involved in ensuring it, and choose this over having them help us achieve our own life goals while doing less good.

I also often think of deontological constraints against instrumental harm similarly: they seem like a preoccupation with keeping one's own hands clean, rather than doing what's best for those involved. A life goal to minimize the harm you personally cause seems more self-oriented than a life goal to minimize harm generally, and even more so than the preference for there to be less harm generally. 

Similarly, "be a good person" seems both self-oriented and other-regarding, and these are the terms virtue ethicists think in.

This leads to my next point:

Self-oriented life goals describe what’s good for an individual. If everyone’s life goals were self-oriented, fulfilling life goals would be equivalent to helping people flourish in the ways they prefer for themselves. However, because people can have other-regarding life goals (e.g., consider effective altruism as someone’s sole life goal), we can’t interpret “fulfilling someone’s preferences (or life goals)” as “helping that person flourish.” Therefore, I’d find it slightly strained to interpret “preference utilitarianism” as “altruism/doing good impartially.”

I don't think it's that weird to consider that helping someone achieve their life goal to do good (e.g. effective altruism) does in fact help them flourish. Maybe this is more strongly the case if their life goal is to "be a good person" rather than "do good".

On the other hand, I agree it's a little weird to say that you've helped someone by further satisfying their preferences for others to be better off, if that person doesn't even know about it or otherwise was not involved. And helping them achieve their other-regarding life goals rather than just doing more good can be worse in their eyes.

I think the framing as "goals" to "achieve" suggests that the individual who holds them has to be involved in achieving them, possibly with help. Is this intended?

I think you're saying that the word "achieve" has the connotation of actively doing something (and "earning credit for it")? That's not the meaning I intended. There are conceivable circumstances where "achieving your life goals" (for specific life goals) implies getting out of the way so others can do something better. (I'm reminded of the recent post here titled I want to be replaced.)

Similarly, "be a good person" seems both self-oriented and other-regarding, and these are the terms virtue ethicists think in.

I agree!

I don't think it's that weird to consider that helping someone achieve their life goal to do good (e.g. effective altruism) does in fact help them flourish. Maybe this is more strongly the case if their life goal is to "be a good person" rather than "do good".

There could be a situation where the best way to advance Alice's life goal is by doing something that leads to Alice becoming depressed. E.g., if Alice thinks she's the best person for some role with a mission that's in line with her life goal, but you're confident she's not, you'd vote against her. I think there's still a sense in which we can defensibly interpret this as "doing something (ultimately) good for Alice" because there's something to living with one's eyes open and not deluding oneself, etc. But my point is that it's not necessarily the most natural or the only natural interpretation.

Maybe one way to think about torture causing us to abandon other life goals is that it induces in us a strong and urgent life goal to end the torture. Of course, it's a bit weird to call it a life goal, given how much it focuses on its immediate achievement. Would you still consider this a life goal, anyway? It seems like it might often not meet "being strategic in one’s thinking about objectives", if the individual is too desperate to think strategically.

I'd guess this isn't what happens with burnout: although people do want to stop feeling miserable and burned out, they might not take an "optimizing mindset" to it at all.

Of course, it doesn't need to be the case that every life goal that is abandoned is abandoned because a conflicting life goal was adopted and prioritized.

However, if torture (or depression, etc.) is meant to be bad even if it doesn't frustrate life goals, then I'm guessing you wouldn't intend for life goals to account for all that we should care about on behalf of someone.

Of course, it doesn't need to be the case that every life goal that is abandoned is abandoned because a conflicting life goal was adopted and prioritized.

Yeah, in the case of torture or burnout, I find it more natural to think of it as the person's needs-meeting machinery rebelling against the long-term planning parts of the brain/self. That said, transformative experiences like torture or suffering through disillusionment or burnout could induce changes that lead to the adoption of other life goals, perhaps ones that put a lot of weight on avoiding suffering or ones that give more room to personal needs. (Though allowing oneself to care about personal needs can also be viewed as an instrumental adjustment to make the original life goal – the one that led to burnout – sustainable again.)

However, if torture (or depression, etc.) is meant to be bad even if it doesn't frustrate life goals, then I'm guessing you wouldn't intend for life goals to account for all that we should care about on behalf of someone.

Indeed! I don't mean for life goals to be the fundamental building blocks in a moral theory like preference utilitarianism. I do think of them as "building blocks" or "key concepts" in my moral reasoning repertoire, but mostly in the sense of "if someone does have a life goal, that seems clearly relevant somehow." Other things can matter too. For instance, reducing suffering unambiguously falls into the care/altruism dimension of morality. Non-human animals don't have life goals, but it still seems "good from an impartial perspective" to help them.

Likewise, when a person doesn't have a life goal (picture someone who has no responsibilities and plays video games as much as they can), we still want to care for that person. Obviously, they shouldn't suffer, but there seem to be degrees of freedom after that (basically all the writing in AI alignment on how it's difficult to define "human values").

It's not obvious to me that nonhuman animals never have life goals, depending on how we draw lines. I think ensuring the welfare of one's children (or other kin or bonds) or being high in the social hierarchy could be a life goal in nonhuman animals. This could apply fairly generally to social animals, child-rearing animals, or, depending on the cognitive requirements, only in a more limited way to just the smartest animals, e.g. subsets of primates, cetaceans, cephalopods (some octopuses even live in groups), corvids, parrots or elephants. I quote and respond to the requirements you proposed in turn.

 

Life goal: A terminal objective toward which someone has successfully adopted an optimizing mindset.

An “objective” can be anything someone wants to achieve. Typically, objectives are about affecting the world outside one’s thoughts or conforming to a specific role or ideal.

An objective is “terminal” if someone wants to achieve it for its own sake, not as a means to an end.

Parenting and the welfare of one's offspring are what's reinforcing for parenting animals. If we said that it's egoistic hedonism first, that might get things backwards (although I'm not sure): their reinforcement is determined by a psychology that induces emotional contagion and other parenting instincts and orients them towards parenting in the first place. Saying that I only care about others because I feel bad about their suffering leaves out the explanation for why I feel bad about their suffering, and we might be able to locate a terminal objective there, although maybe this is a stretch, and what happens at this level in particular doesn't meet the other requirements.

This is speculative, but smarter animals may even have beliefs about parenting as a goal in itself, separately from the reinforcement. I recall a case of a grandmother (nonhuman) primate berating her daughter for not caring for the grandchild. If they have an intuitive sense that others have responsibilities towards their own offspring, and are self-aware, they may have an intuitive sense that they themselves have responsibilities towards their own offspring, and fulfilling these responsibilities could amount to a life goal.

 

By “optimizing mindset,” I mean:

  • Caring about preventing changes to one’s objective (or the intention to pursue it)

I suspect this is relatively rare among nonhuman animals. Parenting animals may feel distress when their offspring are taken from them or killed, but this is not the same as asking whether they would want to not care at all, which I suspect few or no nonhuman animals are capable of.

 

  • Caring about the objective with a global scope of action (as opposed to, e.g., caring about it only during work hours and within the constraints of a narrow role or context)

Parenting in nonhuman animals plausibly meets this. They may continue to feel distressed when separated from their offspring for long in a way that is irregular/unplanned, although the mechanism could be fairly simple here: separation anxiety. Cows, for example, dislike being separated from their calves. Other animals gather or hunt food for their offspring while away from them, although it's plausible they forget that their offspring exist when they're away.

 

  • Looking out for opportunities to pursue the objective more efficiently, e.g., working on improving one’s skills or remodeling aspects of one’s psychology[2]

Many animals will adopt more efficient approaches if they come across them and happen to try them, just through fairly simple exploration and learning. I'd guess none of them would actively try to think of ways to be more efficient, though.

 

First, the person needs to understand (at least on an intuitive, implicit level) what’s entailed by “being strategic in one’s thinking about objectives.” For instance, this type of mindset is conveyed (in a theoretical, explicit way) in Eliezer Yudkowsky’s LessWrong sequences.[5] However, people can also intuitively and implicitly understand this (“street smarts over book smarts”).

There are multiple places we could draw lines. I think many nonhuman animals are capable of at least limited planning, which could mean they are "strategic". At least, they also pursue options that they've learned are better through experience (including, for many, through social learning), even if they don't reason about them. Their goals determine how outcomes are reinforced for them, so that they may respond to their offspring's distress with their own distress (e.g., chickens) and learn to prevent outcomes causing distress to their offspring. Some nonhuman animals that are capable of somewhat general causal and abstract reasoning, even if it's nonsymbolic (e.g., primates and corvids), could plausibly meet a higher bar for reasoning.

 

Second, the person has to decide that an objective is worthy to orient their life around – we can no longer view it as instrumental in satisfying needs. (This may not feel like a “decision” in the typical sense. Instead, someone may feel unable to contemplate choosing/acting differently.)

I think nonhuman animals can meet the parenthetical (probably not explicit decisions). I suspect for a lot of people, not becoming a parent or not taking adequate care of your children was never really an option, in some cases something they never even considered explicitly, and I would not want to exclude those as life goals.

Self-oriented vs. other-regarding. Self-oriented life goals concern objectives such as optimizing one’s well-being or achievements. By contrast, other-regarding life goals are about doing things for others.

Are these meant to be exhaustive? Can a life goal just be oriented towards an object? Or would you just consider that self-oriented?

 

Morality-inspired life goals: Of particular interest among other-regarding life goals are life goals inspired by morality. Building on the motivation to act morally, they are about doing what’s good for others from an “impartial point of view.”

Given that some people care morally about things that aren't "others", e.g. nature or beauty, should morality-inspired life goals necessarily be other-regarding?

There are also ways to care morally about others that aren't impartial:

  1. They can be partial, e.g. family- or community-focused, as people may believe they have special moral duties to particular individuals.
  2. Neither impartial nor partial could really be applicable, e.g. someone could care about the survival of species (including humans), but not because of the value of particular individuals.

Are these meant to be exhaustive?

I didn't explicitly intend them to be exhaustive, but we can make the following argument for why it seems pretty exhaustive:

  • As you point out in another comment, there's a sense in which everything we do is "self-oriented" in some way
  • Still, "altruism" remains a meaningful concept. People are "altruistic" if the thing that gives them personal meaning includes helping others

So, life goals would be self-oriented by default, but some life goals are additionally other-regarding.

Life goals and life plans seem to me to sit somewhere between Heidegger's Sorge (both feel like aspects of Sorge) and general notions of axiology (life goals and life plans seem like a model of how axiology gets implemented). Curious if that resonates with what you mean by life goals and life plans.

I'm not familiar enough with Heidegger to comment on his concepts, but I can imagine similarities between life goals and existentialist thinking! Regarding axiology, I usually encounter this in moral realist contexts where an axiology tells us what's good/valuable in a universal sense.

It’s an interesting framework. I also would likely term myself an anti-realist, but I take a more neurological approach. Morals are essentially evolved predispositions humans have (like racism and anger) that served as heuristics to enable societal harmony and consequent gene reproduction. But I argue that there’s an empathetic neurological base that is more unwavering than moral beliefs and can be dissected and used to build consensus. I argue that there are, in a sense, implicit life goals that are already programmed into all of us (humans mostly, without a deep understanding of other primates/eukaryotes). Namely (as I define these terms in my book The Upsilon Factor): the requirement to remain below the tolerance threshold (Omega) of suffering, the need to remain above the survival-choosing threshold (Alpha) of Joy, the impetus to reduce global empathy-weighted suffering (Upsilon), and the desire to maximize self Joy (Zeta) – in that order. People differ in the relative empathy weights, but this order is, I believe, fairly universal and enables consensus more than moral beliefs.
