
Welcome to my short-forms! These are fragments of pieces, pieces that are still in process, or brief summaries of pieces I hope to write in the future.


Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?

Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human animal utopia, yay big tent EA…”

But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is.”

OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state in which ~99.9999+% of possible utility goes unrealized.

Very frustrating. I usually try to push myself toward my rational conclusion of what is best, with a wide margin for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.

I don't know if it helps, but your "logical" conclusions are far more likely to be wildly wrong than your "emotional" responses. Your logical views depend heavily on speculative factors like how likely AI tech is, or how impactful it will be, or what the best philosophy of utility is. Whereas the view on animals depends on comparatively few assumptions, like "hey, these creatures that are similar to me are suffering, and that sucks!".

Perhaps the dissonance is less irrational than it seems...

Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative; it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, and it is hard to use my intuition on that kind of thing. My intuitions very clearly help me with things that are close by, where it is easier to see that I am doing some good but more difficult to make wild speculations that I might be having a hugely positive impact. So I guess part of the issue is to what degree I depend on these wildly speculative EV calculations; I really want to maximize impact, yet it is always a tenuous balancing act with so much uncertainty.

I relate to that a lot, and I want to share how I resolved some of this tension. You currently allow your heart to only say “I want to reduce suffering and increase happiness” and then your brain takes over and optimizes, ignoring everything else your heart is saying. But it’s an arbitrary choice to only listen to the most abstract version of what the heart is saying. You could also allow your heart to be more specific like “I want to help all the animals!”, or even “I want to help this specific animal!” and then let your brain figure out the best way to do that. The way I see it, there is no objectively correct choice here. So I alternate on how specific I allow my heart to be. 

In practice, it can look like splitting your donations between charities that give you a warm, fuzzy feeling, and charities that seem most cost-effective when you coldly calculate, as advised in Purchase Fuzzies and Utilons Separately. Here is an example of someone doing this. Unfortunately, it can be much more difficult to do this when you contribute with work rather than donations.

Mmm yeah, I really like this compromise; it leaves room for being human. But indeed, I’m thinking more about career currently. Since I’ve struggled to find a career that is impactful and that I am good at, I’m thinking I might actually choose a relatively stable, normal job that I like (like being a therapist for enlightened people / people who meditate), and then use my free time to work on projects that could be massively impactful.

Opportunity Cost Ethics

“Every man is guilty of all the good he did not do.”

~Voltaire

Opportunity Cost Ethics is a term I invented to capture the ethical view that failing to do good ought to carry the same moral weight as doing harm.

You could say that in Opportunity Cost Ethics, sins of omission are equivalent to sins of commission.

In this view, if you walk by a child drowning in a pond, and saving the child costs you nothing, it would be equally morally inexcusable for you not to save the child from drowning as it would be to take a non-drowning child and drown them in the pond.

Moreover, if you have good reasons to suspect there are massive opportunities to do good, such as a reasonable probability you could save millions of lives, Opportunity Cost Ethics states that the moral thing to do is to seek out and find the best such opportunities, after accounting for search costs.

Opportunity Cost Ethics is a consequentialist ethical theory, as it states the only morally relevant fact is the ultimate effect of our actions or non-actions.

The strongest version of Opportunity Cost Ethics, which I hold, states that:

"The best moral action at any given time requires considering as many actions as possible, until marginal search costs exceed the expected value of finding better options; at which point an optimal stopping point has been reached. Considerations of proximity also need to be taken into account."

We could call the above "Strong Opportunity Cost Ethics."

Possibly the most important implication of Opportunity Cost Ethics is that we should spend a massive amount of time, energy, and resources trying to determine what the best possible actions we can take are.

It is very difficult to know how much time we should spend trying to figure out how good various actions are, and how certain we should be before we act. Opportunity Cost Ethics states that it is morally right to continue searching for the best action you can take until you predict that the marginal cost of searching for a better action outweighs the amount of additional good that the better action would achieve.
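To make this stopping rule a bit more concrete, here is a minimal sketch in code. Every name and number in it is hypothetical and only illustrates the shape of the rule (evaluate options until the expected gain from further search no longer exceeds the marginal cost of searching); it is not a claim about how the relevant values could actually be estimated.

```python
# A purely illustrative sketch of the "Strong Opportunity Cost Ethics" stopping rule.
# All names and numbers below are hypothetical placeholders.

def best_action_with_optimal_stopping(candidates, marginal_search_cost,
                                      expected_gain_of_next_option):
    """Evaluate candidate actions until further search stops paying for itself.

    candidates: iterable of (name, expected_value) pairs (best guesses only).
    marginal_search_cost: estimated cost of evaluating one more option.
    expected_gain_of_next_option: function taking the number of options evaluated
        so far and returning the expected improvement from evaluating one more.
    """
    best_name, best_value = None, float("-inf")
    for n_evaluated, (name, expected_value) in enumerate(candidates, start=1):
        if expected_value > best_value:
            best_name, best_value = name, expected_value
        # Optimal stopping condition: stop once the expected gain from evaluating
        # another option no longer exceeds the cost of doing so.
        if expected_gain_of_next_option(n_evaluated) <= marginal_search_cost:
            break
    return best_name, best_value


# Hypothetical usage: gains from further search shrink as 100 / n^2,
# and each additional evaluation costs 10 units.
options = [("option A", 5), ("option B", 40), ("option C", 25),
           ("option D", 60), ("option E", 55), ("option F", 80)]
print(best_action_with_optimal_stopping(
    options, marginal_search_cost=10,
    expected_gain_of_next_option=lambda n: 100 / n**2))
# Search stops after the fourth option, returning ("option D", 60).
```

In this toy example the expected gain from evaluating one more option shrinks quadratically, so the search stops after the fourth option even though more options remain.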

Due to its obvious importance I assume something like this already exists; if so, please point me to it, thanks!

edit: Forum user tc pointed out that the “doing/allowing” distinction in ethics (whether causing harm is morally equivalent to merely allowing harm), which inspired trolley dilemmas, is highly relevant.

This is the kind of thing I was looking for! I am, however, uncertain if it fully captures what I am saying, which is more about the potential to do a massive amount of good, and the ethical responsibility to seek out such opportunities if we suspect they are available and the search costs do not cancel out the good.

At some point I hope to extend this short-form into a more well-formed robust theory.

Hi, Jordan.

I thought over opportunity cost ethics in the context of longtermism, and formed the questions:

  1. how do I decide what I am obliged to do when I am considering ethical vs selfish interests of mine?
  2. if I can be ethically but not personally obliged, then what makes an ethical action obligatory?
  3. am I morally accountable for the absence of consequences of actions that I could have taken?
  4. who decides what the consequences of my absent actions would have been?
  5. how do I compare the altruism of consequences of one choice of action versus another?
  6. do intentions carry ethical weight, that is, when I intend well, do harmful consequences matter?
  7. what do choices that emphasize either selfish or ethical interest mean about my character?

I came up with examples and thought experiments to satisfy my own questions, but since you're taking such a radically different direction, I recommend the same questions to you, and wonder what your answers will be.

I will offer that shaming or rewarding of behavior in terms of what it means about supposed traits of character (for example, selfishness, kind-heartedness, cruelness) has impact in society, even in our post-modern era of subjectivity-assuming and meta-perspective seeking. We don't like to be thought of in terms of unfavorable character traits. Beyond personality, if character traits show through that others admire or trust (or dislike or distrust), that makes a big difference to one's perceived sense of one's own ethics, regardless of how fair, rational, or applicable the ethics actually are.

As an exercise in meta-cognition, I can see that my own proposals for ethical systems that I comfortably employ will plausibly lack value to others. I take a safe route, equating altruism with the service of others' interests, and selfishness with the service of my own. Conceptually consistent, lacking in discussion of character traits, avoiding discussion of ethical obligations of any sort.

While I enjoy the mental clarity that confidence in one's own beliefs provides, I fear a strong mismatch of my beliefs with the world or with the rest of my beliefs. It's hard to stay clearheaded while also being internally consistent with regard to goals, values, etc. As a practical matter, getting much better at practicing ethics than:

  • distinguishing selfish from altruistic interests
  • determining the consequences of actions

seems too difficult for me.

My actual decisions don't necessarily come from a calculus of ethical weights alongside a parallel set of personal weights. Their actual functioning is suspect. I doubt their utility to me, frankly.

Accordingly, I ignore the concept of ethical obligation and consign pursuit of positive character traits to the realm of personal beliefs. I even go so far as to treat ideas of alternative futures as mere beliefs. By doing so, I reconcile the subjective world of beliefs with a real world of nondeterministic outcomes, on the cheap. There's just one pathway through time that we know anything about, and it's the one we take. Everything else is merely believed to have been an alternative. But in a world of actual obligations and intrinsically valuable character traits, everything looks different. I mostly ignore that world when discussing ethics. I suspect that you don't.

Please let me know your thoughts on these topics, if you like. Thanks!

Wow Noah! I think this is the longest comment I’ve had on any post, despite it being my shortest post haha!

First of all, some context. The reason I wrote this shortform was actually just so I could link to it in a post I’m finishing which estimates how many lives longtermists save per minute. Here is the current version of the section in which I link to it; I think it may answer some of your questions:

 

“The Proximity Principle

The take-away from this post is not that you should agonize over the trillions of trillions of trillions of men, women, and children you are thoughtlessly murdering each time you splurge on a Starbucks pumpkin spice latte or watch cat videos on TikTok — or in any way whatsoever commit the ethical sin of making non-optimal use of your time.[[[this is where I link to the “Opportunity Cost Ethics” shortform]]]

The point of this post is not to create a longtermist “dead children currency” analogue. Instead, it is meant to be motivating background information, giving us all the more reason to be thoughtful about our self-care and productivity.

I call the principle of strategically caring for yourself and those closest to you “The Proximity Principle,” something I discovered after several failed attempts to be perfectly purely altruistic. It roughly states that:

  1. It is easiest to affect those closest to you (in time, space, and relatedness)
  2. Taking care of yourself and those closest to you is high leverage for multiplying your own effectiveness in the future

To account for proximity, perhaps in addition to conversion rates for time and money into lives saved, we also need conversion rates for time and money into increases in personal productivity, personal health & wellbeing, mental health, self-development, personal relationships, and EA community culture.

These things may be hard to quantify, but probably less hard than we think, and seem like fruitful research directions for social-science oriented EAs. I think these areas are highly valuable relative to time and money, even if only valued instrumentally.

In general, for those who feel compelled to over-work to a point that feels unhealthy, have had a tendency to burn out in the past, or think this may be a problem for them, I would suggest erring on the side of over-compensating.

This means finding self-care activities that make you feel happy, energized, refreshed, and a sense of existential hope — and, furthermore, doing these activities regularly, more than the minimum you feel you need to in order to work optimally.

I like to think of this as keeping my tank nearly full, rather than perpetually halfway full or nearly empty. From a systems theory perspective, you are creating a continuous inflow and keeping your energy stocks high, rather than waiting until they are fully depleted and panic mode alerts you to refill.

For me, daily meditation, daily exercise, healthy diet, and good sleep habits are most essential. But each person is different, so find what works for you.

Remember, if you want to change the future, you need to be at your best. You are your most valuable asset. Invest in yourself.”


 

I will try to answer each question as I understand it; let me know if this makes sense.

  1. how do I decide what I am obliged to do when I am considering ethical vs selfish interests of mine? ANSWER: Opportunity Cost Ethics itself states what is ethical but NOT what is morally obligatory. In the strong version, you are always obliged to do what is altruistic. But this means taking care of yourself “selfishly” in-so-far as it is helpful in making you more productive. I call this taking “proximity” into account
  2. if I can be ethically but not personally obliged, then what makes an ethical action obligatory? ANSWER: I do not understand your distinction between personally obliged and ethically obliged, could you clarify? I take a moral realism stance in which there is only one correct ethical system, whether or not we know what it is. If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this.
  3. am I morally accountable for the absence of consequences of actions that I could have taken? ANSWER: YES! This is exactly the point. If you could have saved a child from drowning but don’t (and there are no additional costs to the action), that is precisely equivalent to murdering that child, and all the consequences that follow.
  4. who decides what the consequences of my absent actions would have been? ANSWER: this is something that must be estimated with expected value and best-guesses. As longtermists, we know that our actions could have extremely positive consequences. In the piece I quoted above, I calculated that the average longtermist can conservatively expect to save about a trillion trillion trillion lives per minute of work or dollar donated.
  5. how do I compare the altruism of consequences of one choice of action versus another? ANSWER: again best guesses, using expected value. Possibly the most important implication of Opportunity Cost Ethics is that we should spend a massive amount of time, energy, and resources trying to determine what the best possible actions we can take are.
  6. do intentions carry ethical weight, that is, when I intend well, do harmful consequences matter? ANSWER: this is related to search costs and optimal stopping, which I mentioned in the short-form. It is very difficult to know how much time you should spend trying to figure out how good various actions are and whether or not they may be harmful, and how certain you should be before you proceed. In this framework, it is morally right to continue searching for the best action you can take until you predict the marginal cost of searching for a better action to outweigh the amount of additional good that the better action would achieve. If negative consequences follow despite your having done your best to follow this process, then the action was morally wrong because, according to consequentialism, the consequences were bad; yet your process and the intentions that led to the action may still have been as morally good as they could have been.
  7. what do choices that emphasize either selfish or ethical interest mean about my character? ANSWER: Since I am using an over-arching consequentialist framework rather than character ethics, I am not sure I have an answer to this. My guess would be that good character would be correlated with altruism, after accounting for proximity.

 

On your other comments, I do not tend to think in terms of character traits much, except that I may seek to develop good character traits in myself and support them in others, in order to achieve good consequences. To me, good character traits are simply a short-hand which implies the person has habits or heuristics of action that tend to lead to good consequences within their context.

I also don’t tend to think in terms of obligations. I think obligations may be a useful abstraction for some, in that it helps encourage them to take actions with good consequences. Perhaps in a sense, I see moral obligation as a statement that actions which achieve the most good possible are the best actions to take, and we “should” always take the best action we can take; “should” in this context means it would be most good for us to do so.

So it is all basically one big self-referential tautology. You can choose to be moral or not; morality is good because real people’s lives, happiness, suffering, etc. are at stake, but you are free to choose to do what is good or not. I choose to do good, in-so-far as I am able, and encourage others to do the same, as the massively positive-sum world that results is best for all, including myself.

I also think enlightened self-interest, by which I mean that helping others and living a life of purpose makes me much happier than I would otherwise be, plays an important role in my worldview. So does open individualism, the view that all consciousness shares the same identity; even an extremely small credence in this view implies that longtermist work is likely the highest-expected-value selfish action. When you add The Proximity Principle, for me, selfishness and altruism are largely convergent and capable of integration — not to say there aren’t sometimes extremely difficult trade-offs.

Thanks for all your questions! This brought out a lot of good points I would like to add to a more thorough piece later. Let me know if what I said made sense, and I’d be very curious to hear your impressions of what I said.

Well, your formulation, as stated, would lead me to multiple conceptual difficulties, but the most practical one for me is how to conceptualize altruism. How do you know when you are being altruistic?

When you throw in the concepts of "enlightened self-interest" and "open individualism" to justify longtermism, it appears as though you have partial credence in fundamental beliefs that support your choice of ethical system. But you claim that there is only one correct ethical system. Would you clarify for me?

You wrote:

  • "On your other comments, I do not tend to think in terms of character traits much except that I may seek to develop good character traits in myself and support them in others, in order to achieve good consequences."
  • "If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this."
  • "I choose to do good, in-so-far as I am able and encourage others to do the same as the massively positive-sum world that results is best for all, including myself. "

From how you write, you seem like a kind, well-meaning, and thoughtful person. Your efforts to develop good character traits seem to be paying off for you.

You wrote:

  • "If I can be ethically but not personally obliged, then what makes an ethical action obligatory? ANSWER: I do not understand your distinction between personally obliged and ethically obliged, could you clarify? I take a moral realism stance in which there is only one correct ethical system, whether or not we know what it is. If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this."

To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged.

If you hold yourself ethically accountable but not personally accountable, then you have ethical obligations but not personal obligations, and your ethical system becomes an accounting system, rather than a set of rules to follow. Your actions are weighted separately in terms of their altruistic and personal consequences, with different weights (or scores) for each, and you make decisions however you do. At some point(s) in time, you check the balance sheet of your altruistic and personal consequences and whatever you believe it shows, to decide whether you are in fact a moral person.

I think it's a mistake to discuss your selfish interests as being in service to your altruistic ones. It's a factual error and logically incoherent besides. You have actual selfish interests that you serve that are not in service to your ethics. Furthermore, selfish interests are in fact orthogonal to altruistic interests. You can serve either or both or neither through the consequences of your actions.

Interesting points.

Yes, as I said, for me altruism and selfishness have some convergence. I try to always act altruistically, and enlightened self-interest and open individualism are tools (which I actually do think have some truth to them) that help me tame the selfish part of myself that would otherwise demand much more. They may also be useful in persuading people to be more altruistic.

While I think there is likely only one correct ethical system, I think it is most likely consequentialist, and therefore these conceptual tools are useful for helping me and others to, in practical terms, actually achieve those ethical goals.

I suppose I see it as somewhat of an inner psychological battle: I try to be as altruistic as possible, but I am a weak and imperfect human who is not able to be perfectly altruistic, and I often end up acting selfishly.

In addition to this, if I fail to account for proximity I actually become less effective, because not sufficiently meeting my own needs makes me less effective in the future; hence some degree of what on the surface appears selfish is actually the best thing I can do altruistically.

You say:

“To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged.”

In such a situation the ethical thing to do is whatever achieves the most good. If taking care of yourself right now means that in the future you will be 10% more efficient, and it only takes up 5% of your time or other resources, then the best thing is to help yourself now so that you can better help others in the future.
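As a toy illustration of that trade-off (the specific numbers are purely hypothetical):

```latex
(1 - 0.05) \times (1 + 0.10) = 0.95 \times 1.10 = 1.045
```

That is, giving up 5% of your time for a 10% efficiency gain on the rest still leaves you roughly 4.5% ahead in total output for others.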

Sorry if I wasn’t clear! I don’t understand what you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely define the term and how it differs from being ethically obliged? As I said, I don’t really think in terms of obligations, and so maybe this is why I don’t understand it.

I would say ethics could be seen as an accounting system or a set of guidelines of how to live. Maybe you could say ex ante ethics are guidelines, and ex post they are an accounting system.

When I am psychologically able, I will hopefully use ethics as guidelines. If the accounts show that I or others are consistently failing to do good, then that is an indication that part of the ethical system (or something else about how we do good) is broken and in need of repair, so this accounting is useful for the practical project of ethical behavior.

Your last paragraph:

“I think it's a mistake to discuss your selfish interests as being in service to your altruistic ones. It's a factual error and logically incoherent besides. You have actual selfish interests that you serve that are not in service to your ethics. Furthermore, selfish interests are in fact orthogonal to altruistic interests. You can serve either or both or neither through the consequences of your actions.”

Hm, I’m not sure this is accurate. I read a book that mentioned studies showing happiness and personal effectiveness seem to be correlated. I can’t see how not meeting your basic needs allows you to altruistically do more good, or why this wouldn’t extend to optimizing your productivity, which likely includes having relatively high levels of personal physical, mental, and emotional health. No doubt, you shouldn’t spend 100% of your resources maximizing these things, but I think effectiveness requires a relatively high level of personal well-being. This seems empirical and testable: either high levels of well-being cause greater levels of altruistic success or they don’t. You could believe all of this within a purely altruistic framing, without ever introducing selfishness — indeed, this is why I use the term proximity, to distinguish it from ordinary selfishness. You could say proximity is altruistically strategic selfishness. But I don’t really think the terminology is as important as the empirical claim that taking care of yourself helps you help others more effectively.
 

You wrote:

"Sorry if I wasn’t clear! I don’t understand what do you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely defined the term and how it differs from ethically obliged? As I said, I don’t really think in terms of obligations, and so maybe this is why I don’t understand it."

OK, a literal interpretation could work for you. So, while your ethics might oblige you to an action X, you yourself are not personally obliged to perform action X. Why are you not personally obliged? Because of how you consider your ethics. Your ethics are subject to limitations due to self-care, enlightened self-interest, or the proximity principle. You also use them as guidelines, is that right? Your ethics, as you describe them, are not a literal description of how you live or a do-or-die set of rules. Instead, they're more like a perspective, maybe a valuable one incorporating information about how to get along in the world, or how to treat people better, but only a description of what actions you can take in terms of their consequences. You then go on to choose actions however you do and can evaluate your actions from your ethical perspective at any time. I understand that you do not directly say this but it is what I conclude based on what you have written. Your ethics as rules for action appear to me to be aspirational.

I wouldn't choose consequentialism as an aspirational ethic. I have not shared my ethical rules or heuristics on this forum for a reason. They are somewhat opaque to me. That said, I do follow a lot of personal rules, simple ones, and they align with what you would typically expect from a good person in my current circumstances. But am I a consequentialist? No, but a consequentialist perspective is informative about consequences of my actions, and those concern me in general, whatever my goals.

In a submission to the Red Team Contest a few months back, I wrote up my thoughts on beliefs and altruistic decision-making.

I also wrote up some quick thoughts about longtermism in longtermists should self-efface.

I've seen several good posts here about longtermism, and one that caught my eye is A Case Against Strong Longtermism.

In case you're wondering, I am not a strong longtermist.

Thanks for the discussion, let me know your feedback and comments on the links I shared if you like.

The Proximity Principle

Excerpt from my upcoming book, "Ways To Save The World"

As was noted earlier, this does not mean we should necessarily pursue absolute, perfect selflessness, if such a thing is even possible. We might conceive that this would include such activities as not taking any medicine so that those who are sicker can have it, giving all of your food away to those who are starving rather than eating it yourself, and never sleeping but instead continually working for those who are less fortunate than yourself. Obviously, all of these would quickly lead to death, and so would ultimately not do much good in the grand scheme of things.

Instead, the most effective way to be selfless is to be intelligently selfish in a way that enables you to maximize your own capacity to serve others. This I call the “proximity principle,” which says that the most efficient way to do good for others is to prioritize that which is closest in proximity: both because, by helping yourself and those closest to you, you are most likely to maximize your power to be effective in the future, and because, all else being equal, it is simply easiest and most efficient to help those who are closest to you in time and space, rather than those who are on the opposite side of the world or billions of years in the future.

For example, it might be best to educate yourself, become financially independent, have family and friends who meet your emotional needs, and take care of yourself psychologically, giving these much higher priority than helping others, at least insofar as is practicable, since if you are falling apart in one or several of these ways, it might be much more difficult for you to be effective in helping others. Once you have your own house in order, you can be exponentially more effective in helping the world at large and creating long-term impact.

 

For reference, I originally created this short-form so I could reference it in an estimate of lives saved by longtermists. Here is the original context in which I mention it:

The Proximity Principle

The take-away from this post is not that you should agonize over the trillions of trillions of trillions of men, women, and children you are thoughtlessly murdering each time you splurge on a Starbucks pumpkin spice latte or watch cat videos on TikTok — or in any way whatsoever commit the ethical sin of making non-optimal use of your time.

The point of this post is not to create a longtermist “dead children currency” analogue. Instead, it is meant to be motivating background information, giving us all the more reason to be thoughtful about our self-care and productivity.

I call the principle of strategically caring for yourself and those closest to you “The Proximity Principle,” something I discovered after several failed attempts to be perfectly purely altruistic. It roughly states that:

  1. It is easiest to affect those closest to you (in time, space, and relatedness)
  2. Taking care of yourself and those closest to you is high leverage for multiplying your own effectiveness in the future

To account for proximity, perhaps in addition to conversion rates for time and money into lives saved, we also need conversion rates for time and money into increases in personal productivity, personal health & wellbeing, mental health, self-development, personal relationships, and EA community culture.

These things may be hard to quantify, but probably less hard than we think, and seem like fruitful research directions for social-science oriented EAs. I think these areas are highly valuable relative to time and money, even if only valued instrumentally.

In general, for those who feel compelled to over-work to a point that feels unhealthy, have had a tendency to burn out in the past, or think this may be a problem for them, I would suggest erring on the side of over-compensating.

This means finding self-care activities that make you feel happy, energized, refreshed, and a sense of existential hope — and, furthermore, doing these activities regularly, more than the minimum you feel you need to in order to work optimally.

I like to think of this as keeping my tank nearly full, rather than perpetually halfway full or nearly empty. From a systems theory perspective, you are creating a continuous inflow and keeping your energy stocks high, rather than waiting until they are fully depleted and panic mode alerts you to refill.

For me, daily meditation, daily exercise, healthy diet, and good sleep habits are most essential. But each person is different, so find what works for you.

Remember, if you want to change the future, you need to be at your best. You are your most valuable asset. Invest in yourself.

Highly Pessimistic to Pessimistic-Moderate Estimates of Lives Saved by X-Risk Work

This short-form supplements a post estimating how many lives x-risk work saves on average.

Following are four alternative pessimistic scenarios, two of which are highly pessimistic, and two of which fall between pessimistic and moderate.

Except where stated, each has the same assumptions as the original pessimistic estimate, and is adjusted from the baseline estimates of 10^16 lives possible and one life saved per hour of work or $100 donated.

  1. It is 100% impossible to prevent existential risk, or it is 100% impossible to accurately predict what will reduce x-risk. In this case, we get an estimate that in expectation, on average, x-risk work may extremely pessimistically have zero positive impact and have the negative impact of wasting resources. I think it is somewhat unreasonable to conclude with absolute certainty that an existential catastrophe is inevitable or unpredictable, but others may disagree.
  2. Humanity lasts as long as the typical mammalian species, ~1 million years. This would lead to three orders of magnitude reduction in expected value from the pessimistic estimate, giving an estimate that over the next 10,000 years, in expectation, on average, x-risk work will very pessimistically save one life for every 1,000 hours of work or every $100,000 donated. *Because humanity goes extinct in a relatively short amount of time in this scenario, x-risk work has not technically sustainably prevented existential risk, but this estimate has the benefit of using other species to give an outside view.
  3. Digital minds are possible, but interstellar travel is impossible. This estimate is highly speculative. My understanding is that Bostrom estimated 15 additional orders of magnitude if digital minds are possible, given that we are able to inhabit other star systems. I have no idea if anything like this holds up if we only inhabit Earth. But if it does, assuming a 1/10 chance digital minds are possible, the possibility of digital minds gives a 14-order-of-magnitude increase from the original pessimistic estimate (see the worked check after this list), so that, over the next 10,000 years, in expectation, on average, x-risk work will moderately pessimistically save approximately one trillion lives per minute of work or per dollar donated.
  4. Interstellar travel is possible, but digital minds are impossible. Nick Bostrom estimates that if emulations are not possible and so humans must remain in biological form, there could be 10^37 biological human lives at 100 years per life, or 21 orders of magnitude greater than the original pessimistic estimate. Assuming a 1/10 chance interstellar travel is possible, this adds 20 orders of magnitude so that, over the next 10,000 years, in expectation, on average, x-risk work will moderately pessimistically save approximately a billion billion (10^18) lives per minute of work or per dollar donated.
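As a rough check of the order-of-magnitude arithmetic in scenarios 3 and 4, using only the figures stated in this short-form (the one-life-per-hour baseline, the stated order-of-magnitude adjustments, and the 1/10 credences):

```latex
% Scenario 3: +15 orders of magnitude for digital minds, minus 1 order for the 1/10 credence
1 \tfrac{\text{life}}{\text{hour}} \times 10^{15} \times 10^{-1}
  = 10^{14} \tfrac{\text{lives}}{\text{hour}}
  \approx 1.7 \times 10^{12} \tfrac{\text{lives}}{\text{minute}} \quad \text{(about a trillion)}

% Scenario 4: 10^{37} possible biological lives vs. the 10^{16} baseline gives +21 orders,
% minus 1 order for the 1/10 credence
1 \tfrac{\text{life}}{\text{hour}} \times 10^{21} \times 10^{-1}
  = 10^{20} \tfrac{\text{lives}}{\text{hour}}
  \approx 1.7 \times 10^{18} \tfrac{\text{lives}}{\text{minute}} \quad \text{(about a billion billion)}
```

The per-dollar figures follow the same arithmetic, starting from the $100-per-life baseline instead of the one-life-per-hour baseline.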

How Happy Could Future People Be?

This short-form is a minimally edited outtake from a piece estimating how many lives x-risk work saves on average. It conservatively estimates how much better future lives might be with pessimistic, moderate, and optimistic assumptions.

A large amount of text is in footnotes because it was highly peripheral to the original post.

Pessimistic

(humans remain on Earth, digital minds are impossible)

We could pessimistically assume that future lives can only be approximately as good as present lives for some unknown reasons concerning the limits of well-being, or well-being in the long-term future.

Moderate

(digital minds are possible)

It seems likely future lives will have significantly greater well-being or happiness than present lives, especially if brain emulations are possible.

If humans become digital people, we can edit our source code such that we are much happier. Additionally, by designing our digital environment we will be better able to achieve our desires, experience pleasure and avoid pain, and have the good things in life that support happiness.

Because this is speculative, let’s conservatively assume future people could be 10 times happier than present people, leading to a one-order-of-magnitude increase in expected value.[1] I will come back to why I think this is conservative, and give a higher estimate, in the optimistic section.
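Concretely, applying the formula in footnote 1 and using, purely for illustration, the 10^16-lives baseline from the previous short-form:

```latex
10 \ (\text{happiness multiplier}) \times 10^{16} \ \text{lives} = 10^{17} \ \text{well-being adjusted lives}
```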

Optimistic

(digital minds are possible)

In the moderate estimate we assumed future lives would have ten times greater well-being or happiness than the average present human life. Yet there isn’t any reason in principle to assume future people couldn’t be many orders of magnitude happier.[2]

In the moments before a seizure, Dostoyevsky reported feeling:

“A happiness unthinkable in the normal state and unimaginable for anyone who hasn’t experienced it… I am then in perfect harmony with myself and the entire universe”

And he wrote this experience into one of his characters, who stated,

“I would give my whole life for this one instant”

Taken literally, if we assume this experience lasted 10 seconds, that would make it over 100 million times better than his average experience.
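Here is the back-of-the-envelope arithmetic behind that figure (the roughly 60-year lifespan is just an assumption for illustration):

```latex
60 \text{ years} \approx 60 \times 365 \times 24 \times 3600 \text{ s} \approx 1.9 \times 10^{9} \text{ s},
\qquad
\frac{1.9 \times 10^{9} \text{ s}}{10 \text{ s}} \approx 2 \times 10^{8}
```

So trading an entire life for those 10 seconds values them at roughly 200 million ordinary seconds, consistent with “over 100 million times better.”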

I can say that I have personally experienced altered states of consciousness which were many orders of magnitude better than my average experience, and far better than even other very good experiences.[3]

Most people have occasional peak experiences that are extremely good when compared with average experience. We may postulate that maximum happiness is a Bostromian utopia of unimaginable ecstasy (maximized hedonic well-being), or a Karnofskian future of maximum choice and freedom, the “‘meta’ option” (maximized preference satisfaction), or perhaps gradients of bliss, so that we can freely navigate choices while also being in unimaginable ecstasy.

In any case, humans are not currently optimized for happiness. We could optimistically estimate that if future minds were optimized for happiness, there could be something like a minimum of one million times more happiness per unit of time than we currently experience, leading to a six-order-of-magnitude increase in expected value.[4]

  1. ^

    Because I am using a utilitarian expected value framework, I will assume that an individual being 10 times happier is equivalent to 10 times as many well-being adjusted lives;

    In other words,

    Average happiness * number of lives = well-being adjusted lives

  2. ^

    I will not give special attention to the impact on suffering in this analysis.

    Although I lean toward prioritarianism (utilitarianism which prioritizes reducing suffering over increasing positive wellbeing), my naive intuition is that it is insufficiently likely there will be a large enough amount of suffering in the far future for there to be much impact on expected value (which was the focus of the piece this short-form was originally embedded in). Admittedly, I have not spent much time studying s-risks and would appreciate other perspectives.

  3. ^

    Additionally, I once briefly dated a woman and remember distinctly thinking to myself multiple times “I would trade 1 minute of kissing this woman for every other kiss, maybe even every other sexual experience I have ever had,” and this still feels accurate. I have repeatedly tried (unsuccessfully) to capture this experience in a poem.

    Assuming this intuition was accurate, if I had previously had an average of 12 minutes of kissing/sexual experience per day over about 2 aggregated years of dating by that point in my life, that 1 minute was about 10,000 times better than my average kissing/sexual experience - and these, in turn, were worth perhaps 100 average minutes in a day (I like kissing and sex quite a lot), making a minute of kissing this woman one million times better than my average experience, or about 3 years of my life. While this seems absurd, I doubt it is off by more than ~two orders of magnitude. 

  4. ^

    Concretely, this means that if the happiest minute of your life was worth 100 days of normal experience, this would be one order of magnitude greater happiness than that.

    Such a state may seem difficult to maintain long-term. But considering we can already induce and sustain profoundly intense states of wellbeing over several hours with certain chemical substances, it seems plausible this could be the case.

    This section is highly speculative, as it deals with aspects of phenomenology that we are not yet able to empirically test. It is possible that optimal experience requires ups and downs, complexity, moving from a worse past to a better future, or can’t be sustained indefinitely.

    That said, this could also be underestimating the real possible happiness by many orders of magnitude. Complete scientific understanding of phenomenology and powerful consciousness engineering techniques could unlock levels of happiness we currently have no way to conceive.
