Comment author: Jeffhe  (EA Profile) 16 March 2018 03:36:34AM *  -1 points [-]

Imagine you have 5 headaches, each 1 minute long, occurring just 10 seconds apart. From imagining this, you will have an imagined sense of what it's like to go through those 5 headaches.

And, of course, you can imagine yourself in the shoes of 5 different friends, each of whom, we can suppose, has a single 1-minute-long headache of the same kind as above. From imagining this, you will again have an imagined sense of what it's like to go through 5 headaches.

If that's what you mean when you say that "the clear experiential sense is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people", then I agree.

But when you imagine yourself in the shoes of those 5 friends, what is going on is that one subject-of-experience (i.e. you) takes on the independent what-it's-likes (i.e. experiences) associated with your 5 friends, and IN DOING SO, LINKS THOSE what-it's-likes - which in reality would be experientially independent of each other - TOGETHER IN YOU. So ultimately, when you imagine yourself in the shoes of your 5 friends, you are, in effect, imagining what it's like to go through 5 headaches. But in reality, there would be no such what-it's-like among your 5 friends. The only what-it's-like present would be the what-it's-like-of-going-through-1-headache, which each of your friends would experience. No one would experience the what-it's-like of going through 5 headaches. But that is what is needed for it to be the case that 5 such headaches can be worse than a headache that is worse than any one of them.

Please refer to my conversation with Michael_S for more info.

Comment author: Telofy  (EA Profile) 16 March 2018 10:43:32PM 1 point [-]

Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the opposite one – not that there is something linking the experiences of the five people, but that there is very little – and nothing that seems very morally relevant – linking the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones.

In your first reply to Michael, you indicate that the third one, memories, is important to you, but I don’t feel that memories in themselves confer moral importance in this sense. What you mean, though, may be that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that, in my case especially with itches, and I think I’ve read that some estimates of DALY disability weights also take that into account.

But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.)

So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life but no connection and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)

Comment author: Jeffhe  (EA Profile) 13 March 2018 10:43:44PM *  0 points [-]

Hi Telofy,

Thanks for your comment, and quoting oneself is always cool (haha).

In response, if I understand you correctly, you are saying that if I don't prefer saving many similar, though distinct, people each from a certain pain over saving another person from the same pain, then I have no reason to prefer saving myself from many of those pains over saving myself from just one of them.

I certainly wouldn't agree with that. Were I to suffer many pains, I (just me) would suffer all of them, in such a way that there is a very clear sense in which they, cumulatively, are worse to endure than just one of them. Thus, I find intra-personal aggregation of pains intelligible. I mean, when an old man reminiscing about his past says to us, "The single worst pain I had was that one time when I got shot in the foot, but if you asked me whether I'd go through that again or all those damned headaches I had over my life, I would certainly ask for the bullet," we get it. Anyway, I think the clear sense I mentioned supports the intra-personal aggregation of pains, and if pains intra-personally aggregate, then more instances of the same pain will be worse than just one instance, so I have reason to prefer saving myself from more of them.

However, in the case of the many vs one other (call him "C"), the pains are spread across distinct people rather than aggregated in one person, so they cannot in the same sense be worse than the pain that C goes through. And so even if I show no preference in this case, I still have reason to show preference in the former case.

Comment author: Telofy  (EA Profile) 15 March 2018 12:32:26PM 0 points [-]

Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people.

It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, more collectivist culture leading to more strongly discounted aggregation and more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an Eastern European country, hence that hypothesis.


Current Thinking on Prioritization 2018

Summary: This article documents my current thoughts on how to make the most out of my experiment with earning to give. It draws together a number of texts by other authors that have influenced my thinking and adds some more ideas of my own for a bundle of heuristics that... Read More
Comment author: Telofy  (EA Profile) 13 March 2018 07:20:08PM 2 points [-]

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than one of them suffering, then you would also not care more about them when they’re your own person moments, right? You’re extremely similar to your version from a month or several months ago, probably more similar than you are to any other person in the whole world. So your suffering for just a moment would be no better than suffering for an hour, a day, a month, or any longer multiple of that moment. And if you’ve been happy for just a moment sufficiently recently, then close to nothing more can be done for you for a long time.

I imagine that fundamental things like that are up to the subjectivity of moral feelings – so close to the axioms, it’s hard to argue with even more fundamental axioms. But I for one have trouble empathizing with a nonaggregative axiology at least.

Comment author: MichaelPlant 02 February 2018 12:09:09PM *  4 points [-]

Hello Lukas,

I'm struggling to wrap my head around the difference between upside and downside focused morality. I tried to read the rest of the document, but I kept thinking "hold on, I don't understand the original motivation" and going back to the start.

I’m using the term downside-focused to refer to value systems that in practice (given what we know about the world) primarily recommend working on interventions that make bad things less likely.

If I understand it, the project is something like "how do your priorities differ if you focus on reducing bad things over promoting good things?", but I don't see how you can go on to draw any conclusions about that, because downside (as well as upside) morality covers so many different things.

Here are 4 different ways you might come to the conclusion you should work on making bad things less likely. Quoting Ord:

"Absolute Negative Utilitarianism (NU). Only suffering counts.

Lexical NU. Suffering and happiness both count, but no amount of happiness (regardless of how great) can outweigh any amount of suffering (no matter how small).

Lexical Threshold NU. Suffering and happiness both count, but there is some amount of suffering that no amount of happiness can outweigh.

Weak NU. Suffering and happiness both count, but suffering counts more. There is an exchange rate between suffering and happiness or perhaps some nonlinear function which shows how much happiness would be required to outweigh any given amount of suffering."

This would lead you to give more weight to suffering at the theoretical level. Or, fifth, you could be a classical utilitarian - happiness and suffering count equally - and decide, for practical reasons, to focus on reducing suffering.
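To make the differences between these positions concrete, here is a toy formalization (my own illustration, not anything from Ord's text or Lukas's article; the function names, the weight k, and the scoring of outcomes by total happiness h and total suffering s are all placeholder assumptions):

```python
# Toy value functions for the listed views. An outcome is scored from
# its total happiness h and total suffering s (made-up units).

def absolute_nu(h, s):
    # Absolute NU: only suffering counts.
    return -s

def lexical_nu(h, s):
    # Lexical NU: any suffering dominates; happiness only breaks ties.
    # Returning a tuple makes Python compare lexicographically.
    return (-s, h)

def weak_nu(h, s, k=2.0):
    # Weak NU: a linear exchange rate, suffering weighted k > 1 times
    # as heavily as happiness. (Ord also allows a nonlinear function.)
    return h - k * s

def classical_u(h, s):
    # Classical utilitarianism: happiness and suffering count equally.
    return h - s

# Outcome A has more happiness but also more suffering than outcome B.
A, B = (10, 3), (5, 1)
print(weak_nu(*A), weak_nu(*B))          # 4.0 3.0 -> weak NU prefers A
print(absolute_nu(*A), absolute_nu(*B))  # -3 -1   -> absolute NU prefers B
```

The point of the sketch is only that the same pair of outcomes gets ranked differently depending on the view, even though several of the views would all be filed under "downside-focused."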

As I see it, the problem is that all of them will and do recommend different priorities. A lexical or absolute NU should, perhaps, really be trying to blow up the world. Weak NU and classical U will be interested in promoting happiness too and might want humanity to survive and conquer the stars. It doesn't seem useful or possible to conduct analysis along the lines of "this is what you should do if you're more interested in reducing bad things", because the views within downside-focused morality won't agree on what you should do or why you should do it.

More broadly, this division seems unhelpful. Suppose we have four people in a room: a lexical NU, a very weak NU, a classical U, and a lexical positive utilitarian (any happiness outweighs all suffering). It seems like, on your view, the first two should be downside focused and the latter two upside focused. However, it could be that both the classical U and the very weak NU agree that the best way to do good is to focus on suffering reduction, so they're downside. Or they could agree the best way is happiness promotion, so they're upside. In fact, the weak NU and classical U have much more in common with each other - they will nearly always agree on the value of states of affairs - than either of them does with the lexical NU or lexical PU. Hence they should really stick together, and grouping views by whether, practically speaking, they focus on producing good or reducing bad doesn't seem to be a category that helps our analysis.

It might be useful to hear you say why you think this is a useful distinction.

Comment author: Telofy  (EA Profile) 07 February 2018 02:02:59PM 2 points [-]
Comment author: Paul_Christiano 05 January 2018 09:58:00AM 1 point [-]

It's on my blog. I don't think the scheme works, and in general it seems any scheme introduces incentives to not look like a beneficiary. If I were to do this now, I would just run a prediction market on the total # of donations, have the match success level go from 50% to 100% over the spread, and use a small fraction of proceeds to place N buy and sell orders against the final book.
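My reading of the "match success level goes from 50% to 100% over the spread" part, sketched as a toy function (the linear interpolation and the parameter names are my assumptions, not Paul's specification):

```python
def match_level(donations, low, high):
    # Interpolate the match success level from 50% at or below `low`
    # to 100% at or above `high`, where (low, high) stand in for the
    # bounds of the prediction market's spread on the total number
    # of donations. Linearity between the bounds is assumed here.
    if donations <= low:
        return 0.5
    if donations >= high:
        return 1.0
    return 0.5 + 0.5 * (donations - low) / (high - low)

print(match_level(150, 100, 200))  # midway through the spread -> 0.75
```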

Comment author: Telofy  (EA Profile) 12 January 2018 12:07:41PM 0 points [-]


Comment author: VinceB 31 December 2017 08:24:13PM 0 points [-]

a shot at banning factory farming

What would this entail? Should we start a new thread for this? Sounds great, but small and medium farmers could get their butts whipped if it's implemented too broadly, no?

Comment author: Telofy  (EA Profile) 12 January 2018 12:01:32PM 1 point [-]

Sophie and Meret will know more, but from what I’ve heard, they’re pretty much on board with it because it will shift demand toward them. I can point Sophie to this thread if you’d like a more detailed or reliable answer than mine. ;-)

Comment author: Telofy  (EA Profile) 04 January 2018 08:16:45AM 0 points [-]

What happened to this post? Is there another place where it is being discussed? It sounds very interesting. Thanks!

Comment author: Telofy  (EA Profile) 12 December 2017 12:16:32PM *  11 points [-]

That’s an awesome selection! I’m also planning to support WASR in 2018 and perhaps longer, and I’m about to donate CHF 5k from my 2018 budget (for tax reasons) to their fundraiser.

I’m particularly optimistic about the field of welfare biology because it can draw on enormous resources in terms of institutions, biology and ecology research, and scientific methodology to generate breakthroughs in an area that has been greatly neglected so far. The situation may be similar to that of medicine in its early days (the 1800s or so), when the foundations for systematic inquiry into health had finally been laid and then just needed to be applied to generate invaluable new insights.

Surely many animals in the wild have net positive lives, but so do many humans around the world. I think it’s valuable to research how we can improve the well-being of humans who suffer – perhaps even to the point of having net negative lives, but not necessarily – and so I value the same even more for wild animals who are so much more numerous and still live under worse conditions at much higher rates.

There’s also a Sentience Politics initiative going on in Switzerland (automatic translation) that has a shot at banning factory farming in the whole country via a popular vote. I see this in the same reference class as, for example, the ban on battery cages in California, though on a smaller scale because of the lower population size. Import of factory-farmed products may be more difficult than in the case of California, though, which is a big plus for the initiative. And they’re also far short of their fundraising goals.

Comment author: Telofy  (EA Profile) 03 December 2017 08:17:20PM 0 points [-]

I'm impressed by how perceptive you are even in such an unaccustomed environment!
