Jul 13 2016 · 1 min read


In many ways, most EAs are extraordinarily smart, but in one way EAs are naive. The best-known EAs have stated that the goal of EA is to minimize suffering. I can't explain this well at all, but I'm certain that is not the cause or the effect of altruism as I understand it.

Consider The Giver. Consider a world where everyone is high on opiates all the time. There is no suffering and no beauty. Would you disturb it?

Considering this, my immediate reaction is to restate the goal of EA as maximizing the difference between happiness and suffering. This still seems naive. Happiness and suffering are so interwoven that I'm not sure it can be done. The disappointment of being rejected by a girl may help you come to terms with reality. The empty feeling in the pit of your stomach when your fantasy world crumbles motivates you to find something more fulfilling.

It's difficult to say. Maybe one of you can restate it more plainly. This isn't an argument against EA. It's an argument that, while we probably do agree on which actions are altruistic, the criteria used to explain them are oversimplified.

I don't know if there is much to be gained by having criteria to explain altruism, but I am tired of "reducing suffering." I prefer to think of it as doing what I can to positively impact the world--and using EA to maximize that positive impact where possible. Because altruism isn't always as simple as where to send your money.

Comments

As a ‘well-known’ EA, I would say that you can reasonably say that EA has one of two goals: a) to ‘do the most good’ (leaving what ‘goodness’ is undefined); b) to promote the wellbeing of all (accepting that EA is about altruism in that it’s always ultimately about the lives of sentient creatures, but not coming down on a specific view of what wellbeing consists in). I prefer the latter definition (for various reasons; I think it’s a more honest representation of how EAs behave and what they believe), though I think that, as the term is currently used, either is reasonable. Although reducing suffering is an important component of EA under either framing, under neither is the goal simply to minimize suffering, and I don’t think that Peter Singer, Toby Ord or Holden Karnofsky (etc.) would object to me saying that they don’t think of this as the only goal either.

Thanks Will. I also prefer the latter definition.

I'm hopeful that the few times I've heard "minimize/reduce suffering" were out of context or misconstrued (or were said by lesser-known EAs than I thought).

Hi Will. I would be very interested to hear the various reasons you have for preferring the latter definition. I prefer the first of the two definitions that you give, primarily because it makes fewer assumptions about what it means to do good, and I have a strong intuition that EA benefits by being open to all forms of doing good.

The most well known EAs have stated that the goal of EA is to minimize suffering.

I don't think this is true. There's a segment of negative utilitarians in the community (particularly in Switzerland?), but I think mainstream EAs generally value well-being as well as the avoidance of suffering. See Toby Ord's Why I'm not a negative utilitarian.

Also, it's not clear how this translates into altruism being "as simple as where to send your money." Regardless of whether you're trying to promote flourishing or minimizing suffering, direct work or advocacy are good options for many people.

Thanks Julia.

I'm still not satisfied with the addition of 'maximize happiness.' I suspect altruism is more than even that--though the word 'well-being' is a step towards compromise.

I can't speak for other EAs, but I suspect altruists generally have more in common with activists than they do with philanthropists. The former also rejects social norms and seeks to change the world, while the latter is generally accepted within their social circles because they have so much excessive wealth.

Activists are motivated to change the world based on things they dislike about it, and I suspect the same is true for (some) altruists. I am grateful to know that many EAs come from a more positive place, but this was not my experience. It sounds negative, and I prefer Will's definition, but I ended up in EA seeking to make the world more fair. This is only slightly less subjective than being concerned with the "well-being" of sentient creatures, and far less appealing. Maybe you could expand on it a bit more?

Many altruists are activists (and vice versa) and many altruists are philanthropists (and vice versa) and some activists are philanthropists. These are not mutually exclusive categories. I also disagree with several claims.

The former also rejects social norms and seeks to change the world, while the latter is generally accepted within their social circles because they have so much excessive wealth.

I think most philanthropists want to change the world (for the better). I think activists vary a lot in how much they accept and reject social norms, and which ones they accept and reject.

I didn't realize it before, but this is actually what's most interesting to me now that multiple people have challenged the OP. In other words, I agree with the point a few people have made that few active EAs would define EA so narrowly. I must have misconstrued something somewhere.

Admittedly, I know very little about philanthropists, but I imagine they want to change the world only to a modest degree. Their intentions are pure, but their motivation is minimal. This is a guess, but Warren Buffett stated that the opportunity cost of spending his money elsewhere is extremely low. Generally, I believe that activists tend to be more impassioned.

I identify as an EA, and I certainly relate to activists more than philanthropists, and I had thought that EA marketed itself towards these sorts of people. Regardless, I definitely agree that there is overlap between all three groups.

I am really curious what you think about altruistic motivations v. activist motivations. I know we've talked about it before, and I expect you have a different view.

Also, I'm unsure what you meant in the last paragraph. I think we were both saying the same thing. Maybe you missed the 'n't'?

I mean that the movement isn't claiming that altruism is as simple as where to send your money (though I think we sometimes wrongly simplify the message to be only about donation). Saying it's not that simple implies that someone else said it was that simple.

I'm surprised how personally you took that. I was just speaking generally, though you did say it better.

Consider The Giver. Consider a world where everyone was high on opiates all the time. There is no suffering or beauty. Would you disturb it?

I think generalizing from these examples (and especially from fictional examples in general) is dangerous for a few reasons.

Fiction is not designed to be maximally truth-revealing. Its function is as art and entertainment, to move the audience, persuade them, woo them, etc. Doing this can and often does involve revealing important truths, but doesn't necessarily. Sometimes, fiction is effective because it affirms cultural beliefs/mores especially well (which makes it seem very true and noble). But that means it's often (though certainly not always) a reflection of its time (it's often easy, for example, to see how fiction from the past affirmed now-outdated beliefs about gender and race). So messages in fiction are not always true.

Fiction has a lot of qualities that bias the audience in specific useful ways that don't relate to truth. For example, it's often beautiful, high-status, and designed to play on emotions. That means that relative to a similar non-fictional but true thing, it may seem more convincing, even when the reasoning is equally or less sound. So messages in fiction are especially powerful.

For example, I think The Giver reflects the predominant (but implicit) belief of our time and culture: that intense happiness is necessarily linked to suffering, and that attempts to build utopias generally fail in obvious ways by arbitrarily excluding our most important values. If I recall correctly, the folks in The Giver can't love. Love is one of our society's highest values; not loving is a clear sign they've gone wrong. But the story doesn't explain why love had to be eliminated to create peace; it just establishes a connection in the readers' minds without providing any real evidence.

Consider further that even if extreme bad weren't a necessary cost of extreme good, we would probably still not have a lot of fiction reflecting that truth. This is simply because fiction about everything going exceedingly well for extended periods of time would likely be very boring for the reader (however wonderful for the characters, if they experienced it). People would not read that fiction. Perhaps if you made them do so, they would project their own boredom onto the story and say the story is bad because it bored them. This is a fine policy for picking your entertainment, but a dangerous habit to establish if you're going to be deciding real-world policy on others' behalf.

I agree that it's dangerous to generalize from fictional evidence, BUT I think it's important not to fall into the opposite extreme, which I will now explain...

Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare "This is what morality is!" and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).

The danger, which I think is the opposite danger to the one you identified, is that people "bite the bullet" and say "I'm sticking with my principles. I guess what seems abhorrent isn't abhorrent after all; I guess what seems good isn't good after all."

In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak. Even if this makes our total set of principles more complicated.

In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it's almost certainly not simple enough to fit on a t-shirt.

A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don't hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.

[Edit: Oh, and if you look at what the OP was doing with the Giver example, it wasn't generalizing based on fictional evidence, it was rejecting a generalization.]

I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.

Scientists have shown, for instance, that for many people, their intuitive rejection of entering the Experience Machine is due to the status quo bias. If people's current lives were being lived inside an Experience Machine, 50% of people would want to stay in the Machine even if they could instead live the lifestyle of a multi-millionaire in Monaco. Similarly, many people's intuitive rejection of the Repugnant Conclusion could be due to scope insensitivity.

And revising our principles to accommodate the new evidence may lead to inconsistencies in our principles. Also, if you're a moral realist, it almost never makes sense to change your principles if you believe that your principles are true.

I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they've usually thought about the situation more carefully than people who just go with their intuition. I'm not saying people should just go with their intuition.

I'm saying that we don't have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn't lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won't work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts--this bias is not irrational.)

Makes sense. Ethics, like spirituality, seems far too complicated to have a simple set of rules.

You see the same pattern in A Clockwork Orange. Why does making Alex not a sadistic murderer necessitate destroying his love of music? (Music is another of our highest values, and so destroying it is a lazy way to signal that something is very bad.) There was no actual reason that makes sense in the story or in the real world; that was just an arbitrary choice by the author to avoid the hard work of actually trying to demonstrate a connection between the two things.

Now people can say "but look at Clockwork Orange!" as if that provided evidence of anything, except that people will tolerate a hell of a lot of silliness when it's in line with their preexisting beliefs and ethics.

I had fun talking with you, so I googled your username. :O

Thank you for all the inspirational work you do for EA! You're a real-life superhero! I feel like a little kid meeting Batman. I can't believe you took the time to talk to me!

That's deeply kind of you to say, and the most uplifting thing I've heard in a while. Thank you very much.

Touché. I concede, but I just want to reiterate that fiction "can and often does involve revealing important truths," so that I am not haunted by the ghost of Joseph Campbell.

Happiness and suffering are so interwoven

No they're not. I don't understand why people always say this. You can be happy without suffering. On "the disappointment from being rejected by a girl may help you come to terms with reality"--the only value of the rejection is the knowledge you gain, not the unpleasant feeling. The suffering has no inherent value, only instrumental value.

If animal brains are biologically incapable of feeling happiness without also feeling suffering (which I doubt), we can just modify brains so they don't work that way. (Obviously this is a long term goal since we can't do it right now.)

You (and the 5 people who agreed) are blowing my mind right now.

Based on the last paragraph, it sounds like you would support a world full of opiate users--provided there was a sustainable supply of opiates.

The first paragraph is what's blowing my mind though. When I was a baby, I'm pretty sure I would have told you that a room with toys and sweets would maximize my happiness. I guess you could argue that I'd eventually find out that it would not sustain my long term happiness, but I really do think some amount of suffering ensures happiness in the future. Perhaps this is overly simple, but I'm sure you have fasted at some point (intentionally or not) and that you greatly appreciated your next meal as a result.

Lastly, you separate knowledge and feelings from suffering, but I'm not sure this can be done. My parents told me not to do X because it would hurt, but I did not learn until I experienced X firsthand.

I'm amazed that so many EAs apparently think this way. I don't want to be mean, but I'm curious what altruistic actions you have taken in your life. Really looking forward to your reply.

I think your argument is actually two: 1) it is not obvious how to maximize happiness, and some obvious-seeming strategies to maximize happiness will not in fact do so; 2) you shouldn't maximize happiness.

(1) is true, I think most EAs agree with it, most people in general agree with it, I agree with it, and it's pretty unrelated to (2). It means maximizing happiness might be difficult, but says nothing about whether it's theoretically the best thing to do.

Relatedly, I think a lot of EAs agree that to maximize happiness we must sometimes incur some suffering. To obtain good things, we must endure some bad. Not realizing that, and always avoiding suffering, would indeed have bad consequences. But the fact that this is true, and important, says nothing about whether it is good. It is the case now that eating the food I like most would make me sick, but that doesn't tell me whether I should modify myself to enjoy healthier foods more, if I were able to do so.

Put differently, is the fact that we must endure suffering to get happiness sometimes good in itself, or is it an inconvenient truth we should (remember, but) change, if possible? That's a hard question, and I think it's easy to slip into the trap of telling people they are ignoring a fact about the world to avoid hard ethical questions about whether the world can and should be changed.

I agree that points 1 and 2 are unrelated, but I think most people outside EA would agree that a universe of happy bricks is bad. (As I argued in a previous post, it's pretty indistinguishable from a universe of paperclips.) This is one problem that I (and possibly others) have with EA.

I second this! I'm one of the many people who think that maximizing happiness would be terrible. (I mean, there would be worse things you could do, but compared to what a normal, decent person would do, it's terrible.)

The reason is simple: when you maximize something, by definition that means being willing to sacrifice everything else for the sake of that thing. Depending on the situation you are in, you might not need to sacrifice anything else; in fact, depending on the situation, maximizing that one thing might lead to lots of other things as a bonus--but in principle, if you are maximizing something, then you are willing to sacrifice everything else for the sake of it. Justice. Beauty. Fairness. Equality. Friendship. Art. Wisdom. Knowledge. Adventure. The list goes on and on. If maximizing happiness required sacrificing all of those things, such that the world contained none of them, would you still think it was the right thing to do? I hope not.

(Moreover, based on the laws of physics as we currently understand them, maximizing happiness WILL require us to sacrifice all of the things mentioned above, except possibly Wisdom and Knowledge, and even they will be concentrated in one being or kind of being.)

This is a problem with utilitarianism, not EA, but EA is currently dominated by utilitarians.

I suspect that happiness and well-being are uncorrelated. Just a guess. I am biased as I believe I have grown as a result of changes which were the result of suffering. Your point is valid, though: if we could control our environment, would altruists seek to create an opiate-type effect on all people? I guess it's a question that doesn't need an answer anytime soon.

I suspect that happiness and well-being are uncorrelated.

How are you defining wellbeing such that it's uncorrelated with happiness?

I am biased as I believe I have grown as a result of changes which were the result of suffering.

Perhaps you misunderstand me. I believe you. I think that probably every human and most animals have, at some point, learned something useful from an experience that involved suffering. I have, you have, all EAs have, everyone has. Negative subjective wellbeing arising from maladaptive behavior is evolutionarily useful. Natural selection favored those that responded to negative experiences, and did so by learning.

I just think it's sad and shitty that the world is that way. I would very much prefer a world where we could all have equally or more intense and diverse positive experiences without suffering for them. I know that is not possible (or close to it) right now, but I refuse to let the limitations of my capabilities drive me to self-deception.

(my views are my own, not my employer's)

I think I understand your point. Opiates have a lot of negative connotations. Maybe a nervous system whose pleasure sensors are constantly triggered is a better example. I should have said that I am biased by the fact that I live in an environment where this isn't possible. You explained it more simply.

Well-being is very tricky to define, isn't it? I like it a lot more than 'maximizing happiness' or 'minimizing suffering,' which was mostly what inspired the OP. I guess we don't know enough about it to define it perfectly, but as Bill said, do we need to?

I actually would distribute the opiates, simply because doing so would currently be the only way to discontinue intolerably severe levels of suffering being endured daily by humans and non-human animals. If eliminating harms of such magnitude can be done in one-fell-swoop fashion, rather than by stretching out the process for decades, centuries or longer, that's a distinction that EAs should not just handwave. In a hypothetical world where no organism suffers in unspeakable ways, then sure, the opiates shouldn't be viewed as the go-to solution. But it's conditional.

Consider what the unconditional refusal to distribute the opiates entails, in the world as it actually is. It entails tolerating tradeoffs wherein the worst off are left to deal with more of the same (including the same gradual slow-paced reductions) so that others' positive desires can be realized, which wouldn't have been realized had the opiates package-deal kicked in.

Are you more concerned with the latter group's interests because the latter group makes up (arguably) a larger segment of the total population? Keep in mind that they are (we are) not enduring anything remotely close to famine & the like. If the shoe were on the other foot and we were in the worst-off category, we'd want the torture-level harms ended for us even if it could only be ended by way of opiates for all. That's how potent the suffering of the worst off is. Until this changes, I'm not prepared to approve any "Desires > Aversions" tradeoff and I don't think the average EA should be either.

Of course, this doesn't mean that crude hedonism should be viewed as the appropriate theory of wellbeing for humans. But that's because you can still get "Aversions > Desires" type priorities under non-hedonic preferentism.

Have you read The Giver? This is exactly the case it makes. I tend to agree with the main character. I would rather have the beauty AND suffering as cause and effect than a world full of nothing but happiness. I'm not sure the latter is possible, but it also sounds incredibly depressing. Then again, the author was obviously biased when writing the story.

"I would rather have the beauty AND suffering as cause and effect"

If you're interested, here's a video that makes a strong case for why preserving the package-deal is an unconscionable view in a world like the one we find ourselves in, where nothing is guaranteed and where no limitations exist on the magnitudes of suffering: https://www.youtube.com/watch?v=RyA_eF7W02s

If you had endured any of the video's "Warning: Graphic Content" bits that other individuals endure routinely, I somehow doubt that you'd be as taken in by the lessons on display in 'The Giver'.

Ideally, let's say that you, as an individual, get to forward-program your own misery-to-happiness ratio over the course of your life, ensuring that some suffering would still exist in the universe (as per your preference). If this were possible to do, would you still think it necessary to program other individuals' ratios? If everyone else picked total non-stop bliss for themselves, do you think it's morally appropriate to forcefully alter their preferences, because 'The Giver' has a certain (non-moral) charm to it?

Interesting. I watched 14 minutes of the video, and I wonder if it's possible to separate the two at all. Then again, the imaginary, opiate-using world seems to do that. It's sort of like Brave New World, isn't it? Some people choose the charm of the package deal and others are content with their controlled environment. I guess you and Claire make valid points.

The OP itself is confusing, but I agree that EA is very focused on a narrow interpretation of utilitarianism. I used to think that EA should change this, but then I realized that I was fighting a losing battle. There's nothing inherently valuable about the name "effective altruism". It's whatever people define it to be. When I stopped thinking of myself as part of this community, it was a great weight off my shoulders.

The thing that rubs me the wrong way is that it feels like a motte-and-bailey. "Effective altruism" is vague and appears self-evidently good, but in reality EAs are pushing for a very specific agenda and have very specific values. It would be better if they were more up-front about this.

EAs are pushing for a very specific agenda and have very specific values

Uh, what? Since when?

Yeah, it's confusing because the general description is very vague: do the most good in the world. EAs are often reluctant to be more specific than that. But in practice EAs tend to make arguments from a utilitarian perspective, and the cause areas have been well-defined for a long time: GiveWell-recommended charities (typically global health), existential risk (particularly AI), factory farming, and self-improvement (e.g. CFAR). There's nothing terribly wrong with these causes, but I've become interested in violence and poor governance in the developing world. EA just doesn't have much to offer there.

EA is an evolving movement, but the reasons for prioritizing violence and poor governance in the developing world seem weak. It's certainly altruistic, and the amount of suffering it addresses is enormous. However, the world is in such a sad state of affairs that I don't think such a complex and unexplored cause area will compete with charities addressing basic needs like alleviating poverty, or even with OpenPhil's current agenda of prison reform and factory-farm suffering. That said, you could start exploring it. Isn't that how the other causes became mainstream within the EA movement?

I'd be happy if the EA movement became interested in this, just as I'd be happy if the Democratic Party did. But my point was, the label EA means nothing to me. I follow my own views, and it doesn't matter to me what this community thinks of it. Just as you're free to follow your own views, regardless of EA.

I've struggled with similar concerns. I think the things EAs push for are great, but I do think that we are more ideologically homogeneous than we should ideally be. My hope is that as more people join, it will become more "big tent" and useful to a wider range of people. (Some of it is already useful for a wide range of people, like the career advice.)

Interesting, though you seem a tad pessimistic in the last paragraph. If EAs claim they are motivated by altruism and reason, wouldn't defining EA based on those criteria theoretically encourage participants to change or leave the movement?

Thank you, cdc482, for raising the topic. I agree that describing EA as having only the goal of minimizing suffering would be inaccurate, as would saying that it has the goal of “maximizing the difference between happiness and suffering.” Both would be inaccurate simply because EAs disagree about what the goal should be. William MacAskill’s (a) is reasonable: “to ‘do the most good’ (leaving what ‘goodness’ is undefined).” But ‘do the most good’ would need to be understood broadly, or perhaps rephrased into something roughly like ‘make things as much better as possible’, to also cover views like ‘only reduce as much badness as possible.’

Julia Wise pointed to Toby Ord's essay “Why I'm not a negative utilitarian” related to negative utilitarianism in the EA community. Since I strongly disagree with that text, I want to share my thoughts on it: http://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

Summary: In 2013, Toby Ord published an essay called “Why I’m Not a Negative Utilitarian” on his website. One can regard the essay as an online text or blog post about his thinking about negative utilitarianism (NU) and his motives for not being NU. After all, the title is about why he is not NU. It is fine to publish such texts, and regarded in that way, it is an unusually thoughtful and well-structured text. In contrast, I will discuss the content of the essay regarded as statements about NU that can be illuminating or confusing, or true or false. Regarded in that way, the essay is an inadequate place for understanding NU and the pros and cons of NU.

The main reason is that the essay makes strong claims without making sufficient caveats or pointing the reader to existing publications that challenge the claims. For clarity, and to avoid creating misconceptions, Ord should either have added caveats of the kind “I am not an expert on NU; this is my current thinking, but I haven’t looked into the topic thoroughly,” or, if he was aware of the related literature, pointed the reader to it. (I also disagree with many of the statements and arguments that his essay presents, but that is a different question.)

[End of summary]

There are also other commentaries on or replies to Ord’s essay:
- Pearce, David. A response to Toby Ord's essay "Why I Am Not A Negative Utilitarian"
- Contestabile, Bruno. "Why I’m (Not) a Negative Utilitarian – A Review of Toby Ord’s Essay"

Have you tried opiates?

If not, it doesn't seem right to make a judgement on the matter!

I've had vicodin and china white and sometimes indulge in an oxy. They're quite good, but it hasn't really changed my views on morality. Despite my opiate experience, I'm much less utilitarian than the typical EA.

Interesting. Well, if opiates simply aren't that pleasurable, then it doesn't say anything about utilitarianism either way. If people experienced things which were really pleasurable but still felt like it would be bad to keep experiencing it, that would be a strike against utilitarianism. If people experienced total pleasure and preferred sticking with it after total reflection and introspection, then that would be a point in favor of utilitarianism.

My point was that opiates are extremely pleasurable but I wouldn't want to experience them all the time, even with no consequences. Just sometimes.

Hi cdc482,

I think you are touching on some of the limits of the utilitarian philosophy that the EA movement grew out of.

JS Mill attempted to rectify some of the problems you allude to by distinguishing between 'higher' and 'lower' pleasures (i.e. the opiates argument).

But Sidgwick later argued that psychology (pleasure/suffering) could not form a foundation from which utilitarian rules could be derived.

So yes, I think you are right that we can agree on some actions that are altruistic, but whether we can explain WHY they are altruistic using utilitarian ethics is another matter entirely :)

So yes, I think you are right that we can agree on some actions that are altruistic, but whether we can explain WHY they are altruistic using utilitarian ethics is another matter entirely :)

Good point. I wonder if there is any value to being able to explain the WHY. Would all rational people suddenly start behaving altruistically? Maybe altruists would be even more effective if they were motivated by reason instead of personal experiences. At the least, it would bridge the divide between the E's and the A's in this movement. And I guess...it could be useful for MIRI.

When people say that, they probably don't intend it as the ultimate goal, but as a heuristic. "Maximizing expected moral value, computed from a probability distribution over several ethical theories and tested against our best available science, reason and rationality" could come pretty close to being inclusive theory-wise, but definitely not newcomer-wise.
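For what it's worth, here is a rough sketch of how that heuristic is sometimes formalized (my own gloss, not anything EA has officially settled on): assign each ethical theory $T_i$ a credence $p(T_i)$, let $V_{T_i}(a)$ be how good action $a$ is according to $T_i$ (putting the theories on a common scale is itself a contested assumption), and choose the action that maximizes

$$EV(a) = \sum_i p(T_i)\, V_{T_i}(a).$$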

What is naive about minimizing suffering? Considering a world where everyone is high on opiates all the time shifts the perspective towards opiates in the first place. So why do you assume people wouldn't suffer in relation to others? The balance would just be shifted.

By "reducing suffering" you are having a positive impact on the world. Getting others out of a state of suffering will enable them to reach more happiness. Whether you want to do it the other way around, and focus on happiness in order to reduce suffering, will lead to the same results. The difference is that suffering is easily measurable (happiness is not) and isn't this what the entire movement is about? Finding ways to approach inequality more effectively...

In the end, I believe it does not matter what viewpoint you take on EA, as long as we agree about doing the most good we can do in order to sustain the betterment of all life and the environment.

The difference is that suffering is easily measurable (happiness is not)

Why do you think so? They seem equally measurable to me.

How can you measure happiness? I'd say happiness is way more subjective than suffering. It seems to me to be easier to measure the amount of suffering, by for instance looking at health and security risks... It is hard to measure my happiness but it is easier to see when I am physically going backwards... Because that would be fairly consistently visible, whereas a moment of happiness does not reflect one's entire state of being at all. Wouldn't you agree?

Furthermore, how are you donating to increase happiness? How do you measure such a thing? One can easily donate against "suffering" by, for instance, donating for medication that will reduce pain or shorten the timespan of a disease... But how can you donate for happiness? Donating to education in order to give people increased values and make them think positively? How do you measure the value of the donation needed, and how do you measure the actual results achieved? By measuring the amount of suffering decreased?

How can you measure happiness?

Pretty much all the same ways that you measure suffering.

by for instance looking at health and security risks...

What do you mean? A risk isn't a mental state. A risk is a possibility of something happening. There are many possibilities which clearly increase the chance of people suffering, but the same can be said for happiness.

It is hard to measure my happiness but it is easier to see when I am physically going backwards...

There are certain physical events that clearly cause suffering, but the same can be said about happiness.

Because that would be fairly consistently visible, whereas a moment of happiness does not reflect one's entire state of being at all.

I'm not really sure what you mean.

Furthermore, how are you donating to increase happiness?

Well I don't. But avoiding existential risk is one way.

I believe I am talking about physical suffering and you are looking at it more as mental suffering. I thus stated that security risks increase physical suffering and are thus easier to measure than, for instance, one's mental state and, furthermore, happiness...

Avoiding existential risk is something that could make me personally happier, but I doubt that people who are not even thinking about these matters would be happier. (Unless you argue that staying alive gives one the possibility of obtaining happiness.)

I meant to say that one momentary burst of happiness doesn't reflect how one generally feels, consistently, day by day, year by year, and is therefore hard to measure.
