Comment author: Gregory_Lewis 20 April 2018 09:26:54PM *  4 points [-]

1) Happiness levels seem to trend strongly positive, given things like the World Values Survey (in the most recent wave, 2014, only Egypt had <50% of people reporting being either 'happy' or 'very happy', although in fairness there were a lot of poorer countries with missing data). The association between wealth and happiness is there, but pretty weak (e.g. Zimbabwe gets 80+%, Bulgaria 55%). Given this (and when you throw in implied preferences, and commonsensical intuitions whereby we don't wonder whether we should jump in the pond to save the child because we're genuinely uncertain it is good for them to extend their life), it seems the average human takes themselves to have a life worth living. (q.v.)

2) My understanding from essays by Shulman and Tomasik is that even intensive factory farming plausibly leads to a net reduction in animal populations, given a greater reduction in wild animals due to habitat loss. So if human extinction leads to another ~100M years of wildlife, this looks pretty bad by asymmetric views.

Of course, these estimates are highly non-resilient, even with respect to sign. Yet the objective of the essay wasn't to show the result was robust to all reasonable moral considerations, but that the value of x-risk reduction isn't wholly ablated on a popular view of population ethics - somewhat akin to how GiveWell analyses of cash transfers don't try to factor in poor-meat-eater considerations.

3) I neither 'tout' - nor even state - this is a finding that 'xrisk reduction is highly effective for person-affecting views'. Indeed, I say the opposite:

Although it seems unlikely x-risk reduction is the best buy from the lights of the [ed: typo - as context suggests, meant 'person-affecting'] total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

Comment author: tylermjohn 20 April 2018 09:36:45PM 0 points [-]

thanks for the clarification on (3), gregory. i exaggerated the strength of the valence on your post.

on (1), i think we should be skeptical about self-reports of well-being given the pollyanna principle (we may be evolutionarily hard-wired to overestimate the value of our own lives).

on (2), my point was that extinction risks are rarely confined to only human beings, and events that cause human extinction will often also cause nonhuman extinction. but you're right that for risks of exclusively human extinction we must also consider the impact of human extinction on other animals, and that impact - whatever its valence - may also outweigh the impact of the event on human well-being.

Comment author: tylermjohn 20 April 2018 08:43:29PM 0 points [-]

thanks, gregory. it's valuable to have numbers on this but i have some concerns about this argument and the spirit in which it is made:

1) most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. this argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. while it might be that many relatively wealthy euro-american EAs have life-years that are very positive, on average, it's highly questionable whether the average human has life-years that are on average positive at all, let alone very positive.

2) many global catastrophic risks and extinction risks would affect not only humans but also many other sentient beings. insofar as these x-risks are risks of the extinction of not only humans but also nonhuman animals, to make a determination of the person-affecting value of deterring x-risks we must sum the value of preventing human death with the value of preventing nonhuman death. on the widely held assumption that farmed animals and wild animals have bad lives on average, and given the population of tens of billions of presently existing farmed animals and 10^13-10^22 presently existing wild animals, the value of the extinction of presently living nonhuman beings would likely swamp the (supposedly) negative value of the extinction of presently existing human beings. many of these animals would live a short period of time, sure, but their total life-years still vastly outnumber the remaining life-years of presently existing humans. moreover, most people who accept a largely person-affecting axiology also think that it is bad when we cause people with miserable lives to exist. so on most person-affecting axiologies, we would also need to sum the disvalue of the existence of future farmed and wild animals with the person-affecting value of human extinction. this may make the person-affecting value of preventing extinction extremely negative in expectation.

3) i'm concerned about this result being touted as a finding of a "highly effective" cause. $9,600/life-year is vanishingly small in comparison to many poverty interventions, let alone animal welfare interventions (where ACE estimates that this much money could save 100k+ animals from factory farming). why does $9,600/life-year suddenly make for a highly effective cause when we're talking about x-risk reduction, when it isn't highly effective when we're talking about other domains?

Comment author: KelseyPiper 26 October 2017 06:28:46PM 25 points [-]

I just want to quickly call attention to one point: "these are still pure benefits" seems like a mistaken way of thinking about this - or perhaps I'm just misinterpreting you. To me "pure benefits" suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful returns, but it requires significant commitments of time, energy, and often other resources. Writing up community standards about conduct tends to be emotionally exhausting work which demands weeks of time and effort from productive and deeply involved community members who are necessarily sidelining other EA projects in order to do it.

None of this is to say 'it isn't worth it'. I expect that some of these things have great returns to the health, epistemic standards, and resiliency of the community, as well as, like you mentioned, good returns for the reputation of EA (though from my experience in social justice communities, there will be articles criticizing any movement for failures of intersectionality, and the presence of those articles isn't very strong evidence that a movement is doing something unusually wrong). My goal is not to say 'this is too much work' but simply 'this is work' - because if we don't acknowledge that it requires work, then work probably will not get done (or will not be acknowledged and appreciated).

Once we acknowledge that these are suggestions which require varying amounts of time, energy and access to resources, and that they impose varying degrees of mental load, then we can start figuring out which ones are good priorities for people with limited amounts of all of the above. I've seen a lot of social justice communities suffer because they're unable to do this kind of prioritization and accordingly impose excessively high costs on members and lose good people who have limited resources.

So I think it's a bad idea to think in terms of 'pure benefit'. Here, like everywhere else, if we want to do the most good we need to keep in mind that not all actions are equally good or equally cheap so we can prioritize the effective and cheap ones.

I'm also curious why you think the magnitude of the current EA movement's contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans. To be clear about where I'm coming from, I think the most important thing the EA community can do is be a community that fosters fast progress on the most important things in the world. Obviously, this will include being a community that takes contributions seriously regardless of their origins and elicits contributions from everyone with good ideas, without making any of them feel excluded because of their background. But that makes diversity an instrumental goal, a thing that will make us better at figuring out how to improve the world and acting on the evidence. From your phrasing, I think you might believe that harmful societal structures in the western world are one of the things we can most effectively fix? Have you expanded on that anywhere, or is there anyone else who has argued for that who you can point me to?

Comment author: tylermjohn 27 October 2017 01:35:14AM 4 points [-]

Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this - I was talking in terms of these benefits as "pure" benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly's piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I've pointed out above are "pure" because they come along for free with that labor involved in making the EA community more inclusive, and don't require additional effort. But I understand how that could be misleading, and so I take all of your criticism on board. I also agree that this will involve priority-setting - even if we think that all of these suggestions are important and some people should be doing all of them to some extent (and especially if not), there are some that we ought to spend more time on than others as a community.

I also agree that the EA community should focus on identifying and working on the very most important things. Although I might disagree slightly with how you've characterized that. I don't think that we should be a community doing work that fosters "fast progress on the most important things," because I think that we should be doing whatever does the most good in the long run, all-things-considered, and fostering "fast progress" on the most important things does not necessarily correlate with doing the most good in the long run, all-things-considered - unless we define "fosters fast progress" in a way that makes this trivial. But if, for example, we could perform one of two different interventions, one which added an additional +5 well-being to all of the global poor, on average, over twenty years, for one generation, and one which added an additional +5 well-being to all of the global poor, on average, over one hundred years, for all generations, we should choose the latter intervention, even though the former intervention is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is to produce the most benefits and avert the most harm all-things-considered, and not simply make a lot of progress on some very important projects very quickly, and I think that this is quite relevant to this conversation.

To your question as to why "the magnitude of the current EA movement's contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans," I unfortunately haven't written something on this and perhaps I should. But I can say a few things. I should first say that I certainly don't think it's obvious that the EA movement's contributions to such harmful structures clearly will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it was non-obvious that the effect size was "very small" compared to the positive effects we have. It's something more EAs should treat as non-negligible more often than they do.

Still, here are some of the basic reasons why I think that the EA movement's contributions to harmful social structures could well be of sufficient magnitude that we should keep constant accounting of them in our efforts to do good in the world, apart from reputation costs and instrumental epistemic benefits of inclusion and diversity work. First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as profoundly shaping cultural and moral mores, and so ensuring that the fundamental structure of society and these norms are good ones is crucial to ensuring that the long-run future is good, and shaping these structures for the better may make the trajectory of the future far better than the counterfactual where we shape these structures for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more).

Second, norms against harming others are very sticky - much stickier than norms favoring helping others except in certain particular cases (e.g. within one's own family). They are psychologically sticky, whether for innate biological reasons which fix this, or for entirely cultural reasons. Which of these is true makes a difference to how much staying power this stickiness has. But whichever is true, ensuring that we set good norms in place around not causing harm to others and ensuring that these norms are stringently upheld and not violated so that we internalize them as commonsense norms seems like a good way to shape how the future goes. They are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are more difficult to enforce. And our human legal and political history suggests that they are much easier to codify into law. So for all these reasons, ensuring that we have good norms in these areas and not violating them looks like a very important intervention for shaping the social and legal institutions of future societies.

Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seems to have an impact on how we treat and think about other groups. This seems especially important if we are interested in expanding our human moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and other, inorganic sentient beings, then the benefits of prioritizing moral development and averting harmful social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as a result of a political arms race, then it seems that having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI for better or worse. (Here I'm out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won't come down hard on this.) The main point is that the downstream effects of our norms, attitudes, laws, and practices around humans, and who our society decides is worthy of full moral consideration, may have significant downstream effects in complicated and to some extent unpredictable ways. The more skeptical we are about how much we know about the future, the greater our uncertainty should be about these effects.
I think it's reasonable to be concerned that this may be too speculative or too optimistic about the downstream consequences of our norm-shaping on the far future, but we should be careful to remember that there are also skeptical considerations cutting in the opposite direction - measurability bias may lead us to exclude less measurable, long-term effects in favor of more measurable, short-term effects of our actions irrationally.

I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation, too, depending on the probabilities we assign to the hypotheses delineated above). But norms against harming are psychologically, socially, and legally sticky, and failing to make EA a more diverse and inclusive community will raise the probability of EAs harming marginalized communities and failing to create and uphold norms around not harming them. And the more influential the EA community is as a community, the more this holds true. So it seems to me that there's a plausible case to be made that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area that we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.

Comment author: tylermjohn 26 October 2017 03:49:22PM 2 points [-]

Thanks so much for this thoughtful and well-researched write-up, Kelly. The changes you recommend seem extremely promising and it's very helpful to have all of these recommendations in one place.

I think that there are some additional reasons that go beyond those stated in this post that increase the value of making EA a more diverse and inclusive community. First, if the EA movement genuinely aspires to cause-neutrality, then we should care about benefits that accrue to others regardless of who these other people are and independent of what the causal route to these benefits is. As such, we should also care about the benefits that becoming a diverse and inclusive movement would have for women, people of color, and disabled and trans people in and outside of the community. If, as you argue and as is antecedently quite plausible, the EA movement is essentially engaging in the very same discriminatory practices in our movement-building as people tend to engage in everywhere else, then as a result we are artificially boosting the prestige, visibility, and status perception of white, cis, straight, able-bodied men, we are creating a community that is less sensitive to stereotype threat and to micro- and macroaggressions than it otherwise could be, and we are giving legitimacy to stereotypes and to business and nonprofit models which arbitrarily exclude many people. All of this causes harm or a reduction in the status or power of women, people of color, and disabled and trans people and advances their discrimination - which is a real and significant cost to organizing in this way.

Second, even if one thinks that this effect size will be very small compared to the good that the EA movement is doing (which is less obvious than EAs sometimes assume without argument), 1) these are still pure benefits, which strengthens the case for and the reasons favoring improving the EA community in the respects you argue, and 2) if the EA community fails to become more diverse and inclusive we'll suffer reputation costs in the media, in academia, among progressives, and in the nonprofit world for being a community that is exclusionary. This would come at a significant cost to our potential to build a large and sustainable movement and to create strong, elite networks and ties. And at this point, this worry is very far from a mere hypothetical:

I think we have our work cut out for us if we want to build a better reputation with the world outside of our (presently rather small) community, and that the courses of action you recommend will go quite a long way to getting us there.

Comment author: jessiesun 31 January 2015 09:11:17PM 4 points [-]

Thought I'd just chime in with a relevant reference, in case anyone was curious:

Diener, E., Kanazawa, S., Suh, E. M., & Oishi, S. (2014). Why People Are in a Generally Good Mood. Personality and Social Psychology Review. doi: 10.1177/1088868314544467

"Evidence shows that people feel mild positive moods when no strong emotional events are occurring, a phenomenon known as positive mood offset. We offer an evolutionary explanation of this characteristic, showing that it improves fertility, fecundity, and health, and abets other characteristics that were critical to reproductive success. We review research showing that positive mood offset is virtually universal in the nations of the world, even among people who live in extremely difficult circumstances. Positive moods increase the likelihood of the types of adaptive behaviors that likely characterized our Paleolithic ancestors, such as creativity, planning, mating, and sociality. Because of the ubiquity and apparent advantages of positive moods, it is a reasonable hypothesis that humans were selected for positivity offset in our evolutionary past. We outline additional evidence that is needed to help confirm that positive mood offset is an evolutionary adaptation in humans and we explore the research questions that the hypothesis generates."

Comment author: tylermjohn 03 February 2015 01:26:09PM 0 points [-]

Thanks for sharing! That's good to know.

Comment author: Evan_Gaensbauer 31 January 2015 03:28:01AM *  4 points [-]

I don't consider myself a consequentialist, but I do support effective altruism. I don't believe a set of ethics, e.g., consequentialism as a whole, has a truth-value, because I don't believe ethics corresponds to truth. It lacks truth-value because it lacks truth-function; to ask if consequentialism is 'true or false' is a category error. That's my perspective. I used to think this was moral anti-realism, but apparently some moral anti-realists also believe consequentialism could be true. That confuses me. Anyway, I allow the possibility that moral realism might be true, and hence, consequentialism, or another normative model of the world, could also be "true". While I'm open to changing my mind to such in the future, I literally can't fathom what that would mean, or what believing that would feel like. Note I respect positions holding ethics or morality can be a function of truth, but I'm not willing to debate them in these comments. I'd be at a loss for words defending my position, while I doubt others could change my mind. Practically, I'll only change my mind by learning more on my own, which I intend to do.

On the other hand, I, uh, in the past have intuited on the foundations of morality more deeply than I would expect most others uneducated in philosophy to. I lack any formal education in philosophy. I have several friends who study philosophy formally or informally, and have received my knowledge of philosophy exclusively from Wikipedia, friends, LessWrong, and the Stanford Encyclopedia of Philosophy. Anyway, I realized at my core I feel it's unacceptable that there would be a different morality for different people. That is, ideally, everyone would share the same morals. In practice, both out of shame and actual humility, I tend not to claim among others that my morals are superior. I let others live with their values as I live with mine. A lot of this behavior on my part may have been engendered and normalized by being raised in a pluralistic, secular, Western, democratic, and politically correct culture.

My thoughts were requested, so here's my input. I expect my perspective on ethics is weird among supporters of effective altruism, and also the world at large. So, I'm an outlier among outliers whose opinion isn't likely worth putting much weight on.

Comment author: tylermjohn 31 January 2015 05:22:45PM 2 points [-]

I have a good friend who is a thorough-going hedonistic act utilitarian and a moral anti-realist (I might come to accept this conjunction myself). He's a Humean about the truth of utilitarianism. That is, he thinks that utilitarianism is what an infinite number of perfectly rational agents would converge upon given an infinite period of time. Basically, he thinks that it's the most rational way to act, because it's basically a universalization of what everyone wants.

Comment author: Gentzel 30 January 2015 06:17:50PM 2 points [-]

I have seen such literature, but you can get around some of the looking-back bias problems by recording how you feel in the moment (provided you aren't pressured to answer dishonestly). I am sure a lot of people have miserable lives, but I do think that when I believe I have been fairly happy for the past 4 years, it is very unlikely the belief is false (because other people thought I was happy too).

I do think the concern about the accuracy of beliefs about experience warrants finding a better way to evaluate people's happiness in general, though. I think such analysis could change the way people set up surveys to measure things like QALYs. I think it is quite likely that the value of years lived disabled or in old age is better than people think.

Comment author: tylermjohn 31 January 2015 05:16:11PM 0 points [-]

Yeah, I think you're all-around right. I'm less sure that my life over the past two years has been very good (my memory doesn't go back much farther than that), and I'm very privileged and have a career that I enjoy. But that gives me little if any reason to doubt your own testimony.

Comment author: tomstocker 30 January 2015 01:52:32PM *  2 points [-]

"And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people." can you think about editing this please - it's a view I'm worried doesn't deserve a platform. It doesn't seem to be the result of consequentialist thinking, just vanity.

Explanation: If important was defined more precisely around specific questions, such as instrumental value to other people's welfare, it might be a way of thinking about how useful it is to spend time supporting current EA people compared to time supporting others (but even then, that's a dumb calculation because you want to be looking at a specific EA and a specific way of supporting them compared to the best available alternative). But as it stands I can't see how that's a useful thought - enlighten me if I'm wrong.

Comment author: tylermjohn 30 January 2015 02:05:29PM *  1 point [-]

I agree that the life of an EA isn't going to be more important, even if saving that EA has greater value than saving someone who isn't an EA.

And if we're giving animals any moral weight at all (as we obviously should), the same can be said about people who are vegan.

Edited (after Tom A's comment): Maybe part of the problem is we're not clear here about what we mean by "a life". In my mind, a life is more or less important depending on whether it contains more or less intrinsic goods. The fact that an EA might do more good than a non-EA doesn't make their life more valuable - it doesn't obviously add any intrinsic goods to it - it just makes saving them more valuable. On the other hand, if we mean to include all of someone's actions and the effects of these actions in someone's "life", then the way it's worded is unproblematic.

This is nit-picky, but I think it's right. Is this what you're getting at, Tom S?

Comment author: tomstocker 30 January 2015 01:38:31PM 1 point [-]

Interesting view - how did you come to it? What do you say to the millions/billions that report being very happy/satisfied with life?

Comment author: tylermjohn 30 January 2015 01:44:46PM 1 point [-]

I didn't mean to sound like I'm committed to the view. I'm merely sympathetic to it, in the sense that I think reasonable people could disagree about this. I don't yet know if I think it's right.

Have you seen any of the empirical psychology literature suggesting that humans have evolved to be highly optimistic and evaluate their lives as better than they actually are? That literature, combined with more common worries about evaluating happiness (I'm a hedonist), makes me worried that most people don't have lives that are good on the whole.

Comment author: Tom_Ash  (EA Profile) 30 January 2015 11:18:37AM 0 points [-]

To answer my own question, I personally assign some weight to all of these positions. I find the fourth - that I owe special duties to my near and dear - particularly plausible. However I don’t find it plausible that I owe special duties to my fellow citizens (at least not to an extent that should stop me donating everything over a certain amount to the global poor). I also think that we should take the third sort of position extra seriously, and avoid taking actions that are actively wrong on popular non-consequentialist theories. An additional reason for this is that there are often good but subtle consequentialist grounds for avoiding these actions, and in my experience some consequentialists are insufficiently sensitive to them.

Comment author: tylermjohn 30 January 2015 12:40:11PM *  2 points [-]

One thing to consider about (2) is that there are also non-consequentialist reasons to treat non-human animals better than we treat humans (relative to their interests). As one example, because humans have long treated animals unjustly, reasons of reciprocity require us to discount human interests relative to theirs. So that might push in the opposite direction from discounting animal interests due to moral uncertainty.
