Comment author: RobBensinger 17 March 2017 10:43:57PM 3 points

I think wild animal suffering isn't a long-term issue except in scenarios where we go extinct for non-AGI-related reasons. The three likeliest scenarios are:

  1. Humans leverage AGI-related technologies in a way that promotes human welfare as well as (non-human) animal welfare.

  2. Humans leverage AGI-related technologies in a way that promotes human welfare and is effectively indifferent to animal welfare.

  3. Humans accidentally use AGI-related technologies in a way that is indifferent to human and animal welfare.

In all three scenarios, the decision-makers are likely to have "ambitious" goals that favor seizing more and more resources. In scenario 2, efficient resource use almost certainly implies that biological human bodies and brains get switched out for computing hardware running humans, and that wild animals are replaced with more computing hardware, energy/cooling infrastructure, etc. Even if biological humans who need food stick around for some reason, it's unlikely that the optimal way to efficiently grow food in the long run will be "grow entire animals, wasting lots of energy on processes that don't directly increase the quantity or quality of the food transmitted to humans".

In scenario 1, wild animals might be euthanized, or uploaded to a substrate where they can live whatever number of high-quality lives seems best. This is by far the best scenario, especially for people who think (actual or potential) non-human animals might have at least some experiences that are of positive value, or at least some positive preferences that are worth fulfilling. I would consider this extremely likely if non-human animals are moral patients at all, though scenario 1 is also strongly preferable if we're uncertain about this question and want to hedge our bets.

Scenario 3 has the same impact on wild animals as scenario 2, and for analogous reasons: resource limitations make it costly to keep wild animals around. 3 is much worse than 2 because human welfare matters so much; even if the average present-day human life turned out to be net-negative, this would be a contingent fact that could be addressed by improving global welfare.

I consider scenario 2 much less likely than scenarios 1 and 3; my point in highlighting it is to note that scenario 2 is similarly good for the purpose of preventing wild animal suffering. I also consider scenario 2 vastly more likely than "sadistic" scenarios where some agent is exerting deliberate effort to produce more suffering in the world, for non-instrumental reasons.

Comment author: Brian_Tomasik 22 March 2017 05:29:30PM 0 points

What's your probability that wild-animal suffering will be created in (instrumentally useful or intrinsically valued) simulations?

Comment author: Brian_Tomasik 10 March 2017 04:21:47AM 4 points

Thanks for the post. I agree that those who embrace the asymmetry should be concerned about risks of future suffering.

I would guess that few EAs have a pure time preference for the short term. Rather, I suspect that most short-term-focused EAs are uncertain of the tractability of far-future work (due to long, complex, hard-to-predict causal chains), and some (such as a coalition within my own moral parliament) may be risk-averse. You're right that these considerations also apply to non-suffering-focused utilitarians.

It’s tempting to say that it implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical.

As you mention, there are complexities that need to be accounted for. For example, one should think about how catastrophic risks (almost all of which would not cause human extinction) would affect the trajectory of the far future.

It's much easier to get people behind not spreading astronomical amounts of suffering in the future than behind eliminating all current humans, so a more moderate approach is probably better. (Of course, it's also difficult to steer humanity's future trajectory in ways that ensure that suffering-averting measures are actually carried out.)

In response to EA Funds Beta Launch
Comment author: Brian_Tomasik 02 March 2017 03:29:03AM 13 points

Open Phil currently tries to set an upper limit on the proportion of an organization’s budget they will provide, in order to avoid dependence on a single funder. In the case where EA Funds generates recurring donations from a large number of donors, Fund Managers may be able to fully fund an organization already identified, saving the organization from spending additional time raising funds from many small donors individually.

It seems like in practice, donations from EA Funds are extremely correlated with OPP's own donations. That is, if OPP decided to stop funding a charity, presumably the EA Funds fund would also stop donating, because the charity no longer looks sufficiently promising. So the risk involved in depending on getting fully funded by OPP + EA Funds is seemingly about as high as the risk of depending on getting fully funded by just OPP. In this case, either fully funding a charity isn't a good thing, or OPP should already be doing it.

This comment isn't very important -- just an observation about argument 1.3.

Comment author: tjmather 20 January 2017 02:29:45PM 1 point

I agree that contraceptives could increase wild-animal suffering in the short run. The challenge I've run into is how to balance the increase in short-term wild-animal suffering against the rights of people to plan their pregnancies, as well as considerations around farm-animal suffering. I feel a lot of uncertainty around this, and I'm not sure we can definitively answer that question without a better understanding of how much insects and other wild animals suffer.

I think what tips the balance for me is my intuition that preventing unwanted pregnancies may increase world stability in the long run, which could lead to better outcomes in the future, since we'd have the luxury of being able to tackle problems like wild-animal suffering.

There is some evidence from a study in Europe suggesting that unwanted children are more prone to social problems and criminal activity. Another, much more speculative consideration is whether there could be future conflicts over resources such as water tables and topsoil, which are being depleted around the world, depending on whether food-production technology continues to keep up with rising demand.

In summary, I feel uncertain if contraceptives are a net positive or negative from a utilitarian point of view, but I do feel from a human rights point of view, that every pregnancy should be wanted.

Comment author: Brian_Tomasik 21 January 2017 02:36:02PM 0 points

Thanks for the reply. :)

People also have a right not to die, so perhaps one could claim that AMF is as good for human rights as family planning?

As far as future stability, it's plausible that family planning beats AMF, both because of resource shortages and because of the unwanted-children thing you mention. Of course, while future stability has many upsides, it also makes it more likely that (post-)humanity will spread suffering throughout the cosmos.

Comment author: MikeJohnson 20 January 2017 08:44:18PM 3 points

I was worried about the same in reverse. I didn't find your comments rude. :)

Good! I’ll charge forward then. :)

As long as we're both using the same equations of physics to describe the phenomenon, it seems that exactly how we define "electricity" may not matter too much. The most popular interpretation of quantum mechanics is "shut up and calculate".

That is my favorite QM interpretation! But following this analogy, I’m offering a potential equation for electricity, but you’re saying that electricity doesn’t have an equation because it’s not ‘real’, so it doesn’t seem like you will ever be in a position to calculate.

“Well, blegblarg doesn’t have a crisp definition, it’s more of a you-know-it-when-you-see-it thing where there’s no ‘correct’ definition of blegblarg and we can each use our own moral compass to determine if something is blegblarg, but there’s definitely a lot of it out there and it’s clearly bad so we should definitely work to reduce it!”

Replace "blegblarg" with "obscenity", and you have an argument that many people suffering from religious viruses would endorse.

But that doesn’t address the concern: if you argue that something is bad and we should work to reduce it, but also say there’s no correct definition for it and no wrong definition for it, what are you really saying? You note elsewhere that “We can interpret any piece of matter as being conscious if we want to,” and imply something similar about suffering. I would say that a definition that allows for literally anything is not a definition; an ethics that says something is bad, but notes that it’s impossible to ever tell whether any particular thing is bad, is not an ethics.

An example I like to use is "justice". It's clear to many people that injustice is bad, even though there's no crisp, physics-based definition of injustice.

This doesn’t seem to match how you use the term ‘suffering’ in practice. E.g., we could claim that “protons oppress electrons” or “there’s injustice in fundamental physics” — but this is obviously nonsense, and from a Wittgensteinian “language game” point of view, what's happening is that we’re using perfectly good words in contexts where they break down. But you do want to say that there could be suffering in fundamental physics, and potentially in the far future. It looks like you want to have your cake and eat it too, and say that (1) “suffering” is a fuzzy linguistic construct, like “injustice” is, but also that (2) we can apply this linguistic construct of “suffering” to arbitrary contexts without it losing meaning. This seems deeply inconsistent.

But you’re clearly not a moral nihilist

I am. :) At least by this definition: "Moral nihilists consider morality to be constructed, a complex set of rules and recommendations that may give a psychological, social, or economical advantage to its adherents, but is otherwise without universal or even relative truth in any sense."

That definition doesn’t seem to leave much room for ethical behavior (or foundational research!), merely selfish action. This ties into my notion above, that you seem to have one set of stated positions (extreme skepticism & constructivism about qualia & suffering, moral nihilism for the purpose of ‘psychological, social, or economical advantage’), but show different revealed preferences (which seem more altruistic, and seem to assume something close to moral realism).

The challenge in this space of consciousness/valence/suffering research is to be skeptical-yet-generative: to spot and explain the flaws in existing theories, yet also to constantly search for and/or build new theories which have the potential to avoid these flaws.

You have many amazing posts doing the former (I particularly enjoyed this piece) but you seem to have given up on the latter, and at least in these replies, seem comfortable with extreme constructivism and moral nihilism. However, you also seem to implicitly lean on valence realism to avoid biting the bullet on full-out moral nihilism & constructivism—your revealed preferences seem to be that you still want meaning, you want to say suffering is actually bad, and I assume you don’t think it’s 100% arbitrary whether we say something is suffering or not. But these things are not open to a full-blown moral nihilist.

Anyway, perhaps you would have very different interpretations on these things. I would expect so. :) I'm probing your argument to see what you do think. But in general, I agree with the sentiments of Scott Aaronson:

Yes, it’s possible that things like the hard problem of consciousness, or the measurement problem in quantum mechanics, will never have a satisfactory resolution. But even if so, building a complicated verbal edifice whose sole purpose is to tell people not even to look for a solution, to be satisfied with two “non-overlapping magisteria” and a lack of any explanation for how to reconcile them, never struck me as a substantive contribution to knowledge. It wasn’t when Niels Bohr did it, and it’s not when someone today does it either.

I want a future where we can tell each other to “shut up and calculate”. You may not like my solution for grounding what valence is (though I’m assuming you haven’t read Principia Qualia yet), but I hope you don’t stop looking for a solution.

Comment author: Brian_Tomasik 21 January 2017 02:21:28PM 2 points

But following this analogy, I’m offering a potential equation for electricity, but you’re saying that electricity doesn’t have an equation

I haven't read your main article (sorry!), so I may not be able to engage deeply here. If we're trying to model brain functioning, then there's not really any disagreement about what success looks like. Different neuroscientists will use different methods, some more biological, some more algorithmic, and some more mathematical. Insofar as your work is a form of neuroscience, perhaps from a different paradigm, that's cool. But I think we disagree more fundamentally in some way.

if you argue that something is bad and we should work to reduce it, but also say there’s no correct definition for it and no wrong definition for it, what are you really saying?

My point is that your objection is not an obstacle to practical implementation of my program, given that, e.g., anti-pornography activism exists.

If you want a more precise specification, you could define suffering as "whatever Brian says is suffering". See "Brian utilitarianism".

we could claim that “protons oppress electrons” or “there’s injustice in fundamental physics” — but this is obviously nonsense

It's not nonsense. :) If I cared about justice as my fundamental goal, I would wonder how far to extend it to simpler cases. I discuss an example with scheduling algorithms here. (Search for "justice" in that interview.)

we can apply this linguistic construct of “suffering” to arbitrary contexts without it losing meaning

We do lose much of the meaning when applying that concept to fundamental physics. The question is whether there's enough of the concept left over that our moral sympathies are still (ever so slightly) engaged.

That definition doesn’t seem to leave much room for ethical behavior

In my interpretation, altruism is part of "psychological advantage", e.g., helping others because you want to and because it makes you feel better to do so.

I assume you don’t think it’s 100% arbitrary whether we say something is suffering or not

I do think it's 100% arbitrary, depending how you define "arbitrary". But of course I deeply want people to care about reducing suffering. There's no contradiction here.

in accordance with new developments in the foundational physics, but we’re unlikely to chuck quantum field theory in favor of some idiosyncratic theory of crystal chakras. If we discover the universe’s equation for valence, we’re unlikely to find our definition of suffering at the mercy of intellectual fads.

Quantum field theory is instrumentally useful for any superintelligent agent. Preventing negative valence is not. Even if the knowledge of what valence is remains, caring about it may disappear.

But I think that, unambiguously, cats being lit on fire is an objectively bad thing.

I don't know what "objectively bad" means.

slide into a highly Darwinian/Malthusian/Molochian context, then I fear that could be the end of value.

I'm glad we roughly agree on this factual prediction, even if we interpret "value" differently.

Comment author: MikeJohnson 20 January 2017 03:01:53AM 3 points

Thanks for the thoughts! Here's my attempt at laying out a strong form of why I don't think constructivism as applied to ethics & suffering leads to productive areas:

Imagine someone arguing that electromagnetism was purely a matter of definitions- there’s no “correct” definition of electricity, so how one approaches the topic and which definition one uses is ultimately a subjective choice.

But now imagine they also want to build a transistor. Transistors are, in fact, possible, and so it turns out that there is a good definition of electricity, by way of quantum theory, and of course many bad ones that don’t ‘carve reality at the joints’.

So I would say very strongly that we can’t both say that electricity is subjective and everyone can have their own arbitrary poetic definition of what it is and how it works, but also do interesting and useful things with it.

Likewise, my claim is that we can be a subjectivist about qualia and about suffering and say that how we define them is rather arbitrary and ultimately subjective, or we can say that some qualia are better than others and we should work to promote more good qualia and less bad qualia. But I don’t think we can do both at the same time. If someone makes a strong assertion that something is bad and that we should work to reduce its prevalence, then they’re also implying it’s real in a non-trivial sense; if something is not real, then it cannot be bad in an actionable sense.

Imagine that tomorrow I write a strong denouncement of blegblarg on the EA forum. I state that blegblarg is a scourge upon the universe, and we should work to rid ourselves of it, and all right-thinking people should agree with me. People ask me, “Mike…. I thought your post was interesting, but…. what the heck is blegblarg??” - I respond that “Well, blegblarg doesn’t have a crisp definition, it’s more of a you-know-it-when-you-see-it thing where there’s no ‘correct’ definition of blegblarg and we can each use our own moral compass to determine if something is blegblarg, but there’s definitely a lot of it out there and it’s clearly bad so we should definitely work to reduce it!”

This story would have no happy ending. Blegblarg can’t be a good rallying cry, because I can’t explain what it is. I can’t say it’s good or bad in a specific actionable sense, for the same reason. One person’s blegblarg is another person’s blargbleg, you know? :)

I see a strict reading of the constructivist project as essentially claiming similar things about suffering, ultimately leading to the conclusion that what is, and isn't, suffering is fundamentally arbitrary -- i.e., it leads to post-modern moral nihilism. But you’re clearly not a moral nihilist, and FRI certainly doesn’t see itself as nihilist. In my admittedly biased view of the situation, I see you & FRI circling around moral realism without admitting it. :) Now, perhaps my flavor of moral realism isn’t to your liking -- perhaps you might come to a completely different principled conclusion about what qualia & valence are. But I do hope you keep looking.

p.s. I tend to be very direct when speaking about these topics, and my apologies if anything I've said comes across as rude. I think we differ in an interesting way and there may be updates in this for both of us.

Comment author: Brian_Tomasik 20 January 2017 10:48:11AM 2 points

So I would say very strongly that we can’t both say that electricity is subjective and everyone can have their own arbitrary poetic definition of what it is and how it works, but also do interesting and useful things with it.

As long as we're both using the same equations of physics to describe the phenomenon, it seems that exactly how we define "electricity" may not matter too much. The most popular interpretation of quantum mechanics is "shut up and calculate".

As another analogy, "life" has a fuzzy, arbitrary boundary, but that doesn't prevent us from doing biology.

If someone makes a strong assertion that something is bad and that we should work to reduce its prevalence, then they’re also implying it’s real in a non-trivial sense; if something is not real, then it cannot be bad in an actionable sense.

An example I like to use is "justice". It's clear to many people that injustice is bad, even though there's no crisp, physics-based definition of injustice.

“Well, blegblarg doesn’t have a crisp definition, it’s more of a you-know-it-when-you-see-it thing where there’s no ‘correct’ definition of blegblarg and we can each use our own moral compass to determine if something is blegblarg, but there’s definitely a lot of it out there and it’s clearly bad so we should definitely work to reduce it!”

Replace "blegblarg" with "obscenity", and you have an argument that many people suffering from religious viruses would endorse.

But you’re clearly not a moral nihilist

I am. :) At least by this definition: "Moral nihilists consider morality to be constructed, a complex set of rules and recommendations that may give a psychological, social, or economical advantage to its adherents, but is otherwise without universal or even relative truth in any sense."

my apologies if anything I've said comes across as rude

I was worried about the same in reverse. I didn't find your comments rude. :)

Comment author: MikeJohnson 19 January 2017 05:24:28PM 2 points

Hmm-- I'd suggest that if pleasure-ceptors are easily contextually habituated, they might not be pleasure-ceptors per se.

(Pleasure is easily habituated; pain is not. This is unfortunate but seems adaptive, at least in the ancestral environment...)

My intuition is that if an organism did have dedicated pleasure-ceptors, it would probably immediately become its biggest failure-point (internal dynamics breaking down) and attack surface (target for others to exploit in order to manipulate behavior, which wouldn't trigger fight/flight like most manipulations do).

Arguably, we do see both of these things happen to some degree with regard to "pseudo-pleasure-ceptors" in the pelvis(?).

Comment author: Brian_Tomasik 19 January 2017 06:23:33PM 1 point

I'd suggest that if pleasure-ceptors are easily contextually habituated, they might not be pleasure-ceptors per se.

Not sure why that is unless you're just defining things that way, which is fine. :)

BTW, this page says

While large mechanosensory neurons such as type I/group Aβ display adaptation, smaller type IV/group C nociceptive neurons do not. As a result, pain does not usually subside rapidly but persists for long periods of time; in contrast, one quickly stops receiving touch or sensory information if surroundings remain constant.


Arguably, we do see both of these things happen to some degree with regard to "pseudo-pleasure-ceptors" in the pelvis(?).

Yeah, as well as with various other addictions.

Comment author: MikeJohnson 18 January 2017 11:44:19PM 2 points

Our question is: For a given physical system, what kinds of emotion(s) is it experiencing and how good/bad are they? The answers will not be factual in any deep ontological sense, since emotions and moral valence are properties that we attribute to physics. Rather, we want an ethical theory of how to make these judgments.

Certainly, Barrett makes a strong case that statements about emotions “will not be factual in any deep ontological sense,” because they aren’t natural kinds. My argument is that valence probably is a natural kind, however, and so we can make statements about it that are as factual as statements about the weak nuclear force, if (and only if) we find the right level of abstraction by which to view it.

When I began discussing utilitarianism in late 2005, a common critique from friends was: "But how can you measure utility?" Initially I replied that utility was a real quantity, and we just had to do the best we could to guess what values it took in various organisms. Over time, I think I grew to believe that while consciousness was metaphysically real, the process of condensing conscious experiences into a single utility number was an artificial attribution by the person making the judgment. In 2007, when a friend pressed me on how I determined the net utility of a mind, I said: "Ultimately I make stuff up that seems plausible to me." In late 2009, I finally understood that even consciousness wasn't ontologically fundamental, and I adopted a stance somewhat similar to, though less detailed than, that of the present essay.

I would say I’ve undergone the reverse process. :)

Your implication is that questions of consciousness & suffering are relegated to ‘spiritual poetry’ and can only be ‘debated in the moral realm’ (as stated in some of your posts). But I would suggest this is rather euphemistic, and runs into failure modes that are worrying.

The core implication seems to be that there are no crisp facts of the matter about what suffering is, or about which definition is the ‘correct’ one, and so it's ultimately a subjective choice which definition we use. This leads to insane conclusions: we could use odd definitions of suffering to conclude that animals probably don’t feel pain, or that current chatbots can feel pain, or that the suffering which happens when a cis white man steps on a nail is less than the suffering which happens when a bisexual black female steps on a nail, or vice versa. I find it very likely that there are people making all of these claims today.

Now, I suspect you and I have similar intuitions about these things: we both think animals can feel pain, whereas current chatbots probably can’t, and that race almost certainly doesn’t matter with respect to capacity to suffer. I believe I can support these intuitions from a principled position (as laid out in Principia Qualia). But if one is a functionalist, and especially if our moral intuitions and definitions of suffering are “subjective, personal, and dependent on one’s emotional whims,” then it would seem that your support of these intuitions is in some sense arbitrary—they are your spiritual poetry, but other people can create different spiritual poetry that comes from very different directions.

And so, I fear that if we’re constructivists about suffering, then we should expect a very dark scenario: that society’s definition of suffering, and any institutions we build whose mission is to reduce suffering, will almost certainly be co-opted by future intellectual fashions. And, in fact, that given enough time and enough Moloch, society’s definition of suffering could in fact invert, and some future Effective Altruism movement may very well work to maximize what we today would call suffering.

I believe I have a way out of this: I think consciousness and suffering (valence) are both ‘real’, and so a crisp definition of each exists, about which one can be correct or incorrect. My challenge to you is to find a way out of this ‘repugnant conclusion’ also. Or to disprove that I’ve found a way out of it, of course. :)

In short, I think we can be constructivists about qualia & suffering, or we can be very concerned about reducing suffering, but I question the extent to which we can do both at the same time while maintaining consistency.

Comment author: Brian_Tomasik 19 January 2017 06:19:09PM 2 points

we could use odd definitions of suffering to conclude that animals probably don’t feel pain, or that current chatbots can feel pain, or that the suffering which happens when a cis white man steps on a nail is less than the suffering which happens when a bisexual black female steps on a nail, or vice versa.

But doing so would amount to shifting the goalpost, which is a way of cheating at arguments whether there's a single definition of a word or not. :)

It's similar to arguments over abortion of very early embryos. One side calls a small clump of cells "a human life", and the other side doesn't. There's no correct answer; it just depends what you mean by that phrase. But the disagreement isn't rendered trivial by the lack of objectivity of a single definition.

will almost certainly be co-opted by future intellectual fashions

If by this you mean society's prevailing concepts and values, then yes. But everything is at the mercy of those. If reducing your precisely defined version of suffering falls out of fashion, it won't matter that it has a crisp definition. :)

some future Effective Altruism movement may very well work to maximize what we today would call suffering.

Hm, that doesn't seem too likely to me (more likely is that society becomes indifferent to suffering), except if you mean that altruists might, e.g., try to maximize the amount of sentience that exists, which would as a byproduct entail creating tons of suffering (but that statement already describes many EAs right now).

My challenge to you is to find a way out of this ‘repugnant conclusion’ also. Or to disprove that I’ve found a way out of it, of course. :)

I think your solution, even if true, doesn't necessarily help with goal drift / Moloch stuff because people still have to care about the kind of suffering you're talking about. It's similar to moral realism: even if you find the actual moral truth, you need to get people to care about it, and most people won't (especially not future beings subject to Darwinian pressures).

Comment author: MikeJohnson 19 January 2017 07:24:34AM 2 points

there's no stimulus that's always fitness-enhancing (or is there?), while flames, skin wounds, etc. are always bad. Sugar receptors usually convey pleasure, but not if you're full, nauseous, etc.

Yeah, strongly agree.

Additionally, accidentally wireheading oneself had to have been at least a big potential problem during evolution, which would strongly select against anything like a pleasure-ceptor.

Comment author: Brian_Tomasik 19 January 2017 12:59:59PM 1 point

Hm, I would think that hedonic adaptation/habituation could be applied to stimuli from pleasure-ceptors fairly easily?

Comment author: Brian_Tomasik 17 January 2017 04:43:08PM 1 point

I'm worried that family planning increases total suffering by allowing for more wild animals to exist. In contrast, life-saving charities like AMF probably reduce wild-animal suffering. If you support AMF and family planning about equally on anti-poverty grounds, I would recommend AMF on wild-animal grounds.

What are your thoughts on how to incorporate wild-animal suffering into these calculations? Unlike with far-future considerations, we have lots of concrete data on impacts of humans on wild-animal population sizes.
