Comment author: mhendric 30 April 2018 09:26:23AM 1 point [-]

Hey there,

Does anyone have the link for the economists' guesses MacAskill refers to? I don't have a copy of Doing Good Better around, so I can't check for myself.

Also, does anyone know if demand-independent subsidies are factored in? I would expect the expected value to be lower when subsidies allow producers to produce below the "production/world market price", as they could easily export whatever is not locally consumed (as some EU countries do).

Thanks for the post. This issue regularly arises in our local EA group (mainly due to me desperately grasping at straws to justify my carnivorous ways), and it is surprisingly hard to get good information on the topic. So far I only knew of the "Does Vegetarianism make a difference" post, which is well-written but seems a bit light on the economics side, with no peer-reviewed articles or analyses being quoted as far as I remember.

Comment author: Brian_Tomasik 30 April 2018 11:03:08AM *  1 point [-]

I think the economists' guesses are from Compassion, by the Pound, though I also don't have a copy of either book.

no peer-reviewed articles or analyses being quoted

Yeah. Matheny (2003) is a journal article on the same topic, though it's not an economics journal.

they could easily export whatever is not locally consumed (as some EU countries do).

Perhaps that would reduce local meat production in the destination countries.

Comment author: Jan_Kulveit 24 February 2018 01:07:19AM *  2 points [-]

Thanks for writing it.

Here are my reasons for believing the wild animal/small minds/... suffering agenda is based mostly on errors and uncertainties. Some of the uncertainties do warrant research effort, but I do not believe the current state of knowledge justifies prioritization of any kind of advocacy or value spreading.

1] The endeavour seems to be based on extrapolating intuitive models far outside the scope for which we have data. The whole suffering calculus extrapolates the concept of suffering far from the domain for which we have data, namely human experience.

2] A big part of it seems arbitrary. When expanding the moral circle toward small computational processes and simple systems, why not expand it toward large computational processes and complex systems? E.g. we can think of DNA-based evolution as a large computational/optimization process - suddenly "wild animal suffering" has a purpose and traditional environmental and biodiversity protection efforts make sense.

(Similarly, we could argue that much of "human utility" resides in the larger system structure above individual humans.)

3] We do not know how to measure and aggregate the utility of mind states. Like, really don't know. E.g. it seems to me completely plausible that the utility of 10 people reaching some highly joyful mind states is the dominant contribution over all human and animal minds.

4] Part of the reasoning usually seems contradictory. If human cognitive processes are in the privileged position of creating meaning in this universe ... well, then they are in the privileged position, and there is a categorical difference between humans and other minds. If they are not in the privileged position, why should humans impose their ideas about meaning on other agents?

5] MCE efforts directed toward AI researchers, with the intent of influencing the values of some powerful AI, may increase x-risk. E.g. if the AI is not "speciesist" and gives the same weight to satisfying the preferences of all humans and all chickens, the chickens would outnumber the humans.

Comment author: Brian_Tomasik 24 February 2018 04:56:26PM 4 points [-]

You raise some good points. (The following reply doesn't necessarily reflect Jacy's views.)

I think the answers to a lot of these issues are somewhat arbitrary matters of moral intuition. (As you said, "A big part of it seems arbitrary.") However, in a sense, this makes MCE more important rather than less, because it means expanded moral circles are not an inevitable result of a better understanding of consciousness, etc. For example, Yudkowsky's stance on consciousness is a reasonable one that is not based on a mistaken understanding of present-day neuroscience (as far as I know), yet some feel that Yudkowsky's view of moral patienthood isn't wide enough for their moral tastes.

Another possible reply (that would sound better in a political speech than the previous reply) could be that MCE aims to spark discussion about these hard questions of what kinds of minds matter, without claiming to have all the answers. I personally maintain significant moral uncertainty regarding how much I care about what kinds of minds, and I'm happy to learn about other people's moral intuitions on these things because my own intuitions aren't settled.

E.g. we can think of DNA-based evolution as a large computational/optimization process - suddenly "wild animal suffering" has a purpose and traditional environmental and biodiversity protection efforts make sense.

Or if we take a suffering-focused approach to these large systems, then this could provide a further argument against environmentalism. :)

If human cognitive processes are in the privileged position of creating meaning in this universe ... well, then they are in the privileged position, and there is a categorical difference between humans and other minds.

I selfishly consider my moral viewpoint to be "privileged" (in the sense that I prefer it to other people's moral viewpoints), but this viewpoint can have in its content the desire to give substantial moral weight to non-human (and human-but-not-me) minds.

Comment author: Ben_West  (EA Profile) 22 February 2018 06:49:53PM 2 points [-]

AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that resulting AI will not propagate the moral mistakes of its creators

This is something I also struggle with in understanding the post. It seems like we need:

  1. AI creators can be convinced to expand their moral circle,
  2. Despite (1), they do not wish to be convinced to expand their moral circle, and
  3. The AI follows this second desire and so does not expand the moral circle.

I can imagine this happening with certain religious beliefs; e.g. someone saying "I wish to think the Bible is true even if I could be convinced that the Bible is false".

But it seems relatively implausible with regard to MCE?

Particularly given that AI safety talks a lot about things like CEV, it is unclear to me whether there is really a strong trade-off between MCE and AIA.

(Note: Jacy and I discussed this via email and didn't really come to a consensus, so there's a good chance I am just misunderstanding his argument.)

Comment author: Brian_Tomasik 22 February 2018 09:23:57PM 7 points [-]

I tend to think of moral values as being pretty contingent and pretty arbitrary, such that what values you start with makes a big difference to what values you end up with even on reflection. People may "imprint" on the values they receive from their culture to a greater or lesser degree.

I'm also skeptical that sophisticated philosophical-type reflection will have significant influence over posthuman values compared with more ordinary political/economic forces. I suppose philosophers have sometimes had big influences on human politics (religions, Marxism, the Enlightenment), though not necessarily in a clean "carefully consider lots of philosophical arguments and pick the best ones" kind of way.

Comment author: Jacy_Reese 21 February 2018 12:55:20AM 7 points [-]

Thanks for the comment! A few of my thoughts on this:

Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.

If one is convinced that non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one.

Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that's been done to get media coverage and widespread attention without the technical attention to detail of Bostrom's book.

I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.

There's also social work in coordinating the AIA community.

First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.

Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can with social change. Clean meat tech research probably just helps us get clean meat sooner, rather than making technological progress happen that wouldn't otherwise occur. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.

Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.

Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious

I disagree that "many people don't think animals are conscious." I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, "Farmed animals have roughly the same ability to feel pain and discomfort as humans," and presumably even more think they have at least some ability.

Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness.

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)

Comment author: Brian_Tomasik 21 February 2018 01:02:23PM 5 points [-]

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious.

I would guess that increasing understanding of cognitive science would generally increase people's moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.

Comment author: ThomasSittler 01 January 2018 04:03:30PM 1 point [-]

Don't donate to AMF ;)

Comment author: Brian_Tomasik 02 January 2018 08:06:59AM 10 points [-]

Or maybe do donate to AMF. :)

Comment author: TruePath 31 December 2017 08:17:13AM 3 points [-]

While this isn't an answer, I suspect that if you are interested in insect welfare, one first needs a philosophical/scientific program to get a grip on what that entails.

First, unlike with other kinds of animal suffering, it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population. Thus, unlike with large animals, where one can find common ground between various consequentialist moral views, it seems quite likely that whether a particular intervention is good or actually harmful for insects will often turn on subtle questions about one's moral views, e.g., average utility or total, whether the welfare of possible future beings counts, and whether the life of the average insect is a net plus or minus.

As such, simply donating to insect welfare risks doing (what you feel is) a great moral harm unless you've carefully considered these aspects of your moral view and chosen interventions that align with them.

Secondly, merely figuring out what makes insects better off is hard. While our intuitions can go wrong, it's not too unreasonable to think that we can infer other mammals' and even vertebrates' levels of pain/pleasure from analogies to our own experiences (a dog yelping is probably in pain). However, when it comes to something as different as an insect, it's unclear whether it's even safe to assume that an insect's neural response to damage feels unpleasant. After all, at some simple enough level of complexity, we surely don't believe those lifeforms' responses to damage manifest as a qualitative experience of suffering (even though the tissues in my body can react to damage, and even change behavior to avoid further damage, without interacting with my brain, we don't think my liver can experience pain on its own). At the very least, figuring out what kinds of events might induce pain/pleasure responses in an insect would require some philosophical analysis of what is known about insect neurobiology.

Finally, it is quite likely that the indirect effects of any intervention on the wider insect ecosystem, rather than any direct effect, will have the largest impact. As such, it would be a mistake to try to engage in any interventions without first doing some in-depth research into the downstream effects.


The point of all this is that, with respect to insects, we need to support more academic study and consideration before actually engaging in any interventions.

Comment author: Brian_Tomasik 31 December 2017 01:51:09PM 3 points [-]

Nice points. :)

it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population

One exception might be identifying insecticides that are less painful than existing ones while having roughly similar effectiveness, broad/narrow-spectrum effects, etc. Other forms of humane slaughter, such as on insect farms, would also fall under this category.

Comment author: TruePath 31 December 2017 08:23:24AM 1 point [-]

I'm disappointed that the link about which invertebrates feel pain doesn't go into more detail on the potential distinction between merely learning from damage signals and the actual qualitative experience of pain. It is relatively easy to build a simple robot or write a software program that demonstrates reinforcement learning in the face of some kind of damage, but we generally don't believe such programs truly have a qualitative experience of pain. Moreover, the fact that some stimuli are both unpleasant and rewarding (e.g. they encourage repetition) indicates that these notions come apart.
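For concreteness, here is a minimal sketch of the sort of program described above (a toy example of my own, not from the comment or the linked article; the action names, reward values, and learning parameters are arbitrary assumptions): a two-action agent that adjusts its behavior in response to a "damage" signal via simple value learning, yet which nobody would credit with a qualitative experience of pain.

```python
import random

# Estimated action values, updated from experience
values = {"touch_hot_plate": 0.0, "stay_away": 0.0}
ALPHA = 0.2      # learning rate
EPSILON = 0.1    # exploration rate

def damage_signal(action):
    """Negative 'reward' when the agent damages itself, mild positive otherwise."""
    return -1.0 if action == "touch_hot_plate" else 0.1

random.seed(0)
for _ in range(1000):
    if random.random() < EPSILON:                     # occasionally explore
        action = random.choice(list(values))
    else:                                             # otherwise pick the best-valued action
        action = max(values, key=values.get)
    r = damage_signal(action)
    values[action] += ALPHA * (r - values[action])    # incremental value update

print(values)   # the agent has learned to avoid the damaging action
```

The agent "learns from damage signals" in a perfectly literal sense, which is exactly why that criterion alone seems too weak to establish a qualitative experience of pain.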

Comment author: Brian_Tomasik 31 December 2017 11:15:20AM 2 points [-]

It's a big topic area, and I think we need articles on lots of different issues. The overview piece for invertebrate sentience was just a small first step. Philosophers, neuroscientists, etc. have written thousands of papers debating criteria for sentience, so I don't expect such issues to be resolved soon. In the meanwhile, cataloguing what abilities different invertebrate taxa have seems valuable. But yes, some awareness of the arguments in philosophy of mind and how they bear on the empirical research is useful. :)

Comment author: MetricSulfateFive 31 December 2017 03:19:16AM *  14 points [-]

The closest is probably Wild-Animal Suffering Research, since they have published (on their website) a few papers on invertebrate welfare (e.g., Which Invertebrate Species Feel Pain?, An Analysis of Lethal Methods of Wild Animal Population Control: Invertebrates). However, their work doesn't focus exclusively on invertebrates, as they have published some articles that either apply to all animals (e.g., “Fit and Happy”: How Do We Measure Wild-Animal Suffering?), or only apply to vertebrates (e.g., An Analysis of Lethal Methods of Wild Animal Population Control: Vertebrates).

Animal Ethics and Utility Farm also work on issues relating to wild animal suffering. My impression is that AE mostly focuses on outreach (e.g., About Us, leaflets, FB page), and UF mostly focuses on advocacy and social change research (e.g., Study: Effective Communication Strategies For Addressing Wild Animal Suffering, Reviewing 2017 and Looking to 2018), although AE also claims to do some research (mainly moral philosophy literature reviews?). Again, these organizations don't only focus on invertebrates. In fact, AE doesn't even focus solely on wild animals, as they seem to spend significant resources on traditional animal advocacy (farm animals, veganism) as well.

I don't know of any insect-specific charities, although some may exist. Unfortunately, the Society for the Prevention of Cruelty to Insects is only satire. If we widen the scope a bit and include invertebrate-specific charities, I know only of Crustacean Compassion, but there may be others. There was also at one point a website called Invertebrate Considerations that seemed to be EA-aligned, but it's gone now and I don't think it was ever anything more than just a mockup.

Humane insecticides might be a promising area for future work.

Comment author: Brian_Tomasik 31 December 2017 04:51:27AM 7 points [-]

Great overview!

Yeah, Wild-Animal Suffering Research's plans include some invertebrate components, especially Georgia Ray’s topics.

If you're also concerned about reducing the suffering of small artificial minds in the far future, Foundational Research Institute may be of interest.

Comment author: SoerenMind  (EA Profile) 20 September 2017 08:18:50PM *  1 point [-]

After some clarification, Dayan thinks that vigour is not the thing I was looking for.

We discussed this a bit further and he suggested that the temporal difference error does track pretty closely what we mean by happiness/suffering, at least as far as the zero point is concerned. Here's a paper making the case (but it has limited scope IMO).

If that's true, we wouldn't need e.g. the theory that there's a zero point to keep firing rates close to zero.

The only problem with TD errors seems to be that they don't account for the difference between wanting and liking. But it's currently just unresolved what the function of liking is. So I came away with the impression that liking vs. wanting, and not the zero point, is the central question.

I've seen one paper suggesting that liking is basically the consumption of rewards, which would bring us back to the question of the zero point though. But we didn't find that theory satisfying. E.g. food is just a proxy for survival. And as the paper I linked shows, happiness can follow TD errors even when no rewards are consumed.

Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc, similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon. I don't know if that would mean that liking has no function.

Any thoughts?

Comment author: Brian_Tomasik 21 September 2017 03:37:07AM 0 points [-]

Interesting. :)

Daswani and Leike (2015) also define (p. 4) happiness as the temporal difference error (in an MDP), and for model-based agents, the definition is, in my interpretation, basically the common Internet slogan that "happiness = reality - expectations". However, the authors point out (p. 2) that pleasure = reward != happiness. This still leaves open the issue of what pleasure is.

Personally I think pleasure is more morally relevant. In Tomasik (2014), I wrote (p. 11):

After training, dopamine spikes when a cue appears signaling that a reward will arrive, not when the reward itself is consumed [Schultz et al., 1997], but we know subjectively that the main pleasure of a reward comes from consuming it, not predicting it. In other words, in equation (1), the pleasure comes from the actual reward r, not from the amount of dopamine δ.

In this post commenting on Daswani and Leike (2015), I said:

I personally don't think the definition of "happiness" that Daswani and Leike advance is the most morally relevant one, but the authors make an interesting case for their definition. I think their definition corresponds most closely with "being pleased of one's current state in a high-level sense". In contrast, I think raw pleasure/pain is most morally significant. As a simple test, ask whether you'd rather be in a state where you've been unexpectedly notified that you'll get a cookie in a few minutes or whether you'd rather be in the state where you actually eat the cookie after having been notified a few minutes earlier. Daswani and Leike's definition considers being notified about the cookie to be happiness, while I think eating the cookie has more moral relevance.
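To make the contrast concrete, here is a minimal tabular TD(0) sketch of the cookie scenario described above (a toy illustration, not taken from Daswani and Leike (2015) or Tomasik (2014); the state names, cue probability, and learning rate are arbitrary assumptions). After training, the TD error spikes when the unpredictable cue appears and is near zero when the cookie is actually eaten, whereas the raw reward r arrives only at eating.

```python
import random

GAMMA = 1.0      # no discounting, for simplicity
ALPHA = 0.1      # learning rate
P_CUE = 0.3      # the cookie notification arrives unpredictably
V = {"idle": 0.0, "notified": 0.0, "eating": 0.0, "no_cookie": 0.0}

def step(s, s_next, r):
    """One TD(0) update; returns the TD error ('reality - expectations') for this transition."""
    delta = r + GAMMA * V[s_next] - V[s]
    V[s] += ALPHA * delta
    return delta

def run_episode(record=None):
    if random.random() < P_CUE:                       # cue appears
        d_cue = step("idle", "notified", r=0.0)       # no reward yet
        d_eat = step("notified", "eating", r=1.0)     # reward only when eating
        if record is not None:
            record.append(("cue appears", d_cue))
            record.append(("cookie eaten", d_eat))
    else:
        step("idle", "no_cookie", r=0.0)              # nothing happens this time

random.seed(0)
for _ in range(20000):                                # train until values roughly converge
    run_episode()

trace = []
while not trace:                                      # sample one cued episode after training
    run_episode(record=trace)
for event, delta in trace:
    print(f"{event}: TD error = {delta:+.2f}")
# Roughly: "cue appears" gives a TD error near 1 - P_CUE, "cookie eaten" gives ~0,
# while the reward r = 1 occurs only at eating.
```

If happiness is identified with the TD error, it lives at the notification; if pleasure is identified with the reward r, it lives at the eating, which is the divergence the cookie example is meant to highlight.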


Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc, similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon.

I'm not sure I understand, but I wrote a quick thing here inspired by this comment. Do you think that's what he meant? If so, may I attribute the idea to him/you? It seems fairly plausible. :) Studying what separates red from blue might help shed light on this topic.

In response to S-risk FAQ
Comment author: gworley3  (EA Profile) 18 September 2017 07:30:39PM 4 points [-]

One thing I find meta-interesting about s-risk is that it is included in the sort of thing we were pointing at in the late 90s before we started talking about x-risk. So to my mind s-risk has always been part of the x-risk mitigation program, but, as you make clear, that's not how it's been communicated.

I wonder if there are types of risks for the long-term future we implicitly would like to avoid but have accidentally explicitly excluded from both x-risk and s-risk definitions.

In response to comment by gworley3  (EA Profile) on S-risk FAQ
Comment author: Brian_Tomasik 19 September 2017 12:11:11AM 3 points [-]

the sort of thing we were pointing at in the late 90s before we started talking about x-risk

I'd be interested to hear more about that if you want to take the time.
