Luke Muehlhauser of the Open Philanthropy Project recently published a major report on animal consciousness and the question of "moral patienthood" — i.e. which beings merit moral concern? The purpose of the report is to inform Open Phil's grantmaking, especially in its farm animal welfare focus area. Luke would like to hear your questions and objections, and he will host an "Ask Me Anything" session on the issues discussed in the report, here on the Effective Altruism Forum, starting at 9am Pacific on Wednesday, June 28th.

I hope you will read the report and then join in with lots of questions about the topics it covers: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, and more!

Luke would also like to note that much of the most interesting content in the report is in the appendices and even some footnotes, e.g. the appendix on unconscious vision, the one on what a more satisfying theory of consciousness might look like, and the explanation of attention schema theory in footnote 288.

 

(In case it's confusing why I'm posting this: I'm coming on as a moderator of the Forum, and will post shortly with more info about that.)

Comments

I super highly recommend reading this report. In full, including many of the appendices (and footnotes :) )

I thought it was really interesting, and helpful for thinking this question through and understanding the state of the evidence and arguments that are out there (unfortunately, there is much less to go on than I'd expected, though).

I was the most proximate audience for the report, so discount my recommendation as much as feels appropriate with that in mind.

Update: the AMA is now live, here.

May we submit questions here to be asked on our behalf if we don't think we'll be free to ask them live during the AMA on Wednesday?

Sure. In that case, I won't reply to them (if they aren't posted directly to the AMA) until the AMA is "winding down," or something.

Hey, that sounds great to me. Thanks. Here's my question.

Do you think science or philosophy can meaningfully separate the capacity to experience suffering or pain from however else consciousness is posited to be distributed across species? What would be fruitful avenues of research for effective altruists to pursue if it's possible to solve the problem in the first question, without necessarily addressing whatever remains of consciousness?

Hi Evan, your question has now been posted to the AMA here.

This is a good start on some of the issues, but it needs to be bulked up with information directly from neuroscientists.

For instance, some very senior people in the Stanford neuroscience community think that an essential difference between animals and people may be that astrocytes, the brain's "helper cells," differ so much between the two. Among many other things, astrocytes help to create and destroy synapses.

Neuroscientists also routinely do mouse experiments, and a few have very sophisticated answers to ethical questions about what they do.

There are a lot of topics in EA ethics that benefit from a grounding in neuroscience and neuroethics. Both of these fields also contain many EA opportunities themselves. If money is being put down, then it's time to add some expert scientific opinion.

Has anyone ever suggested that astrocytes may be necessary for, or at least a strong indicator of, phenomenal consciousness? If so, could you point me to the claim?

I don't think we know yet!

But here is Ben Barres' related NIH Grant: "An Astrocytic Basis for Humanity"

https://projectreporter.nih.gov/project_info_description.cfm?aid=9068256&icde=34843317&ddparam=&ddvalue=&ddsub=&cr=2&csb=default&cs=ASC&pball=

Sadly, Prof. Barres, one of the most respected neuroscientists in the country, has terminal cancer.

I don't know what will happen to this grant; Stanford's astrocyte research is at risk if the funding is lost. However, there are plenty of people at Stanford now who would continue the tradition if they could.

Hi, Steve. I'm an EA and also a neuroscience PhD student who studies astrocytes. As you might imagine, I totally agree that neuroscience perspectives are valuable for EA decisions. It's dismaying to me that physical and computer scientists are so well-represented in EA, but there are so few life scientists. I'm trying to figure out why this might be. Any ideas?

Regarding the ethics of animal experiments, I'm working on a project to create educational materials about the importance of environmental enrichment (IMHO the most important welfare issue for laboratory animals). I've actually applied to EA Grants for funding to create a website aimed at educating life scientists about this issue.

On the topic of astrocytes, you'll be happy to know that I asked Adam Marblestone about glia after his talk at EA Global in Boston. :) https://youtu.be/0eX1UqMmaLM?t=21m27s

I think certain arguments from neuroscience were definitely considered: see the extended section on "necessary and sufficient conditions," which looks at the cortex-required view, and the section right before it on "potentially consciousness-indicating factors," which looks at "neuroanatomical similarity" and has a whole appendix associated with it. These two sections probably cover the types of argument you're making, even if they don't address your specific mechanism, so pointing out what he missed in the relevant section would probably be helpful.


Be the change you want to see in this world. You are clearly motivated and knowledgeable about the matter enough to try emailing some neuroscientists about the matter. :)

Thanks, I've been talking with 'em every week :) .

What's quite clear to me, whether or not it's morally justifiable in terms some EAs will agree with:

If we do not let them do some unappealing things to mice, that will cost millions of human lives.

The question of animal experimentation bears directly on EA funding decisions.

There is no "vegan" way out for some kinds of studies. I personally would volunteer for some kinds of experiments, if I had just a short time to live. Even that would not cover all of the necessary cases, and I might be prevented.

For example, we urgently need to map fluid flows in the brain. When we sleep, flows in the "glymphatic system" turn on and off. We barely understand this phenomenon.

If we knew more, we could try new treatments for Alzheimer's disease, stroke, sleep disorders and mental illness. Medication dosing would become more accurate, and we might even know more about how cancers in the brain spread.

Institutional Review Boards get confused about these issues, too. Without clarity, both fighting disease and human enhancement (for good or bad) will be hampered.

That's why we need clear vision in neuroethics. Ethical theory very quickly feeds into research approval and funding determinations.