Hi EA Forum,
I'm Luke Muehlhauser and I'm here to answer your questions and respond to your feedback about the report on consciousness and moral patienthood I recently prepared for the Open Philanthropy Project. I'll be here today (June 28th) from 9am Pacific onward, until the flow of comments drops off or I run out of steam, whichever comes first. (But I expect to be available through at least 3pm and maybe later, with a few breaks in the middle.)
Feel free to challenge the claims, assumptions, and inferences I make in the report. Also feel free to ask questions that you worry might be "dumb questions," and questions you suspect might be answered somewhere in the report (but you're not sure where) — it's a long report! Please do limit your questions to the topics of the report, though: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, etc.
As noted in the announcement post, much of the most interesting content in the report is in the appendices and even some footnotes, e.g. on unconscious vision, on what a more satisfying theory of consciousness might look like, and a visual explanation of attention schema theory (footnote 288). I'll be happy to answer questions about those topics as well.
I look forward to chatting with you all!
EDIT: Please post different questions as separate comments, for discussion threading. Thanks!
EDIT: Alright, I think I replied to everything. My thanks to everyone who participated!
Got it, thanks for clarifying. Off the top of my head, I can't think of any unconscious or at least "hidden" processing that is known to work in the relatively sophisticated manner you describe, but I might have read about such cases and am simply not remembering them at the moment. Certainly an expert on unconscious/hidden cognitive processing might be able to name some fairly well-characterized examples, and in general I find it quite plausible that such cognitive processes occur in (e.g.) the human brain (and thus potentially in the brains of other animals). Possibly the apparent cognitive operations undertaken by the non-verbal hemisphere in split-brain patients would qualify, though they seem especially likely to qualify as "conscious" under the Schwitzgebel-inspired definition even if they are not accessible to the hemisphere that can make verbal reports.
Anyway, the sort of thing you describe is one reason why, in section 4.2, my probabilities for "consciousness of a sort I intuitively morally care about" are generally higher than my probabilities for "consciousness as loosely defined by example above." Currently, I don't think I'd morally care about such cognitive processes so long as they were "unconscious" (as loosely defined by example in my report), but I think it's at least weakly plausible that if I were able to carry out my idealized process for making moral judgments, I would conclude that I care about some such unconscious processes. I don't use Brian's approach of "mere" similarities in a multi-dimensional concept space, but regardless I could still imagine myself morally caring about certain types of unconscious processes similar to those you describe, even if I don't care about some other unconscious processes that may be even more similar (in Brian's concept space) to the processes that do instantiate "conscious experience" (as loosely defined by example in my report). (I'd currently bet against making such moral judgments, but not super-confidently.)