Hi EA Forum,
I'm Luke Muehlhauser and I'm here to answer your questions and respond to your feedback about the report on consciousness and moral patienthood I recently prepared for the Open Philanthropy Project. I'll be here today (June 28th) from 9am Pacific onward, until the flow of comments drops off or I run out of steam, whichever comes first. (But I expect to be available through at least 3pm and maybe later, with a few breaks in the middle.)
Feel free to challenge the claims, assumptions, and inferences I make in the report. Also feel free to ask questions that you worry might be "dumb questions," and questions you suspect might be answered somewhere in the report (but you're not sure where) — it's a long report! Please do limit your questions to the topics of the report, though: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, etc.
As noted in the announcement post, much of the most interesting content in the report is in the appendices and even some footnotes, e.g. on unconscious vision, on what a more satisfying theory of consciousness might look like, and a visual explanation of attention schema theory (footnote 288). I'll be happy to answer questions about those topics as well.
I look forward to chatting with you all!
EDIT: Please post different questions as separate comments, for discussion threading. Thanks!
EDIT: Alright, I think I replied to everything. My thanks to everyone who participated!
I'm not sure what you mean by "objective definition" or "objectively correct answer," but I don't think I think of consciousness as being "objective" in your sense of the term.
The final question, for me, is "What should I care about?" I elaborate my "idealized" process for answering this question in section 6.1.2. Right now, my leading guess for what I'd conclude upon going through some approximation of that idealized process is that I'd care about beings with valenced conscious experience, albeit with different moral weights depending on a variety of other factors (early speculations in Appendix Z7).
But of course, I don't know quite what sense of "valenced conscious experience" I'd end up caring about upon undergoing my idealized process for making moral judgments, and the best I can do at this point is something like the definition by example (at least for the "consciousness" part) that I begin to elaborate in section 2.3.1.
Re: Type A physicalism, aka Type A materialism. As mentioned in section 2.3.2, I do think my current view is best thought of as "'type A materialism,' or perhaps toward the varieties of 'type Q' or 'type C' materialism that threaten to collapse into 'type A' materialism anyway…" (see the footnote after this phrase for explanations). One longer article that might help clarify how I think about "type A materialism" w.r.t. consciousness or other things is Mixed Reference: The Great Reductionist Project and its dependencies.
That said, I do think the "triviality" objection is a serious one (Ctrl+F the report for "triviality objection to functionalism"), and I haven't studied the issue enough to have a preferred answer for it, nor am I confident there will ever be a satisfying answer to it — at least, for the purposes of figuring out what I should care about. Brian wrote a helpful explainer on some of these issues: How to Interpret a Physical System as a Mind. I endorse many of the points he argues for there, though he and I end up with somewhat different intuitions about what we morally care about, as discussed in the notes from our conversation.
I think Tomasik's essay is a good explanation of objectivity in this context; see in particular the most relevant brief section.
If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that s…