Hi EA Forum,
I'm Luke Muehlhauser, and I'm here to answer your questions and respond to your feedback about the report on consciousness and moral patienthood I recently prepared for the Open Philanthropy Project. I'll be here today (June 28th) from 9am Pacific onward, until the flow of comments drops off or I run out of steam, whichever comes first. (But I expect to be available through at least 3pm, and maybe later, with a few breaks in the middle.)
Feel free to challenge the claims, assumptions, and inferences I make in the report. Also feel free to ask questions that you worry might be "dumb questions," and questions you suspect might be answered somewhere in the report (but you're not sure where) — it's a long report! Please do limit your questions to the topics of the report, though: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, etc.
As noted in the announcement post, much of the most interesting content in the report is in the appendices and even some footnotes, e.g. on unconscious vision, on what a more satisfying theory of consciousness might look like, and a visual explanation of attention schema theory (footnote 288). I'll be happy to answer questions about those topics as well.
I look forward to chatting with you all!
EDIT: Please post different questions as separate comments, for discussion threading. Thanks!
EDIT: Alright, I think I replied to everything. My thanks to everyone who participated!
These are reasonable questions, and I won't be able to satisfactorily address them in a short comment reply. Nevertheless, I'll try to give you a bit more of a sense of "where I'm coming from" on the topic of meta-ethics.
As I say in the report,
We don't plan to release an "empirical-only" version of the report, but I think those with different meta-ethical views will be able to read the empirical sections of the report — e.g. most of section 3, appendices C-E, and some other sections — and think for themselves about what those empirical data imply given their own meta-ethical views.
However, your primary question seems to be about why Open Phil is willing to make decisions that are premised on particular views about meta-ethics that we find plausible (e.g. ideal advisor theory) rather than a broader survey of (expert?) views about meta-ethics. I'll make a few comments about this.
First: the current distribution of expert opinion is a significant input to our thinking, but we don't simply defer to it. This is true with respect to most topics that intersect with our work, not just meta-ethics. Instead, we do our best to investigate decision-relevant topics deeply enough ourselves that we develop our own opinions about them. Or, as we said in our blog post on Hits-based giving (re-formatted slightly):
Second: one consequence of investigating topics deeply enough to form our own opinions about them, rather than simply deferring to what seems to be the leading expert opinion on the topic (if any exists), is that (quoting again from "Hits-based giving") "we don't expect to be able to fully justify ourselves in writing." That is why, throughout my report, I repeat that it does not really "argue" for the assumptions I make and the tentative conclusions I come to. But I did make an effort to refer the reader to related readings and to give some sense of "where I'm coming from."
Third: even if I spent an entire year writing up my best case for (e.g.) ideal advisor theory, I don't think it would convince you, and I don't think it would be thoroughly convincing to myself, either. We can't solve moral philosophy. All we can do is take pragmatic steps of acceptable cost to reduce our uncertainty as we aim to (as I say in the report) "execute our mission to 'accomplish as much good as possible with our giving' without waiting to first resolve all major debates in moral philosophy."
In the end, anyone who is trying to do as much "good" as possible — or even just "more good than bad" — must either (1) wrestle with the sorts of difficult issues we're wrestling with (or the similarly unsolved problems of some other moral framework), and come to some "best guesses for now," or (2) implicitly make assumptions about ~all those same fraught issues anyway, but without trying to examine and question them. (At least, this is true so long as "good" isn't just defined with respect to domain-narrow, funder-decided "goods" like "better scores by American children on standardized tests.")
We don't think it's possible for Open Phil or any other charitable project to definitively answer such questions, but we do prefer to act on questioned/examined assumptions rather than on largely unexamined assumptions. Hence our reports on difficult philosophical questions summarize what we did to examine these questions and what our best-guess conclusions are for the moment, but those reports do not convincingly argue for any solid "answers" to these questions. (Besides the moral patienthood report, see also e.g. here and here.)
Of course, you might think the above points are reasonable, but still want to know more about why I find ideal advisor theory particularly compelling among meta-ethical views. I can't think of anything especially brief to say, other than "that's the family of views I find most plausible after having read, thought, and argued about meta-ethics for several years." I haven't personally written a defense of ideal advisor theory, and I'm not aware of a published defense of ideal advisor theory that I would thoroughly endorse. If you're curious to learn more about my particular views, perhaps the best I can do is point you to Pluralistic moral reductionism, Mixed Reference: The Great Reductionist Project, ch. 9 of Miller (2013), and Extrapolated volition (normative moral theory).
Another question you seem to be asking is why Open Phil chose to produce a report with this framing first, as opposed to "a general overview of what many [meta-ethical] views would say." I think this is because ideal advisor theory is especially popular among the people at Open Phil who engage most deeply with the details of our philosophical framework for giving. As far as I know, all these people (myself included) have substantial uncertainty over meta-ethical views and normative moral theories (see footnote 12 on normative uncertainty), but (as far as I know) we put unusually high "weight" on ideal advisor theories — either as a final "theory" of normative morality, or as a very important input to our moral thinking. Because of this, it seemed likely to be more informative (to our decision-making about grants) per unit effort to conduct an investigation that was a mix of empirical data (not premised on any meta-ethical theory) and moral philosophy (premised on some kind of ideal advisor theory), rather than to produce a more neutral survey of the implications of a large variety of meta-ethical theories, most of which we (the people at Open Phil who engage most deeply with the details of our philosophical framework for giving) have considered before and decided to give little or no weight to (again, as far as I know).
One more comment on ideal advisor theory: What I mean by ideal advisor theory might be less narrow than what you're thinking of. For example, on my meaning, ideal advisor theory could (for all I know) result in reflective equilibria as diverse as contractarianism, deontological ethics, hedonic utilitarianism, egoism, or a thorough-going nihilism, among other views.
That said, as I say in the report, I don't think my tentative moral judgments in the report depend on my meta-ethical views, and the empirical data I present don't depend on them either. Also, we continue to question and examine the assumptions behind our current philosophical framework for giving, and I expect that framework to evolve over time as we do so.
A final clarification: another reason I discuss my meta-ethical views so much (albeit mostly in the appendices) is that I suspect one's ethical views unavoidably infect one's way of discussing the relevant empirical data. So I chose to explain my ethical views in part so that readers can interpret my presentation of the empirical data with some sense of what biases I may bring to that discussion as a result of those views.
Yeah, but what you're doing is antithetical to that. You're basically assuming that you have solved a major debate in philosophy and not paying attention to uncertainty. At the very least, we should know more clearly if Open Phil is going to be ...