Hi EA Forum,

I'm Luke Muehlhauser and I'm here to answer your questions and respond to your feedback about the report on consciousness and moral patienthood I recently prepared for the Open Philanthropy Project. I'll be here today (June 28th) from 9am Pacific onward, until the flow of comments drops off or I run out of steam, whichever comes first. (But I expect to be available through at least 3pm and maybe later, with a few breaks in the middle.)

Feel free to challenge the claims, assumptions, and inferences I make in the report. Also feel free to ask questions that you worry might be "dumb questions," and questions you suspect might be answered somewhere in the report (but you're not sure where) — it's a long report! Please do limit your questions to the topics of the report, though: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, etc.

As noted in the announcement post, much of the most interesting content in the report is in the appendices and even some footnotes, e.g. on unconscious vision, on what a more satisfying theory of consciousness might look like, and a visual explanation of attention schema theory (footnote 288). I'll be happy to answer questions about those topics as well.

I look forward to chatting with you all!

EDIT: Please post different questions as separate comments, for discussion threading. Thanks!

EDIT: Alright, I think I replied to everything. My thanks to everyone who participated!

Has OpenPhil (and in particular Lewis Bollard), to your knowledge, altered any grant recommendations based on your report, and if so, how?

We have!

Prior to beginning this investigation, we were on the fence about whether we should recommend grants related to fish welfare. As a result of initial findings from this report (long before publication), we decided to go ahead with investigating fish welfare grants:

A third key output from this investigation is that we decided (months ago) to begin investigating possible grants targeting fish welfare. This is largely due to my failure to find any compelling reason to “draw lines” in phylogeny (see previous section). As such, I could find little justification for suggesting that there is a knowably large difference between the probability of chicken consciousness and the probability of fish consciousness. Furthermore, humans harm and kill many, many more fishes than chickens, and some fish welfare interventions appear to be relatively cheap.

Since then, Lewis has recommended >$1 million in grants related to fish welfare: 1, 2, 3.

That said, keep in mind this caveat from the report:

Of course, this decision to investigate possible fish welfare grants could later be shown to have been unwise, even if the Open Philanthropy Project assumes my personal probabilities of consciousness in different taxa, and even if those probabilities don’t change. For example, I have yet to examine other potential criteria for moral patienthood besides consciousness, and I have not yet examined the question of moral weight (see above). The question of moral weight, especially, could eventually undermine the case for fish welfare grants, even if the case for chicken welfare grants remains robust. Nevertheless, and consistent with our strategies of hits-based giving and worldview diversification, we decided to seek opportunities to benefit fishes in case they should be considered moral patients with non-negligible weight.

(1) To what degree did your beliefs about the consciousness of insects (if insects are too broad a category please just focus on the common fruit fly) change from completing this report and what were the main reasons for those beliefs changing? I would be particularly interested in an answer that covers the following three points: (i) the rough probability that you previously assigned to them being conscious, (ii) the rough probability that you now assign to them being conscious and (iii) the main reasons for the change in that probability.

(2) Do you assign a 0% probability to electrons being conscious?

(3) In section 5.1 you write

I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from which I’ve gotten extensive feedback is David Chalmers.)

David Chalmers seems like an interesting choice for the one long-time “consciousness expert” to receive extensive feedback from. Why was he the only one that you got extensive feedback from? And of the other consciousness experts that you would like to receive extensive feedback from, do you think that most of them would disagree with some part of the report in a similar way, and if you think they would, what would that disagreement or those disagreements be?

(4) A while ago Carl Shulman put out this document detailing research advice. Can you please do the same, or if you already have a document like this can you please point me to it? I would probably find it useful and I would guess some others would too.

Re: (1), I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Re: (2). This question raises issues related to Pascal's Mugging. I don't pretend to have a solution, but some especially relevant discussions are Pascal's Muggle: Infinitesimal Priors and Strong Evidence, Why we can’t take expected value estimates literally (even when they’re unbiased), Modeling Extreme Model Uncertainty, and Sequence thinking vs. cluster thinking. As mentioned here, the Open Philanthropy Project remains eager to get more clarity on how we should think about making decisions under different kinds of uncertainty, and we expect to write more about this issue in the future.

In the meantime, the possibility of electron consciousness does not currently inform my actions.

Re: (3). You're right that Chalmers is "an interesting choice" for me to get feedback from. For example, Chalmers is a property dualist and panpsychist, which in some ways is about as different a position on consciousness from mine as you could find. On the other hand, Chalmers is (in my opinion) an unusually sharp thinker about consciousness, and also he has a deserved reputation for his ability to give critical feedback to others from the perspective of their own view. And, as hoped, Chalmers' feedback on an earlier draft of my report was very helpful, and I invested dozens of hours improving the draft in response to his feedback alone.

There were a few other consciousness researchers (whom I won't name) that I had hoped to get feedback from, but they weren't interested in giving it. That's not surprising, since my report is so different from the type of work that consciousness researchers typically engage with.

My report makes so many claims (or at least, "guesses") that I have no doubt that if other consciousness experts gave extensive feedback on it, they would find plenty with which they disagree. In some cases, I know from their writing some specific things they would disagree with. But in many cases, I'm not sure where they would disagree, both because I haven't read all their works on consciousness, and because most consciousness experts have only written about a tiny portion of the issues covered (at least briefly) in my report.

Re: (4). This question is beyond the scope of the intended purpose of this AMA, but I'll make a couple brief comments. It would take a lot of work for me to write a similar document that usefully complements Carl's, but I may do so one day. An old post of mine on this general topic is Scholarship: How to Do It Efficiently, but it's pretty narrow in scope.

Quick note: I agree with Benito that it's preferable to split different questions into separate comments, but in this case don't worry about doing so. I'll reply to all your questions as soon as I can, though I'm going to answer some other, quicker-to-answer questions first.

(Meta: It might be more helpful to submit individual questions as separate comments, so that people can up vote them separately and people's favourite questions (and associated answers) can rise to the top.)

Thanks. That feedback was useful :)

In future, I will submit individual questions as separate comments.

Thanks for doing this AMA. I'm curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question "Is an insect conscious?" or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?

The Open Phil conversation notes with Brian Tomasik say:

Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism

(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik's well-known analogy is that there's no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there's something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn't give you an objective definition, in the same way that knowing one piece of wood on four legs is a table, or even having several examples, doesn't give you an objective definition of a table.)

However, in the report, you write as though there is an objective definition (e.g. in the "Consciousness, innocently defined" section), and I feel most readers of the report will get that impression, e.g. that there's an objective answer as to whether insects are conscious.

Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it's still useful to use common sense rhetoric that treats it as objective, and you don't think it's that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there's still enough likelihood of Type B that you focus on questions like "If Type B is true, then is an insect conscious?" and would just shorthand this as "Is an insect conscious?" because e.g. if Type A is true, then consciousness research is not that useful in your view.

I'm not sure what you mean by "objective definition" or "objectively correct answer," but I don't think I think of consciousness as being "objective" in your sense of the term.

The final question, for me, is "What should I care about?" I elaborate my "idealized" process for answering this question in section 6.1.2. Right now, my leading guess for what I'd conclude upon going through some approximation of that idealized process is that I'd care about beings with valenced conscious experience, albeit with different moral weights depending on a variety of other factors (early speculations in Appendix Z7).

But of course, I don't know quite what sense of "valenced conscious experience" I'd end up caring about upon undergoing my idealized process for making moral judgments, and the best I can do at this point is something like the definition by example (at least for the "consciousness" part) that I begin to elaborate in section 2.3.1.

Re: Type A physicalism, aka Type A materialism. As mentioned in section 2.3.2, I do think my current view is best thought of as "'type A materialism,' or perhaps toward the varieties of 'type Q' or 'type C' materialism that threaten to collapse into 'type A' materialism anyway…" (see the footnote after this phrase for explanations). One longer article that might help clarify how I think about "type A materialism" w.r.t. consciousness or other things is Mixed Reference: The Great Reductionist Project and its dependencies.

That said, I do think the "triviality" objection is a serious one (Ctrl+F the report for "triviality objection to functionalism"), and I haven't studied the issue enough to have a preferred answer for it, nor am I confident there will ever be a satisfying answer to it — at least, for the purposes of figuring out what I should care about. Brian wrote a helpful explainer on some of these issues: How to Interpret a Physical System as a Mind. I endorse many of the points he argues for there, though he and I end up with somewhat different intuitions about what we morally care about, as discussed in the notes from our conversation.

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is just frequently qualifies that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think "Oh, I can just assign 0% to fruit flies!" but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

My hope was that the Type A-ness / subjectivity of the concept of "consciousness" I'm using would be clear from sections 2.3.1 and 2.3.2, and then I can write paragraphs like the one above about fruit fly consciousness, which refers back to the subjective notion of consciousness introduced in section 2.3.

But really, I just find it very cumbersome to write in detail and at length about consciousness in a way that makes clear, for every sentence containing consciousness words, that I mean subjective / Type A-style consciousness. It's similar to what I say in the report about fuzziness:

given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)

But then, throughout the report, I make liberal use of "normal" phrases about consciousness such as what's conscious vs. not-conscious, "becoming" conscious or not conscious, what's "in" consciousness or not, etc. It's just really cumbersome to write in any other way.

Another point is that, well, I'm not just a subjectivist / Type A theorist about consciousness, but about nearly everything. So why shouldn't we feel fine using more "normal" sentence structures to talk about consciousness, if we feel fine talking about "living things" and "mountains" and "sorting algorithms" and so on that way? I don't have any trouble talking about the likelihood that there's a mountain in such-and-such city, even though I think "mountain" is a layer of interpretation we cast upon the world.

That pragmatic approach makes sense and helps me understand your view better. Thanks! I do feel like the consequences of suggesting objectivism for consciousness are more significant than for "living things," "mountains," and even terms that are themselves very important like "factory farming."

Consequences being things like (i) whether we get wrapped up in the ineffability/hard problem/etc. such that we get distracted from the key question (for subjectivists) of "What are the mental things we care about, and which beings have those?" and (ii) in the particular case of small minds (e.g. insects, simple reinforcement learners), whether we try to figure out their mental lives based on objectivist speculation (which, for subjectivists, is misguided) or force ourselves to decide what the mental things we care about are, and then thoughtfully evaluate small minds on that basis. I think evaluating small minds is where the objective/subjective difference really starts to matter.

Also, to a lesser extent, (iii) how much we listen to "expert" opinion outside of just people who are very familiar with the mental lives of the being in question, and (iv) unknown unknowns and keeping a norm of intellectual honesty, which seems to apply more to discussions of consciousness than of mountains/etc.

I don't think I understand what you mean by consciousness being objective. When you mention "what processes, materials, etc. we subjectively choose to use as the criteria for consciousness", this sounds to me as if you're talking about people having different definitions of consciousness, especially if the criteria are meant as definitive rather than indicative. However, presumably in many cases whether the criteria are present will be an objective question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

A good example of what thebestwecan means by "objectivity" is the question "If a tree falls in a forest and no one is around to hear it, does it make a sound?" He and I would say there's no objective answer to this question because it depends what you mean by "sound". I think "Is X conscious?" is a tree-falls-in-a-forest kind of question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

Yeah, ontologically primitive, or at least so clearly a natural kind (like the difference between gold atoms and potassium atoms) that people wouldn't really dispute the boundaries of the concept. (Admittedly, there might be edge cases where even what counts as a "gold atom" is up for debate.)

The idea of a natural kind is helpful. The fact that people mean different things by "consciousness" seems unsurprising, as that's the case for any complex word that people have strong motives to apply (in this case because consciousness sounds valuable). It also tells us little about the moral questions we're considering here. Do you guys agree or am I missing something?

I agree that it tells us little about the moral questions, but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven't gotten that far.)

One thing that makes consciousness interesting is that there's such a wide spectrum of views, from some people thinking that among current entities on Earth, only humans have consciousness, to some people thinking that everything has consciousness.

but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven't gotten that far.)

Who does and doesn't agree with that, then? You and thebestwecan clearly do. Do you know the opinions of prominent philosophers in the field? For instance David Chalmers, who sounds like he's among them(?)

IMO, the philosophers who accept this understanding are the so-called "type-A physicalists" in Chalmers's taxonomy. Here's a list of some such people, but they're in the minority. Chalmers, Block, Searle, and most other philosophers of mind aren't type-A physicalists.

IMO, the philosophers who accept this understanding are the so-called "type-A physicalists" in Chalmers's taxonomy.

I'm not wholly sure I understand the connection between this and denying that consciousness is a natural kind. The best I can do (and perhaps you or thebestwecan can do better? ;-) ) is:

"If consciousness is a natural kind, then the existence of that natural kind is a separate fact from the existence of such-and-such a physical brain state (and vica versa)"

You're right that there's probably not a strict logical relationship between those things. Also, I should note that I have a poor understanding of the variety of different type-B views. What I usually have in mind as "type B" is the view that the connection between consciousness and brain processing is only something we can figure out a posteriori, by noticing the correlation between the two. If you hold that view, it presumably means you think consciousness is a definite thing that we discover introspectively. For example, we can say we're conscious of an apple in front of us but are not conscious of a very fast visual stimulus. Since we generally assume most of these distinctions between conscious and unconscious events are introspectively clear-cut (though some disagree), there would seem to be a fairly sharp distinction within reality itself between conscious vs unconscious? Hence, consciousness would seem more like a natural kind.

In contrast, the type-A people usually believe that consciousness is a label we give to certain physical processes, and given the complexity of cognitive systems, it's plausible that different people would draw the boundaries between conscious vs unconscious in different places (if they care to make such a distinction at all). Daniel Dennett, Marvin Minsky, and Susan Blackmore are all type-A people and all of them make the case that the boundaries of consciousness are fuzzy (or even that the distinction between conscious and unconscious isn't useful at all).

In theory, there could be a type-A physicalist who believes that there will turn out to be some extremely clean distinction in the brain that captures the difference between consciousness and unconsciousness, such that almost everyone would agree that this is the right way to carve things up. In this case, the type-A person could still believe consciousness will turn out to be a natural kind.

(I'm not an expert on either the type A/B distinction or natural kinds, so apologies if I'm misusing concepts here.)

How relevant is "the mirror test" in thinking about consciousness?

Among the PCIFs mentioned in section 3.2.2, I consider it to be one of the more interesting ones, principally because it may be indicative of certain kinds of "self-modeling," and the theoretical approaches to phenomenal consciousness I intuitively find most promising are those which involve certain kinds of self-modeling.

That said, I still give it very little weight, since it's very unclear what, exactly, different mirror-related behaviors imply about the self-modeling capacities of the animal, and how those relate to the self-modeling capacities that may be necessary for conscious experience. For example see the quotes and sources in footnote 109.

One thing I found extremely nice about your report is that it could serve EAs (and people in general) as a basis for shared terminology in discussions! If two people from different backgrounds wanted to have a discussion about philosophy of mind or animal consciousness, which texts would you recommend they both read in order to prepare themselves? (Not so much in terms of familiarity with popular terminology, but rather useful terminology.) Can you think of anything really good that is shorter than this report?

Unfortunately, the literature on consciousness is in even greater terminological disarray than many other fields, for understandable reasons: (1) to an unusual degree, we still don't know what we're talking about w.r.t. consciousness, and as a result, (2) consciousness studies is an extremely interdisciplinary "field," with contributions from scholars of nearly non-overlapping professional backgrounds. So, I'm not sure what a useful source on "useful terminology" would look like. Possibly a broad and shallow survey like my report or Weisberg (2014) is the best we can do at this stage.

How much weight do you give to the self-reports of long-term meditation practitioners on their experience of consciousness? Do they have a privileged perspective on the true nature of conscious experience?

I do suspect it’s possible for long-term meditation practitioners to have observations about the “internal” explananda of consciousness that non-practitioners (such as myself) can’t have. Susan Blackmore may be one such person. Of course, before taking any particular report seriously I’d want to check whether the reporting practitioner seems to be a reliable and clear reporter of their internal experience, whether their reports can be replicated by others in a certain way, etc.

As for determining the “true nature” of conscious experience: that’s more a matter of normal science, and especially of proposing cognitive algorithms that would instantiate the observed explananda of consciousness, and the meditation practitioners who are helpful for increasing our understanding of the explananda of consciousness might or might not happen to also be useful for doing the scientific work of coming up with strong explanations of those explananda.

From a productivity standpoint, how do you structure your life / typical day in order to get research done? How long does it take to produce a report of this quality, from both total time spent and wall clock time (start date minus finish date)?

These questions are beyond the intended scope of this AMA, but I’ll make a few quick remarks about the second question.

The calendar time spent is pretty uninformative, since large chunks of it passed by while I was working on other Open Phil projects that were (temporarily) more urgent, or waiting for feedback from someone, or for other reasons. As for total hours, my work logs suggest something like >800 hours on this report (by far my largest project thus far), though that includes lots of “getting up to speed” background reading on the various fields with which the report engages.

You touched on this a lot in your report but I don't think you went all the way unless I missed something -- what do you think a computer program would need to have in order to exhibit "minimally viable consciousness" in your view?

I'm not sure. My best guess would be something in the direction of the GDAK-inspired cognitive architecture discussed in section 6.2.4, but I don't yet have a clear picture of all the cognitive features that would need to be working together in a certain way for me to be even moderately confident the program is "conscious" in the sense defined in the report. I'm pretty sure none of the programs I described in the report are conscious (in the sense defined in the report).

You mention that a further project might be to attempt to make the case that chimpanzees aren’t conscious, and that Gazami crabs are, each to confirm your suspicion that you could in fact make a plausible case for each. Could you outline what such cases might look like (knowing that you can’t provide the output of an investigation you haven’t performed)? What evidence would you be looking into that isn’t already in this report (e.g. would it mainly be information as to how their cognition in particular is similar to / differs from human cognition)?

For others' benefit, what I said in the report was:

I think I can make a weakly plausible case for (e.g.) Gazami crab consciousness, and I think I can make a weakly plausible case for chimpanzee non-consciousness.

By "weakly plausible" I meant that I think I can argue for a ~10% chance of Gazami crab consciousness, and separately for a ~10% chance of chimpanzee non-consciousness.

Such arguments would draw from considerations that are mentioned at least briefly somewhere in the report, but it would bundle them together in a certain way and elaborate certain points.

My argument for ~10% chance of chimpanzee non-consciousness would look something like an updated version of Macphail (1998), plus many of the considerations from Dennett (2017). Or, to elaborate that a bit: given the current state of evidence on animal cognition and behavior, and given what is achievable using relatively simple deep learning architectures (including deep reinforcement learning), it seems plausible (though far from guaranteed) that the vast majority of animal behaviors, including fairly sophisticated ones, are the product of fairly simple (unconscious) learning algorithms operating in environments with particular reward and punishment gradients, plus various biases in the learning algorithms "organized in advance of experience" via evolution. Furthermore, it seems plausible (though not likely, I would say) that phenomenal consciousness depends on a relatively sophisticated suite of reasoning and self-modeling capacities that humans possess and chimpanzees do not (and which may also explain why chimpanzees can't seem to learn human-like syntactically advanced language). I am pretty confident this conjunction of hypotheses isn't true, but I think something like this is "weakly plausible." There are other stories by which it could turn out that chimpanzees aren't conscious, but the story outlined above is (very loosely speaking) the "story" I find most plausible (among stories by which chimpanzees might not be conscious).

My case for a ~10% chance of Gazami crab consciousness would involve pulling together a variety of weak considerations in favor of the "weak plausibility" of Gazami crab consciousness. For example: (1) given the considerations from Appendix H, perhaps phenomenal consciousness can be realized by fairly simple cognitive algorithms, (2) even assuming fairly "sophisticated" cognition is required for consciousness (e.g. a certain kind of self-model), perhaps 100,000 neurons are sufficient for that, and (3) perhaps I'm confused about something fairly fundamental, and I should be deferring some probability mass to the apparently large number of consciousness scholars who are physicalist functionalists and yet think it's quite plausible that arthropods are conscious.

Cross-posted here from a comment on the announcement post, a question from Evan_Gaensbauer:

Do you think science or philosophy can meaningfully separate the capacity to experience suffering or pain from however else consciousness is posited to be distributed across species? What would be fruitful avenues of research for effective altruists to pursue if it's possible to solve the problem in the first question, without necessarily addressing whatever remains of consciousness?

(To preserve structure, I'll reply in a comment reply.)

I'll use the terms "nociception" and "pain" as defined in Appendix D:

nociception is the encoding and processing of noxious stimuli, where a noxious stimulus is an actually or potentially body-damaging event (either external or internal, e.g. cutaneous or visceral)… Pain, in contrast to mere nociception, is an unpleasant conscious experience associated with actual or potential body damage (or akin to unpleasant experiences associated with noxious stimuli).

…Nociception can occur without pain, and pain can occur without nociception. Loeser & Treede (2008) provide examples: “after local anesthesia of the mandibular nerve for dental procedures, there is peripheral nociception without pain, whereas in a patient with thalamic pain [a kind of neuropathic pain resulting from stroke], there is pain without peripheral nociception.”

Often, it's assumed that if animals are conscious, then it's likely that they experience pain in conjunction with the types of nociceptive processing that, in humans, would be accompanied by conscious pain — or at least, this seems likely for the animals that are fairly similar to us in their neural architecture (e.g. mammals, or perhaps all vertebrates). And likewise, it seems that if we limit the nociception that occurs in their bodies, then this will also limit the conscious pain they experience if they are conscious at all.

Unfortunately, humans also experience pain without nociception, e.g. (as my report says) "neuropathic pain, and perhaps also… some cases of psychologically-created experiences of pain, e.g. when a subject is hallucinating or dreaming a painful experience." Assuming whichever animals are conscious are also capable of non-nociceptive pain, this means that conscious animals may be capable of suffering in ways that are difficult for us to detect.

Even still, I think we can study the mechanisms of nociception (and other triggers for conscious pain) independently of studying consciousness, and this could lead to interventions that (probably) address the usual causes of conscious pain even if we don't know whether a given animal is conscious or not.

However, I'm not sure this is quite what you meant to be asking; let me know if there was a somewhat different question you were hoping I'd answer.

Have you given any thought to how animal and human interests might be commensurated? Even with very good evidence for animal consciousness, it's much more difficult to devise a way of weighing their moral interests.

It's a very difficult question. I wrote up some initial speculations on how I might address the "moral weight" question in Appendix Z7. Each of those "candidate dimensions of moral concern" can also be studied on their own, often in pretty similar ways to how "consciousness in general" can be studied.

I was confused by the issue regarding diet qualia. Does the argument reduce to answering this question: “Is it the case that explaining away all the individual properties of conscious experience could ever add up to a completed explanation-away of consciousness?” (In my understanding, the weak illusionists say that it wouldn’t, the strong illusionists say that it would, and the not-illusionists say that this process can’t even get started.)

I'm not sure whether the thing you're trying to say is compatible with what I'd say or not. The way I'd say it is this:

The 'weak illusionist' says that while many features of conscious experience can be 'explained away' as illusions, the 'something it's like'-ness of conscious experience is not (and perhaps cannot be) an illusion, and thus must be "explained" rather than "explained away." In contrast, the "strong illusionist" says that even the 'something it's like'-ness of conscious experience is an illusion.

But what might it be for the 'something it's like'-ness to be an illusion? Basically, that it seems to us that there is more to conscious experience than 'zero qualia', but in fact there are only zero qualia. E.g. it seems to us that there are 'diet qualia' that are more than 'zero qualia', but in fact 'diet qualia' have no distinctive features beyond 'zero qualia.'

Now in fact, I think there probably is more to 'zero qualia' than Frankish's "properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective," but I don't think those additional properties will be difficult for the strong illusionist to adopt, and I don't think they'll vindicate a position according to which there is a distinctive 'diet qualia' option. Speaking of diet qualia vs. zero qualia is very rough: the true form of the answer will be classes of cognitive algorithms (on my view).

You mentioned many avenues future research could take, but do you have any early sense of prioritization for those research questions?

As mentioned in another comment, I'm perhaps most excited about the potential informativeness of (1) computational modeling of the sort I describe in section 6.2.4, (2) certain kinds of studies of human consciousness (Ctrl+F in the report for "perhaps the most promising path forward"), and (3) improvements to tools and techniques of human neuroscience that could help with (2) (Ctrl+F in the report for "we need fundamental breakthroughs").

But if I spent ~10 hours on each of the suggested research ideas from section 5, I'd probably have a better sense of each suggestion's cost and likely benefits, and my intuitions about prioritization would probably change.

I think that functionalism is incorrect and that we are super-confused about this issue.

Specifically, there is merit to the "Explanatory gap" argument. See https://en.m.wikipedia.org/wiki/Explanatory_gap

I also sort of think I know what the missing thing is. It's that the input is connected to the algorithm that constitutes you.

If this is true, there is no objective fact-of-the-matter about which entities are conscious (in the sense of having qualia). From my point of view only I am conscious. From your point of view, only you are. Neither of us are wrong.

What's the explanatory gap argument?

Why is the methodology of the report and Open Phil's recommendations based on your personal preferences about which animals matter, instead of direct inquiry into moral status?

The report aims to be a "direct inquiry into moral status," but because it does so from an anti-realist perspective, a certain notion of idealized preferences comes into play. In other words: if you don't think "objective values" are "written into the fabric of the universe," then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on. I won't make the case for this meta-ethical approach here, but I link some relevant sources in the report, in particular in footnote 239.

This is one reason I say at the top of the report that:

This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my "idealized" moral intuitions would be.

The report aims to be a "direct inquiry into moral status," but because it does so from an anti-realist perspective,

Why? It's not more widely accepted than realism, and arguably it's decision-irrelevant regardless of its plausibility as per Ross's deflationary argument (http://www-bcf.usc.edu/~jacobmro/ppr/deflation-ross.pdf).

a certain notion of idealized preferences comes into play. In other words: if you don't think "objective values" are "written into the fabric of the universe," then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on.

But there are plenty of accounts of anti-realist ethics, and they don't all make everything reducible to preferences or provide this account of what we ought to value. I still don't see what makes this view more noteworthy than the others and why Open Phil is not as interested in either a general overview of what many views would say or a purely empirical inquiry from which diverse normative conclusions can be easily drawn.

And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my "idealized" moral intuitions would be.

I don't see what reason they have to take them into account at all, unless they accept ideal advisor theory and are using your preferences as a heuristic for what their preferences would be if they did the same thing that you are doing. Is ideal advisor theory their view now? And is it also the case for other moral topics besides animal ethics?

Also: can we expect a reduced, empirical-only version of the report to be released at some point?

(Just FYI, I'm drafting a reply to this, but it might be a while before it's ready to post.)

These are reasonable questions, and I won't be able to satisfactorily address them in a short comment reply. Nevertheless, I'll try to give you a bit more of a sense of "where I'm coming from" on the topic of meta-ethics.

As I say in the report,

I suspect my metaethical approach and my moral judgments overlap substantially with those of at least some other Open Philanthropy Project staff members, and also with those of many likely readers, but I also assume there will be a great deal of non-overlap with my colleagues at the Open Philanthropy Project and especially with other readers. My only means for dealing with that fact is to explain as clearly as I can which judgments I am making and why, so that others can consider what the findings of this report might imply given their own metaethical approach and their own moral judgments.

We don't plan to release an "empirical-only" version of the report, but I think those with different meta-ethical views will be able to read the empirical sections of the report — e.g. most of section 3, appendices C-E, and some other sections — and think for themselves about what those empirical data imply given their own meta-ethical views.

However, your primary question seems to be about why Open Phil is willing to make decisions that are premised on particular views about meta-ethics that we find plausible (e.g. ideal advisor theory) rather than a broader survey of (expert?) views about meta-ethics. I'll make a few comments about this.

First: the current distribution of expert opinion is a significant input to our thinking, but we don't simply defer to it. This is true with respect to most topics that intersect with our work, not just meta-ethics. Instead, we do our best to investigate decision-relevant topics deeply enough ourselves that we develop our own opinions about them. Or, as we said in our blog post on Hits-based giving (re-formatted slightly):

We don’t defer to expert opinion or conventional wisdom, though we do seek to be informed about them… following expert opinion and conventional wisdom is likely to cut against our goal of seeking neglected causes… We do think it would be a bad sign if no experts… agreed with our take on a topic, but when there is disagreement between experts, we need to be willing to side with particular ones. In my view, it’s often possible to do this productively by learning enough about the key issues to determine which arguments best fit our values and basic epistemology.

Second: one consequence of investigating topics deeply enough to form our own opinions about them, rather than simply deferring to what seems to be the leading expert opinion on the topic (if any exists), is that (quoting again from "Hits-based giving") "we don't expect to be able to fully justify ourselves in writing." That is why, throughout my report, I repeat that my report does not really "argue" for the assumptions I make and the tentative conclusions I come to. But I did make an effort to refer the reader to related readings and give some sense of "where I'm coming from."

Third: even if I spent an entire year writing up my best case for (e.g.) ideal advisor theory, I don't think it would convince you, and I don't think it would be thoroughly convincing to myself, either. We can't solve moral philosophy. All we can do is take pragmatic steps of acceptable cost to reduce our uncertainty as we aim to (as I say in the report) "execute our mission to 'accomplish as much good as possible with our giving' without waiting to first resolve all major debates in moral philosophy."

In the end, anyone who is trying to do as much "good" as possible — or even just "more good than bad" — must either (1) wrestle with the sorts of difficult issues we're wrestling with (or the similarly unsolved problems of some other moral framework), and come to some "best guesses for now," or (2) implicitly make assumptions about ~all those same fraught issues anyway, but without trying to examine and question them. (At least, this is true so long as "good" isn't just defined with respect to domain-narrow, funder-decided "goods" like "better scores by American children on standardized tests.")

We don't think it's possible for Open Phil or any other charitable project to definitively answer such questions, but we do prefer to act on questioned/examined assumptions rather than on largely unexamined assumptions. Hence our reports on difficult philosophical questions summarize what we did to examine these questions and what our best-guess conclusions are for the moment, but those reports do not convincingly argue for any solid "answers" to these questions. (Besides the moral patienthood report, see also e.g. here and here.)

Of course, you might think the above points are reasonable, but still want to know more about why I find ideal advisor theory particularly compelling among meta-ethical views. I can't think of anything especially brief to say, other than "that's the family of views I find most plausible after having read, thought, and argued about meta-ethics for several years." I haven't personally written a defense of ideal advisor theory, and I'm not aware of a published defense of ideal advisor theory that I would thoroughly endorse. If you're curious to learn more about my particular views, perhaps the best I can do is point you to Pluralistic moral reductionism, Mixed Reference: The Great Reductionist Project, ch. 9 of Miller (2013), and Extrapolated volition (normative moral theory).

Another question you seem to be asking is why Open Phil chose to produce a report with this framing first, as opposed to "a general overview of what many [meta-ethical] views would say." I think this is because ideal advisor theory is especially popular among the people at Open Phil who engage most deeply with the details of our philosophical framework for giving. As far as I know, all these people (myself included) have substantial uncertainty over meta-ethical views and normative moral theories (see footnote 12 on normative uncertainty), but (as far as I know) we put unusually high "weight" on ideal advisor theories — either as a final "theory" of normative morality, or as a very important input to our moral thinking. Because of this, it seemed likely to be more informative (to our decision-making about grants) per unit effort to conduct an investigation that was a mix of empirical data (not premised on any meta-ethical theory) and moral philosophy (premised on some kind of ideal advisor theory), rather than to produce a more neutral survey of the implications of a large variety of meta-ethical theories, most of which we (the people at Open Phil who engage most deeply with the details of our philosophical framework for giving) have considered before and decided to give little or no weight to (again, as far as I know).

One more comment on ideal advisor theory: What I mean by ideal advisor theory might be less narrow than what you're thinking of. For example, on my meaning, ideal advisor theory could (for all I know) result in reflective equilibria as diverse as contractarianism, deontological ethics, hedonic utilitarianism, egoism, or a thorough-going nihilism, among other views.

That said, as I say in the report, I don't think my tentative moral judgments in the report depend on my meta-ethical views, and the empirical data I present don't depend on them either. Also, we continue to question and examine the assumptions behind our current philosophical framework for giving, and I expect that framework to evolve over time as we do so.

A final clarification: another reason I discuss my meta-ethical views so much (albeit mostly in the appendices) is that I suspect one's ethical views unavoidably infect one's way of discussing the relevant empirical data, and so I chose to explain my ethical views in part so that people can interpret my presentation of the empirical data while having some sense of what biases I may bring to that discussion as a result of my ethical views.

We can't solve moral philosophy. All we can do is take pragmatic steps of acceptable cost to reduce our uncertainty as we aim to (as I say in the report) "execute our mission to 'accomplish as much good as possible with our giving' without waiting to first resolve all major debates in moral philosophy."

Yeah, but what you're doing is antithetical to that. You're basically assuming that you have solved a major debate in philosophy and not paying attention to uncertainty. At the very least, we should know more clearly if Open Phil is going to be an Ideal Advisor Theory grantmaking organization from now on. Meta-ethics is what you use to figure out how to decide what your mission should be in the first place. Introducing it at this stage and in this manner is kind of weird.

Another question you seem to be asking is why Open Phil chose to produce a report with this framing first, as opposed to "a general overview of what many [meta-ethical] views would say." I think this is because ideal advisor theory is especially popular among the people at Open Phil who engage most deeply with the details of our philosophical framework for giving. As far as I know, all these people (myself included) have substantial uncertainty over meta-ethical views and normative moral theories (see footnote 12 on normative uncertainty), but (as far as I know) we put unusually high "weight" on ideal advisor theories — either as a final "theory" of normative morality, or as a very important input to our moral thinking.

To be quite honest, it is hard to believe that a significant portion of the assorted staff at Open Phil independently reviewed philosophical arguments and independently arrived at the same relatively niche meta-ethical view. It sounds a lot more like an information cascade.

One more comment on ideal advisor theory: What I mean by ideal advisor theory might be less narrow than what you're thinking of. For example, on my meaning, ideal advisor theory could (for all I know) result in reflective equilibria as diverse as contractarianism, deontological ethics, hedonic utilitarianism, egoism, or a thorough-going nihilism, among other views.

But that just makes the whole methodology even more confusing since you are talking about meta-ethics and empirical issues at the same time, while not talking about the normative issues in the middle, and then coming to normative conclusions. If you really use ideal advisor theory as a meta-ethical approach then you should use it to determine a model of normative ethics, and then match that with science on consciousness. Two people with the same meta-ethical views could have very different normative views but you're not explicating this possibility. At the same time, you might have the same normative views as someone else but there is no way to tell since you're only talking about meta-ethics.

A final clarification: another reason I discuss my meta-ethical views so much (albeit mostly in the appendices) is that I suspect one's ethical views unavoidably infect one's way of discussing the relevant empirical data, and so I chose to explain my ethical views in part so that people can interpret my presentation of the empirical data while having some sense of what biases I may bring to that discussion as a result of my ethical views.

Ethical views might, but it's not clear how meta-ethical views would.

What probability would you assign to a China brain being conscious?

I think of this as more of a definitional issue than a probability issue. Given functionalism about consciousness, if the population of China managed to implement the necessary function of consciousness, then this system would be conscious. The problem is that, on my view, the function of consciousness is likely so complicated and specific that for the population of China to implement it, the "population of China" would be basically unrecognizable as "the population of China." Hence, I think the Chinese nation thought experiment is a misleading intuition pump.

Are you aware of any "hidden" (nociception-related?) cognitive processes that could be described as "two systems in conflict?" I find the hidden qualia view very plausible, but I also find it plausible that I might settle on a view on moral relevance where what matters about pain is not the "raw feel" (or "intrinsic undesirability" in Drescher's words), but a kind of secondary layer of "judgment" in the sense of "wanting things to change/be different" or "not accepting some mental component/input." I'm wondering whether most of the processes that would constitute hidden qualia are too simple to fit this phenomenological description or not...

I probably have thoughts on this, but first: Can you say more about what would count as "two systems in conflict"? E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way? Also, is the "secondary layer" you're talking about also meant to be "hidden", or are you talking about a "phenomenally conscious" second layer?

I was thinking about a secondary layer that is hidden as well.

E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way?

Hard to say. On Brian's perspective with similarities in multi-dimensional concept space, the competition among neural signals may already qualify to an interesting degree. But let's say we are interested in something slightly more sophisticated, but not so sophisticated that we're inclined to look at it as "not hidden." (Maybe it would qualify if the hidden nociceptive signals alter subconscious dispositions in interesting ways, though it depends on what that would look like and how it compares to what is going on introspectively with suffering that we have conscious access to.)

Got it, thanks for clarifying. Off the top of my head, I can't think of any unconscious or at least "hidden" processing that is known to work in the relatively sophisticated manner you describe, but I might have read about such cases and am simply not remembering them at the moment. Certainly an expert on unconscious/hidden cognitive processing might be able to name some fairly well-characterized examples, and in general I find it quite plausible that such cognitive processes occur in (e.g.) the human brain (and thus potentially in the brains of other animals). Possibly the apparent cognitive operations undertaken by the non-verbal hemisphere in split-brain patients would qualify, though they seem especially likely to qualify as "conscious" under the Schwitzgebel-inspired definition even if they are not accessible to the hemisphere that can make verbal reports.

Anyway, the sort of thing you describe is one reason why, in section 4.2, my probabilities for "consciousness of a sort I intuitively morally care about" are generally higher than my probabilities for "consciousness as loosely defined by example above." Currently, I don't think I'd morally care about such cognitive processes so long as they were "unconscious" (as loosely defined by example in my report), but I think it's at least weakly plausible that if I was able to carry out my idealized process for making moral judgments, I would conclude that I care about some such unconscious processes. I don't use Brian's approach of "mere" similarities in a multi-dimensional concept space, but regardless I could still imagine myself morally caring about certain types of unconscious processes similar to those you describe, even if I don't care about some other unconscious processes that may be even more similar (in Brian's concept space) to the processes that do instantiate "conscious experience" (as loosely defined by example in my report). (I'd currently bet against making such moral judgments, but not super-confidently.)
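For readers unfamiliar with the "concept space" framing, here is a minimal toy sketch of what a graded similarity judgment might look like in code. The feature names, weights, and numbers are invented purely for illustration; they come neither from my report nor from Brian's writing, and nothing in the sketch should be read as a claim about which features actually matter.

```python
import numpy as np

# Purely illustrative sketch of the "multi-dimensional concept space" framing:
# score how similar a cognitive process is to a prototype of conscious experience.
# The feature names and all numbers below are invented placeholders.

FEATURES = ["global_broadcast", "self_model", "valenced_reaction", "reportability"]

def similarity(process_vec, prototype_vec):
    """Cosine similarity between a process's feature vector and the prototype."""
    a = np.asarray(process_vec, dtype=float)
    b = np.asarray(prototype_vec, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

prototype = [1.0, 1.0, 1.0, 1.0]           # idealized "conscious experience"
hidden_nociception = [0.2, 0.0, 0.6, 0.0]  # a hypothetical hidden process

# The output is a graded score, not a yes/no verdict on consciousness.
print(dict(zip(FEATURES, hidden_nociception)))
print(round(similarity(hidden_nociception, prototype), 2))
```

The point of the sketch is only that, on this kind of view, "how much do I care about this process?" becomes a question about graded resemblance rather than about crossing a sharp line.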

Did you always find illusionism plausible, or was there a moment when it "clicked," or just a gradual progression? Do you think reading more about neuroscience makes people more sympathetic to it?

Do you think the p-zombie thought experiment can be helpful to explain the difference between illusionism and realism ("classic qualia" mapping onto the position "p-zombies are conceivable"), or do you find that it is unfair or often leads discussions astray?

I think I always found Dennett's general approach quite plausible (albeit a bit too "hasty" in its proposed reductions; see footnote 222), though I hadn't read other illusionist accounts prior to beginning this investigation. For me, reading Frankish made a bigger difference to my confidence in illusionism than any particular neuroscience papers or books.

Personally, I find discussions of p-zombies somewhat unhelpful, but for those steeped in that literature, it might be a useful set of concepts for explaining illusionism. My first recommendation would still be Frankish's papers, though.

You mentioned in the report that moral patienthood could correspond with things other than consciousness. Could you briefly summarize (no need for citations) some philosophical views on what else may be necessary or sufficient for moral patienthood?

Section 2.2.2 and its many footnotes survey the suggested options briefly but fairly thoroughly, I think. My sense is that even theories of moral patienthood (or "moral status", or "full moral status", or other related but not always identical concepts) which don't emphasize phenomenal consciousness as a necessary condition do, upon closer inspection, include phenomenal consciousness as a necessary condition. For example, in one case a theory (I can't remember which one now) emphasized "self-awareness" without mentioning consciousness, but upon reading further, it turned out that its notion of "self-awareness" required phenomenal consciousness, and that the authors would not count a system that was "self-aware" without being phenomenally conscious as a moral patient.

That said, it is common for theorists to add additional requirements beyond phenomenal consciousness, especially "valenced experience" / "sentience", but also many other properties such as self-awareness, desires, and the other things listed in the bullet list of section 2.2.2. For elaborations, see the sources cited in the footnotes in that bullet list.

Of course, one might also have a meta-ethical approach that doesn't make use of ideas of "moral patienthood" at all. For example, see the sources cited in the footnote after the phrase "As with many framing choices in this report, this is far from the only way to approach the question…"

You reported that you found investigations of consciousness from the perspective of how it evolved to be wanting. Do you think there might be relatively high potential upside in encouraging more and better theoretical work and discussion of this sort? Based on what you’ve seen in other fields, what might that depend on?

Do you suspect things in that space could be usefully incorporated into a model of progress towards a theory resembling your six-step, GDAK-inspired cognitive architecture approach?

In spite of concerns about just-so stories (and I think anthropic bias may apply here?), might even partial progress on this kind of work be relatively likely to reduce uncertainty in the probability distributions over question-of-suffering scenarios for various animals?

I'm not sure I'd say the literature on the evolution of consciousness is especially "wanting" — it's just a really hard problem, and so as with theories of how consciousness works, I didn't find any theories of how consciousness evolved that were even moderately persuasive.

In our current state of uncertainty, little bits of "partial progress" can be made from many angles, and the evolution of consciousness is one of those angles. I don't think I'd highlight it as especially promising, though (in terms of reduction of uncertainty per unit effort). On the present margin, I'm probably more excited about the potential informativeness of (1) computational modeling of the sort I describe in section 6.2.4, (2) certain kinds of studies of human consciousness (Ctrl+F in the report for "perhaps the most promising path forward"), and (3) improvements to tools and techniques of human neuroscience that could help with (2) (Ctrl+F in the report for "we need fundamental breakthroughs").

The "GDAK-inspired cognitive architecture approach" tackles the problem from a different angle. It asks: "What are all the consciousness explananda we can identify and describe in some detail, and which cognitive algorithms might we combine with each other to build a computer program that would exhibit as many of those explananda as possible (including "internally," not just in its "external" behavior), with as much precision as possible?" My current hunch is that further theorizing about the evolution of consciousness wouldn't contribute to that project in an especially direct way, though it might help guide the search process for consciousness explananda, or contribute in some other somewhat indirect way.

It seems to me (based only on looking through your report and having read one or two books in the field) that many of the better theories of consciousness (e.g. multiple drafts) were formed by philosophers through the following process:

  • Introspect and notice a phenomenon occurring in their conscious experience that they don’t believe to have any known explanation

  • Propose a cognitive mechanism to explain it

  • Call this their explanation of consciousness
Firstly, does this seem like an accurate characterisation of how some of the stronger consciousness theories have been produced?

Secondly, do I correctly understand your hypothetical ‘agenda for producing a theory of consciousness’ (from Appendix B) to be iterating the first two steps of this process, with the idea that in the limit it should account for all the explananda of consciousness (whilst significantly improving the process by (a) writing a program that fits the theory, (b) using said program to make predictions, and, instead of largely introspecting yourselves, (c) gathering the mass introspections of many people)?

I think the strategy you outline is, very roughly, one of the most common strategies for coming up with cognitive theories of consciousness, or at least for coming up with cognitive theories of particular features of consciousness.

However, upon reading a given paper or book of this sort, I'm often left unsure whether the author thinks they've "explained consciousness," or whether they think what they've done so far is more like "gesturing in the right direction." Indeed, Dennett called his 1991 book Consciousness Explained, but even as late as 2011 was still saying that some of his favorite theories are "merely the beginning, rather than the end, of the study of consciousness" (Cohen & Dennett 2011).

And of course, many theories of consciousness — including some of the most popular ones — don't appeal to cognitive mechanisms at all.

As for your last paragraph: I'm not sure what you mean by "using said program to make predictions, and, instead of largely introspecting yourselves (c) gathering the mass introspections of many people)?" Could you elaborate on what you're asking, there?

What outputs/deliverables do you think you’d get from your hypothetical ‘consciousness’ agenda (from Appendix B), and what resources (time/staff/money) do you think would be required to achieve them? For example, might you (ambitiously) think that this agenda would be able to move the field of consciousness studies into an agreed paradigm (a la your reference to Kuhn)?

As I mention at one point in the report, “We have begun to collaborate with a programmer on such a project.” That project is only a quick experiment, but once the experiment has hit certain milestones, I’ll have a better guess about which deliverables might be achievable with what resources. We hope to write about this experiment in the future, at least briefly.

As for catalyzing a paradigm shift in consciousness studies: I’m not sure about that, but I think we could make a lot of progress if we could find 5 great researchers to fund in the paradigm from Appendix B. However, that would require a substantial investment of staff time, might turn out not to be tractable, and might also compete for scarce talent with other projects that are even more urgent and important (in Open Phil’s estimation, anyway). We're still thinking about whether we want to take a shot at this, though.