Comment author: Benito 28 June 2017 04:00:41PM 2 points

You mention that a further project might be to attempt to make the case that chimpanzees aren’t conscious, and that Gazami crabs are, to confirm your suspicion that you could in fact make a plausible case for each. Could you outline what such cases might look like (knowing that you can’t provide the output of an investigation you haven’t performed)? What evidence would you be looking into that isn’t already in this report (e.g. would it mainly be information about how their cognition in particular is similar to or differs from human cognition)?

Comment author: lukeprog 28 June 2017 10:02:10PM 2 points

For others' benefit, what I said in the report was:

I think I can make a weakly plausible case for (e.g.) Gazami crab consciousness, and I think I can make a weakly plausible case for chimpanzee non-consciousness.

By "weakly plausible" I meant that I think I can argue for a ~10% chance of Gazami crab consciousness, and separately for a ~10% chance of chimpanzee non-consciousness.

Such arguments would draw from considerations that are mentioned at least briefly somewhere in the report, but they would bundle those considerations together in a certain way and elaborate certain points.

My argument for ~10% chance of chimpanzee non-consciousness would look something like an updated version of Macphail (1998), plus many of the considerations from Dennett (2017). Or, to elaborate that a bit: given the current state of evidence on animal cognition and behavior, and given what is achievable using relatively simple deep learning architectures (including deep reinforcement learning), it seems plausible (though far from guaranteed) that the vast majority of animal behaviors, including fairly sophisticated ones, are the product of fairly simple (unconscious) learning algorithms operating in environments with particular reward and punishment gradients, plus various biases in the learning algorithms "organized in advance of experience" via evolution. Furthermore, it seems plausible (though not likely, I would say) that phenomenal consciousness depends on a relatively sophisticated suite of reasoning and self-modeling capacities that humans possess and chimpanzees do not (and which may also explain why chimpanzees can't seem to learn human-like syntactically advanced language). I am pretty confident this conjunction of hypotheses isn't true, but I think something like this is "weakly plausible." There are other stories by which it could turn out that chimpanzees aren't conscious, but the story outlined above is (very loosely speaking) the "story" I find most plausible (among stories by which chimpanzees might not be conscious).
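
For readers less familiar with what is meant by a "fairly simple learning algorithm operating in an environment with particular reward and punishment gradients," here is a minimal, purely illustrative sketch (not something from the report, and far simpler than the deep RL systems alluded to above): a tabular Q-learning agent that learns goal-directed movement toward a reward on a toy one-dimensional world. The environment, reward values, and hyperparameters are all arbitrary assumptions chosen for illustration.

```python
# Illustrative sketch only: a very simple (tabular Q-learning) agent learning
# from a reward gradient. All environment details and constants are assumptions.
import random

N_STATES = 10          # positions 0..9 on a line; the reward sits at position 9
ACTIONS = [-1, +1]     # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the line; +1 reward at the far end, small cost otherwise."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else -0.01
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # standard Q-learning update toward the bootstrapped target
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy reliably moves toward the reward --
# superficially "goal-directed" behavior produced by a very simple update rule.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```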

My case for a ~10% chance of Gazami crab consciousness would involve pulling together a variety of weak considerations in favor of the "weak plausibility" of Gazami crab consciousness. For example: (1) given the considerations from Appendix H, perhaps phenomenal consciousness can be realized by fairly simple cognitive algorithms, (2) even assuming fairly "sophisticated" cognition is required for consciousness (e.g. a certain kind of self-model), perhaps 100,000 neurons are sufficient for that, and (3) perhaps I'm confused about something fairly fundamental, and I should be deferring some probability mass to the apparently large number of consciousness scholars who are physicalist functionalists and yet think it's quite plausible that arthropods are conscious.

Comment author: kbog 28 June 2017 07:56:53PM 1 point

Why is the methodology of the report and Open Phil's recommendations based on your personal preferences about which animals matter, instead of direct inquiry into moral status?

Comment author: lukeprog 28 June 2017 08:44:40PM 8 points

The report aims to be a "direct inquiry into moral status," but because it does so from an anti-realist perspective, a certain notion of idealized preferences comes into play. In other words: if you don't think "objective values" are "written into the fabric of the universe," then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on. I won't make the case for this meta-ethical approach here, but I link some relevant sources in the report, in particular in footnote 239.

This is one reason I say at the top of the report that:

This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my "idealized" moral intuitions would be.

Comment author: Lukas_Gloor 28 June 2017 05:14:25PM 0 points

Did you always find illusionism plausible, or was there a moment where it “clicked,” or was it a gradual progression? Do you think reading more about neuroscience makes people more sympathetic to it?

Do you think the p-zombie thought experiment can be helpful for explaining the difference between illusionism and realism (“classic qualia” mapping onto the position that “p-zombies are conceivable”), or do you find that it is unfair or often leads discussions astray?

Comment author: lukeprog 28 June 2017 08:26:41PM 1 point

I think I always found Dennett's general approach quite plausible (albeit a bit too "hasty" in its proposed reductions; see footnote 222), though I hadn't read other illusionist accounts prior to beginning this investigation. For me, reading Frankish made a bigger difference to my confidence in illusionism than any particular neuroscience papers or books.

Personally, I find discussions of p-zombies somewhat unhelpful, but for those steeped in that literature, it might be a useful set of concepts for explaining illusionism. My first recommendation would still be Frankish's papers, though.

Comment author: Lukas_Gloor 28 June 2017 08:01:30PM 0 points

I was thinking about a secondary layer that is hidden as well.

E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way?

Hard to say. On Brian's perspective of similarities in multi-dimensional concept space, the competition among neural signals may already qualify to an interesting degree. But let's say we are interested in something slightly more sophisticated, yet not so sophisticated that we're inclined to regard it as "not hidden." (Maybe it would qualify if the hidden nociceptive signals alter subconscious dispositions in interesting ways, though it depends on what that would look like and how it compares to what is going on introspectively with suffering that we have conscious access to.)

Comment author: lukeprog 28 June 2017 08:22:02PM 1 point

Got it, thanks for clarifying. Off the top of my head, I can't think of any unconscious or at least "hidden" processing that is known to work in the relatively sophisticated manner you describe, but I might have read about such cases and simply not be remembering them at the moment. Certainly an expert on unconscious/hidden cognitive processing might be able to name some fairly well-characterized examples, and in general I find it quite plausible that such cognitive processes occur in (e.g.) the human brain (and thus potentially in the brains of other animals). Possibly the apparent cognitive operations undertaken by the non-verbal hemisphere in split-brain patients would qualify, though they seem especially likely to qualify as "conscious" under the Schwitzgebel-inspired definition even if they are not accessible to the hemisphere that can make verbal reports.

Anyway, the sort of thing you describe is one reason why, in section 4.2, my probabilities for "consciousness of a sort I intuitively morally care about" are generally higher than my probabilities for "consciousness as loosely defined by example above." Currently, I don't think I'd morally care about such cognitive processes so long as they were "unconscious" (as loosely defined by example in my report), but I think it's at least weakly plausible that if I was able to carry out my idealized process for making moral judgments, I would conclude that I care about some such unconscious processes. I don't use Brian's approach of "mere" similarities in a multi-dimensional concept space, but regardless I could still imagine myself morally caring about certain types of unconscious processes similar to those you describe, even if I don't care about some other unconscious processes that may be even more similar (in Brian's concept space) to the processes that do instantiate "conscious experience" (as loosely defined by example in my report). (I'd currently bet against making such moral judgments, but not super-confidently.)

Comment author: kierangreig 28 June 2017 03:55:57PM 8 points

(1) To what degree did your beliefs about the consciousness of insects (if insects are too broad a category please just focus on the common fruit fly) change from completing this report and what were the main reasons for those beliefs changing? I would be particularly interested in an answer that covers the following three points: (i) the rough probability that you previously assigned to them being conscious, (ii) the rough probability that you now assign to them being conscious and (iii) the main reasons for the change in that probability.

(2) Do you assign a 0% probability to electrons being conscious?

(3) In section 5.1 you write

I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from which I’ve gotten extensive feedback is David Chalmers.)

David Chalmers seems like an interesting choice for the one long-time “consciousness expert” to receive extensive feedback from. Why was he the only one that you got extensive feedback from? And of the other consciousness experts that you would like to receive extensive feedback from, do you think that most of them would disagree with some part of the report in a similar way, and if you think they would, what would that disagreement or those disagreements be?

(4) A while ago Carl Shulman put out this document detailing research advice. Can you please do the same, or if you already have a document like this can you please point me to it? I would probably find it useful and I would guess some others would too.

Comment author: lukeprog 28 June 2017 07:54:41PM 5 points

Re: (1), I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think those ideas are relatively low-probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Re: (2). This question raises issues related to Pascal's Mugging. I don't pretend to have a solution, but some especially relevant discussions are Pascal's Muggle: Infinitesimal Priors and Strong Evidence, Why we can’t take expected value estimates literally (even when they’re unbiased), Modeling Extreme Model Uncertainty, and Sequence thinking vs. cluster thinking. As mentioned here, the Open Philanthropy Project remains eager to get more clarity on how we should think about making decisions under different kinds of uncertainty, and we expect to write more about this issue in the future.

In the meantime, the possibility of electron consciousness does not currently inform my actions.

Re: (3). You're right that Chalmers is "an interesting choice" for me to get feedback from. For example, Chalmers is a property dualist and panpsychist, which in some ways is about as different a position on consciousness from mine as you could find. On the other hand, Chalmers is (in my opinion) an unusually sharp thinker about consciousness, and also he has a deserved reputation for his ability to give critical feedback to others from the perspective of their own view. And, as hoped, Chalmers' feedback on an earlier draft of my report was very helpful, and I invested dozens of hours improving the draft in response to his feedback alone.

There were a few other consciousness researchers (whom I won't name) that I had hoped to get feedback from, but they weren't interested in giving it. That's not surprising, since my report is so different from the type of work that consciousness researchers typically engage with.

My report makes so many claims (or at least, "guesses") that I have no doubt that if other consciousness experts gave extensive feedback on it, they would find plenty with which they disagree. In some cases, I know from their writing some specific things they would disagree with. But in many cases, I'm not sure where they would disagree, both because I haven't read all their works on consciousness, and because most consciousness experts have only written about a tiny portion of the issues covered (at least briefly) in my report.

Re: (4). This question is beyond the scope of the intended purpose of this AMA, but I'll make a couple brief comments. It would take a lot of work for me to write a similar document that usefully complements Carl's, but I may do so one day. An old post of mine on this general topic is Scholarship: How to Do It Efficiently, but it's pretty narrow in scope.

Comment author: Peter_Hurford 28 June 2017 04:35:38PM 4 points

How relevant is "the mirror test" in thinking about consciousness?

Comment author: lukeprog 28 June 2017 07:01:26PM 1 point

Among the PCIFs mentioned in section 3.2.2, I consider it to be one of the more interesting ones, principally because it may be indicative of certain kinds of "self-modeling," and the theoretical approaches to phenomenal consciousness I intuitively find most promising are those which involve certain kinds of self-modeling.

That said, I still give it very little weight, since it's very unclear what, exactly, different mirror-related behaviors imply about the self-modeling capacities of the animal, and how those relate to the self-modeling capacities that may be necessary for conscious experience. For example see the quotes and sources in footnote 109.

Comment author: concerned_ 28 June 2017 06:17:13PM 0 points

What probability would you assign to a China brain being conscious?

Comment author: lukeprog 28 June 2017 06:55:55PM 5 points

I think of this as more of a definitional issue than a probability issue. Given functionalism about consciousness, if the population of China managed to implement the necessary function of consciousness, then this system would be conscious. The problem is that, on my view, the function of consciousness is likely so complicated and specific that for the population of China to implement it, the "population of China" would be basically unrecognizable as "the population of China." Hence, I think the Chinese nation thought experiment is a misleading intuition pump.

Comment author: kevinwong 28 June 2017 05:12:32PM 2 points

Have you given any thought to how animal and human interests might be commensurated? Even with very good evidence for animal consciousness, it's much more difficult to devise a way of weighing their moral interests.

Comment author: lukeprog 28 June 2017 06:45:49PM 1 point

It's a very difficult question. I wrote up some initial speculations on how I might address the "moral weight" question in Appendix Z7. Each of those "candidate dimensions of moral concern" can also be studied on their own, often in pretty similar ways to how "consciousness in general" can be studied.

Comment author: Lukas_Gloor 28 June 2017 05:29:28PM 0 points

Are you aware of any "hidden" (nociception-related?) cognitive processes that could be described as "two systems in conflict?" I find the hidden qualia view very plausible, but I also find it plausible that I might settle on a view on moral relevance where what matters about pain is not the "raw feel" (or "intrinsic undesirability" in Drescher's words), but a kind of secondary layer of "judgment" in the sense of "wanting things to change/be different" or "not accepting some mental component/input." I'm wondering whether most of the processes that would constitute hidden qualia are too simple to fit this phenomenological description or not...

Comment author: lukeprog 28 June 2017 06:42:28PM 0 points

I probably have thoughts on this, but first: Can you say more about what would count as "two systems in conflict"? E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way? Also, is the "secondary layer" you're talking about also meant to be "hidden", or are you talking about a "phenomenally conscious" second layer?

Comment author: thebestwecan 28 June 2017 05:24:29PM 4 points

Thanks for doing this AMA. I'm curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question "Is an insect conscious?" or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?

The Open Phil conversation notes with Brian Tomasik say:

Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism

(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik's well-known analogy is that there's no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there's something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn't give you an objective definition, in the same way that knowing one piece of wood on four legs is a table, or even having several examples of tables, doesn't give you an objective definition of a table.)

However, in the report, you write as though there is an objective definition (e.g. in the "Consciousness, innocently defined" section), and I feel most readers of the report will get that impression, e.g. that there's an objective answer as to whether insects are conscious.

Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it's still useful to use common sense rhetoric that treats it as objective, and you don't think it's that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there's still enough likelihood of Type B that you focus on questions like "If Type B is true, then is an insect conscious?" and would just shorthand this as "Is an insect conscious?" because e.g. if Type A is true, then consciousness research is not that useful in your view.

Comment author: lukeprog 28 June 2017 06:36:23PM 4 points

I'm not sure what you mean by "objective definition" or "objectively correct answer," but I don't think I think of consciousness as being "objective" in your sense of the term.

The final question, for me, is "What should I care about?" I elaborate my "idealized" process for answering this question in section 6.1.2. Right now, my leading guess for what I'd conclude upon going through some approximation of that idealized process is that I'd care about beings with valenced conscious experience, albeit with different moral weights depending on a variety of other factors (early speculations in Appendix Z7).

But of course, I don't know quite what sense of "valenced conscious experience" I'd end up caring about upon undergoing my idealized process for making moral judgments, and the best I can do at this point is something like the definition by example (at least for the "consciousness" part) that I begin to elaborate in section 2.3.1.

Re: Type A physicalism, aka Type A materialism. As mentioned in section 2.3.2, I do think my current view is best thought of as "'type A materialism,' or perhaps toward the varieties of 'type Q' or 'type C' materialism that threaten to collapse into 'type A' materialism anyway…" (see the footnote after this phrase for explanations). One longer article that might help clarify how I think about "type A materialism" w.r.t. consciousness or other things is Mixed Reference: The Great Reductionist Project and its dependencies.

That said, I do think the "triviality" objection is a serious one (Ctrl+F the report for "triviality objection to functionalism"), and I haven't studied the issue enough to have a preferred answer for it, nor am I confident there will ever be a satisfying answer to it — at least, for the purposes of figuring out what I should care about. Brian wrote a helpful explainer on some of these issues: How to Interpret a Physical System as a Mind. I endorse many of the points he argues for there, though he and I end up with somewhat different intuitions about what we morally care about, as discussed in the notes from our conversation.
