Comment author: Nekoinentr 29 June 2017 06:39:29AM 0 points [-]

The idea of a natural kind is helpful. The fact that people mean different things by "consciousness" seems unsurprising, as that's the case for any complex word that people have strong motives to apply (in this case because consciousness sounds valuable). It also tells us little about the moral questions we're considering here. Do you guys agree or am I missing something?

Comment author: Brian_Tomasik 29 June 2017 06:16:34AM *  0 points [-]

A good example of what thebestwecan means by "objectivity" is the question "If a tree falls in a forest and no one is around to hear it, does it make a sound?" He and I would say there's no objective answer to this question because it depends what you mean by "sound". I think "Is X conscious?" is a tree-falls-in-a-forest kind of question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

Yeah, ontologically primitive, or at least so clearly a natural kind (like the difference between gold atoms and potassium atoms) that people wouldn't really dispute the boundaries of the concept. (Admittedly, there might be edge cases where even what counts as a "gold atom" is up for debate.)

Comment author: kbog  (EA Profile) 29 June 2017 06:06:45AM *  0 points [-]

The report aims to be a "direct inquiry into moral status," but because it does so from an anti-realist perspective,

Why? It's not more widely accepted than realism, and arguably it's decision-irrelevant regardless of its plausibility as per Ross's deflationary argument (http://www-bcf.usc.edu/~jacobmro/ppr/deflation-ross.pdf).

a certain notion of idealized preferences comes into play. In other words: if you don't think "objective values" are "written into the fabric of the universe," then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on.

But there are plenty of accounts of anti-realist ethics, and they don't all make everything reducible to preferences or provide this account of what we ought to value. I still don't see what makes this view more noteworthy than the others and why Open Phil is not as interested in either a general overview of what many views would say or a purely empirical inquiry from which diverse normative conclusions can be easily drawn.

And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my "idealized" moral intuitions would be.

I don't see what reason they have to take them into account at all, unless they accept ideal advisor theory and are using your preferences as a heuristic for what their preferences would be if they did the same thing that you are doing. Is ideal advisor theory their view now? And is it also the case for other moral topics besides animal ethics?

Also: can we expect a reduced, empirical-only version of the report to be released at some point?

Comment author: Nekoinentr 29 June 2017 05:16:09AM 1 point [-]

I don't think I understand what you mean by consciousness being objective. When you mention "what processes, materials, etc. we subjectively choose to use as the criteria for consciousness", this sounds to me as if you're talking about people having different definitions of consciousness, especially if the criteria are meant as definitive rather than indicative. However, presumably in many cases whether the criteria are present will be an objective question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

Comment author: Nekoinentr 29 June 2017 03:20:19AM 0 points [-]

I wouldn't have thought that hits-based giving should be a general strategy, as it's one highly specific way of having an impact. I can understand 80,000 Hours developing it as a way to understand their own impact; it fits when you're giving in-depth advice to a few individuals on their whole careers, but that's an atypical case.

Comment author: Roxanne_Heston  (EA Profile) 29 June 2017 12:47:56AM 0 points [-]

Appreciate you posting! I actually drew inspiration from that for the Involvement Guide, but if you think I missed something I'd be more than happy to hear it.

Comment author: Roxanne_Heston  (EA Profile) 29 June 2017 12:46:51AM 0 points [-]

Thanks for the comment. It wasn't very necessary there, so even though it seems fairly innocuous to me given its frequency of use, I decided to nix it.

Comment author: lukeprog 29 June 2017 12:09:49AM *  1 point [-]

I'll use the terms "nociception" and "pain" as defined in Appendix D:

nociception is the encoding and processing of noxious stimuli, where a noxious stimulus is an actually or potentially body-damaging event (either external or internal, e.g. cutaneous or visceral)… Pain, in contrast to mere nociception, is an unpleasant conscious experience associated with actual or potential body damage (or akin to unpleasant experiences associated with noxious stimuli).

…Nociception can occur without pain, and pain can occur without nociception. Loeser & Treede (2008) provide examples: “after local anesthesia of the mandibular nerve for dental procedures, there is peripheral nociception without pain, whereas in a patient with thalamic pain [a kind of neuropathic pain resulting from stroke], there is pain without peripheral nociception.”

Often, it's assumed that if animals are conscious, then it's likely that they experience pain in conjunction with the types of nociceptive processing that, in humans, would be accompanied by conscious pain — or at least, this seems likely for the animals that are fairly similar to us in their neural architecture (e.g. mammals, or perhaps all vertebrates). And likewise, it seems that if we limit the nociception that occurs in their bodies, then this will also limit the conscious pain they experience if they are conscious at all.

Unfortunately, humans also experience pain without nociception, e.g. (as my report says) "neuropathic pain, and perhaps also… some cases of psychologically-created experiences of pain, e.g. when a subject is hallucinating or dreaming a painful experience." Assuming whichever animals are conscious are also capable of non-nociceptive pain, this means that conscious animals may be capable of suffering in ways that are difficult for us to detect.

Even still, I think we can study the mechanisms of nociception (and other triggers for conscious pain) independently of studying consciousness, and this could lead to interventions that (probably) address the usual causes of conscious pain even if we don't know whether a given animal is conscious or not.

However, I'm not sure this is quite what you meant to be asking; let me know if there was a somewhat different question you were hoping I'd answer.

Comment author: lukeprog 28 June 2017 11:41:54PM 0 points [-]

Hi Evan, your question has now been posted to the AMA here.

Comment author: lukeprog 28 June 2017 11:41:38PM 1 point [-]

Cross-posted here from a comment on the announcement post, a question from Evan_Gaensbauer:

Do you think science or philosophy can meaningfully separate the capacity to experience suffering or pain from however else consciousness is posited to be distributed across species? What would be fruitful avenues of research for effective altruists to pursue if it's possible to solve the problem in the first question, without necessarily addressing whatever remains of consciousness?

(To preserve structure, I'll reply in a comment reply.)

Comment author: lukeprog 28 June 2017 11:35:00PM 1 point [-]

My hope was that the Type A-ness / subjectivity of the concept of "consciousness" I'm using would be clear from sections 2.3.1 and 2.3.2, and then I can write paragraphs like the one above about fruit fly consciousness, which refers back to the subjective notion of consciousness introduced in section 2.3.

But really, I just find it very cumbersome to write in detail and at length about consciousness in a way that allows every sentence containing consciousness words to clearly refer to subjective / Type A-style consciousness. It's similar to what I say in the report about fuzziness:

given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)

But then, throughout the report, I make liberal use of "normal" phrases about consciousness such as what's conscious vs. not-conscious, "becoming" conscious or not conscious, what's "in" consciousness or not, etc. It's just really cumbersome to write in any other way.

Another point is that, well, I'm not just a subjectivist / Type A theorist about consciousness, but about nearly everything. So why shouldn't we feel fine using more "normal" sentence structures to talk about consciousness, if we feel fine talking about "living things" and "mountains" and "sorting algorithms" and so on that way? I don't have any trouble talking about the likelihood that there's a mountain in such-and-such city, even though I think "mountain" is a layer of interpretation we cast upon the world.

Comment author: lukeprog 28 June 2017 11:20:02PM *  1 point [-]

I'm not sure whether the thing you're trying to say is compatible with what I'd say or not. The way I'd say it is this:

The 'weak illusionist' says that while many features of conscious experience can be 'explained away' as illusions, the 'something it's like'-ness of conscious experience is not (and perhaps cannot be) an illusion, and thus must be "explained" rather than "explained away." In contrast, the "strong illusionist" says that even the 'something it's like'-ness of conscious experience is an illusion.

But what might it be for the 'something it's like'-ness to be an illusion? Basically, that it seems to us that there is more to conscious experience than 'zero qualia', but in fact there are only zero qualia. E.g. it seems to us that there are 'diet qualia' that are more than 'zero qualia', but in fact 'diet qualia' have no distinctive features beyond 'zero qualia.'

Now in fact, I think there probably is more to 'zero qualia' than Frankish's "properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective," but I don't think those additional properties will be difficult for the strong illusionist to adopt, and I don't think they'll vindicate a position according to which there is a distinctive 'diet qualia' option. Speaking of diet qualia vs. zero qualia is very rough: the true form of the answer will be classes of cognitive algorithms (on my view).

Comment author: WillPearson 28 June 2017 11:08:48PM 0 points [-]

Have there been any models of other activities that might reduce existential risk?

E.g. convincing prospective AGI researchers that it is dangerous and should be handled carefully? It would seem that that might increase the pool of potential researchers and also give more time for a safe approach to be developed?

In response to 2017 LEAN Statement
Comment author: DonyChristie 28 June 2017 10:31:36PM 0 points [-]

Local group seeding and activation.

Thanks for this link. I may have raised this in a private channel, but I want to take the time to point out that based on anecdotal experience, I think LEAN & CEA shouldn't be seeding groups without taking the time to make sure the groups are mentored, managed by motivated individual(s), and grown to sustainability. I found my local group last year and it was essentially dilapidated. I felt a responsibility to run it but was mostly unsuccessful in establishing contact to obtain help managing it until some time in 2017. I'm predicting these kinds of problems will diminish at least somewhat now that LEAN & Rethink have full-time staff, the Mentoring program, more group calls, etc. :)

Comment author: lukeprog 28 June 2017 10:02:10PM *  1 point [-]

For others' benefit, what I said in the report was:

I think I can make a weakly plausible case for (e.g.) Gazami crab consciousness, and I think I can make a weakly plausible case for chimpanzee non-consciousness.

By "weakly plausible" I meant that I think I can argue for a ~10% chance of Gazami crab consciousness, and separately for a ~10% chance of chimpanzee non-consciousness.

Such arguments would draw from considerations that are mentioned at least briefly somewhere in the report, but it would bundle them together in a certain way and elaborate certain points.

My argument for ~10% chance of chimpanzee non-consciousness would look something like an updated version of Macphail (1998), plus many of the considerations from Dennett (2017). Or, to elaborate that a bit: given the current state of evidence on animal cognition and behavior, and given what is achievable using relatively simple deep learning architectures (including deep reinforcement learning), it seems plausible (though far from guaranteed) that the vast majority of animal behaviors, including fairly sophisticated ones, are the product of fairly simple (unconscious) learning algorithms operating in environments with particular reward and punishment gradients, plus various biases in the learning algorithms "organized in advance of experience" via evolution. Furthermore, it seems plausible (though not likely, I would say) that phenomenal consciousness depends on a relatively sophisticated suite of reasoning and self-modeling capacities that humans possess and chimpanzees do not (and which may also explain why chimpanzees can't seem to learn human-like syntactically advanced language). I am pretty confident this conjunction of hypotheses isn't true, but I think something like this is "weakly plausible." There are other stories by which it could turn out that chimpanzees aren't conscious, but the story outlined above is (very loosely speaking) the "story" I find most plausible (among stories by which chimpanzees might not be conscious).
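(Purely as an illustrative aside, and not anything drawn from the report: the sketch below is a toy tabular Q-learning agent in a made-up one-dimensional world. The point is only to make "fairly simple (unconscious) learning algorithms operating in environments with particular reward and punishment gradients" concrete — a one-line update rule, driven by reward and punishment alone, ends up producing behavior that looks goal-directed.)

```python
# Toy illustration (not from the report): tabular Q-learning in a tiny 1-D world.
# A very simple update rule plus a reward/punishment gradient is enough to produce
# seemingly goal-directed behavior (approach "food", avoid the "noxious" end).

import random

N_STATES = 6          # positions 0..5; state 0 is "noxious", state 5 is "food"
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated discounted reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move, then return (new_state, reward). Both ends of the world are terminal."""
    new_state = max(0, min(N_STATES - 1, state + action))
    if new_state == 0:
        return new_state, -1.0   # punishment ("noxious stimulus")
    if new_state == N_STATES - 1:
        return new_state, +1.0   # reward ("food")
    return new_state, 0.0

def choose_action(state):
    """Epsilon-greedy: mostly exploit current Q-values, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(2000):
    state = random.randint(1, N_STATES - 2)   # start somewhere in the middle
    while state not in (0, N_STATES - 1):
        action = choose_action(state)
        new_state, reward = step(state, action)
        # The entire "learning" is this one-line temporal-difference update:
        best_next = max(Q[(new_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = new_state

# The learned policy moves toward the rewarding end and away from the punishing one,
# even though nothing in the few lines above looks like a candidate for consciousness.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})
```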

My case for a ~10% chance of Gazami crab consciousness would involve pulling together a variety of weak considerations in favor of the "weak plausibility" of Gazami crab consciousness. For example: (1) given the considerations from Appendix H, perhaps phenomenal consciousness can be realized by fairly simple cognitive algorithms, (2) even assuming fairly "sophisticated" cognition is required for consciousness (e.g. a certain kind of self-model), perhaps 100,000 neurons are sufficient for that, and (3) perhaps I'm confused about something fairly fundamental, and I should be deferring some probability mass to the apparently large number of consciousness scholars who are physicalist functionalists and yet think it's quite plausible that arthropods are conscious.
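(Again purely as an illustrative aside: here is a toy numerical sketch of what "pulling together a variety of weak considerations" can look like in odds form. The Bayes factors and the prior are invented, this is not the procedure actually used in the report, and it assumes — strongly — that the considerations are roughly independent.)

```python
# Toy illustration only: the numbers below are invented, and this is not the
# procedure used in the report. It just shows how several individually weak
# considerations can move a skeptical prior to something like 10%, assuming
# the considerations are roughly independent.

def combine(prior_prob, bayes_factors):
    """Multiply the prior odds by each Bayes factor, return the posterior probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1.0 + odds)

prior = 0.02                            # hypothetical skeptical prior
weak_considerations = [2.0, 1.8, 1.6]   # hypothetical factors for considerations (1)-(3) above

print(round(combine(prior, weak_considerations), 3))   # ~0.105 with these made-up inputs
```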

Comment author: thebestwecan 28 June 2017 09:30:13PM *  1 point [-]

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low-probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is just frequently qualify that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think "Oh, I can just assign 0% to fruit flies!" but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

Comment author: lukeprog 28 June 2017 08:44:40PM 4 points [-]

The report aims to be a "direct inquiry into moral status," but because it does so from an anti-realist perspective, a certain notion of idealized preferences comes into play. In other words: if you don't think "objective values" are "written into the fabric of the universe," then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on. I won't make the case for this meta-ethical approach here, but I link some relevant sources in the report, in particular in footnote 239.

This is one reason I say at the top of the report that:

This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my "idealized" moral intuitions would be.

Comment author: lukeprog 28 June 2017 08:26:41PM *  1 point [-]

I think I always found Dennett's general approach quite plausible (albeit a bit too "hasty" in its proposed reductions; see footnote 222), though I hadn't read other illusionist accounts prior to beginning this investigation. For me, reading Frankish made a bigger difference to my confidence in illusionism than any particular neuroscience papers or books.

Personally, I find discussions of p-zombies somewhat unhelpful, but for those steeped in that literature, it might be a useful set of concepts for explaining illusionism. My first recommendation would still be Frankish's papers, though.

Comment author: lukeprog 28 June 2017 08:22:02PM 1 point [-]

Got it, thanks for clarifying. Off the top of my head, I can't think of any unconscious or at least "hidden" processing that is known to work in the relatively sophisticated manner you describe, but I might have read about such cases and am simply not remembering them at the moment. Certainly an expert on unconscious/hidden cognitive processing might be able to name some fairly well-characterized examples, and in general I find it quite plausible that such cognitive processes occur in (e.g.) the human brain (and thus potentially in the brains of other animals). Possibly the apparent cognitive operations undertaken by the non-verbal hemisphere in split-brain patients would qualify, though they seem especially likely to qualify as "conscious" under the Schwitzgebel-inspired definition even if they are not accessible to the hemisphere that can make verbal reports.

Anyway, the sort of thing you describe is one reason why, in section 4.2, my probabilities for "consciousness of a sort I intuitively morally care about" are generally higher than my probabilities for "consciousness as loosely defined by example above." Currently, I don't think I'd morally care about such cognitive processes so long as they were "unconscious" (as loosely defined by example in my report), but I think it's at least weakly plausible that if I was able to carry out my idealized process for making moral judgments, I would conclude that I care about some such unconscious processes. I don't use Brian's approach of "mere" similarities in a multi-dimensional concept space, but regardless I could still imagine myself morally caring about certain types of unconscious processes similar to those you describe, even if I don't care about some other unconscious processes that may be even more similar (in Brian's concept space) to the processes that do instantiate "conscious experience" (as loosely defined by example in my report). (I'd currently bet against making such moral judgments, but not super-confidently.)

Comment author: Lukas_Gloor 28 June 2017 08:01:30PM 0 points [-]

I was thinking about a secondary layer that is hidden as well.

E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way?

Hard to say. On Brian's perspective, with similarities in multi-dimensional concept space, the competition among neural signals may already qualify to an interesting degree. But let's say we are interested in something slightly more sophisticated, but not sophisticated enough that we're inclined to look at it as "not hidden." (Maybe it would qualify if the hidden nociceptive signals alter subconscious dispositions in interesting ways, though it depends on what that would look like and how it compares to what is going on introspectively with suffering that we have conscious access to.)
