The remarkable capabilities of ChatGPT and other tools based on large language models (LLMs) have generated a fair amount of idle speculation over whether such programs might in some sense be considered sentient. The conventional wisdom could be summarized as: of course not, but they are way spookier than anticipated, in a way that is waking people up to just how weird it might be to interact with a truly intelligent machine.

(It is also worth noting that some very knowledgeable people are open to granting LLMs at least a smidgen of consciousness.)

Given that LLMs are not, in my view, in any way conscious, they raise another question: should the human-like behavior of non-sentient computer programs cause me to re-evaluate my opinions on the consciousness of other species?

My beliefs about the consciousness of other species are held lightly. Because there is no general scientific understanding of the material basis of consciousness, all I have to go on is intuition based on my sense of the complexity of other animals and their similarity to the only animals I know to be conscious (i.e., humans).

Over time, my opinions have shifted in the direction of allowing more animals into the "consciousness club." At one point, my beliefs were roughly thus:

  • Humans: conscious
  • Non-human primates: almost certainly conscious
  • Dogs: overwhelmingly likely to be conscious (just look at that face)
  • Mice: probably conscious, but getting trickier to litigate
  • Fish and amphibians: maybe conscious in some limited way but probably not
  • Insects: almost certainly not conscious
  • Single-celled organisms: not conscious

I certainly may be guilty of chauvinism towards non-mammals, but, again, these opinions are lightly held.

These days, based on a greater awareness of the complexity of many animal behaviors, I'm more likely to let fish and amphibians into the club and admit to greater uncertainty regarding insects. (Sorry, protozoa.)

LLMs, however, raise a challenging counterexample to the idea that complexity of behavior serves as evidence of consciousness. The internet abounds with examples of these programs engaging in conversations that are not only shockingly sophisticated but also deeply unsettling in the way they seem to convey personality, desire, and intent.

I don't think many people seriously entertain the notion that these programs are conscious. Consciousness aside, are LLMs, in some sense, smarter than a bee? A trout? A squirrel? They clearly have capabilities that these other animals don't, and just as clearly have deficits these other animals don't. If LLMs are an existence proof of extremely complex behavior in the absence of consciousness, should we revise our beliefs about the likelihood of consciousness in other animals?

One obvious objection is that complexity might be a correlate of consciousness in biological organisms but not in machines. For example, the U.S. electrical grid is extremely complex, but no one suspects it of being sentient, because it lacks the basic features and organization of other conscious systems.

We know fairly well how LLMs work, and we know that they are organized in a way that is not biologically plausible. Brains have features such as memory, attention, sensory awareness, and self-reflexivity that may be necessary to support consciousness; LLMs lack them.

In this view, LLMs can churn out human-like output via an extremely complex mathematical function while revealing little about animal minds. LLMs don't have any interiority. They don't even have a representation of the world. (This is evident in the way they are happy to spout reasonable-sounding untruths.) We might therefore conclude that LLMs teach us nothing about consciousness, even if they may hold lessons about certain functions of brains (such as language construction).

I think this is a powerful objection, but also maybe a little too quick. Insects such as the Sphex wasp are famous for displaying behavior that is both fairly complex and also extremely stereotyped. And it's worth underscoring just how deeply spooky conversing with LLMs can be. It's easy enough for me to write off ChatGPT as a machine. It feels somewhat dissonant, however, to write ChatGPT off as a machine while also allowing that the architecture of a spider's brain makes it conscious of the world. These things both can be true. But are they?

It strikes me as more plausible than it once did that simpler organisms -- including, yes, fish and amphibians -- might be "mere automatons" displaying behavior that, like ChatGPT's, seems to carry intentionality but is really "just" a sophisticated algorithm.

As before, I hold this opinion lightly.

Comments

It seems reasonable to guess that modern language models aren't conscious in any morally relevant sense. But it seems odd to use that as the basis for a reductio of arguments about consciousness, given that we know nothing about the consciousness of language models.

Put differently: if a line of reasoning would suggest that language models are conscious, then I feel like the main update should be about consciousness of language models rather than about the validity of the line of reasoning. If you think that e.g. fish are conscious based on analysis of their behavior rather than evolutionary analogies with humans, then I think you should apply the same reasoning to ML systems.

I don't think that biological brains are plausibly necessary for consciousness. It seems extremely likely to me that a big neural network can in principle be conscious without adding any of those bells and whistles, and it seems clear that SGD could find conscious models.

I don't think the fact that language models say untrue things shows they have no representation of the world (in fact, for a pre-trained model that would be a clearly absurd inference -- they are trained to predict what someone else would say and then sample from that distribution, which will of course lead to confidently saying false things when the predicted speaker can know things the model does not!)

That all said, I think it's worth noting and emphasizing that existing language models' statements about their own consciousness are not evidence that they are conscious, and that more generally the relationship between a language model's inner life and its utterances is completely unlike the relationship between a human's inner life and their utterances (because they are trained to produce these utterances by mimicking humans, and they would make similar utterances regardless of whether they are conscious). A careful analysis of how models generalize out of distribution, or of surprisingly high accuracy on some kinds of prediction tasks, could provide evidence of consciousness, but we don't have that kind of evidence right now.

Thanks for this response. It seems like we are coming at this topic from very different starting assumptions. If I'm understanding you correctly, you're saying that we have no idea whether LLMs are conscious, so it doesn't make sense to draw any inferences from them to other minds.

That's fair enough, but I'm starting from the premise that LLMs in their current form are almost certainly not conscious. Of course, I can't prove this. It's my belief based on my understanding of their architecture. I'm very much not saying they lack consciousness because they aren't instantiated in a biological brain. Rather, I don't think that GPUs performing parallel searches through a probabilistic word space by themselves are likely to support consciousness.

Stepping back a bit: I can't know if any animal other than myself is conscious, even fellow humans. I can only reason through induction that consciousness is a feature of my brain, so other animals that have brains similar in construction to mine may also have consciousness. And I can use the observed output of those brains -- behavior -- as an external proxy for internal function. This makes me highly confident that, for example, primates are conscious, with my uncertainty growing with greater evolutionary distance.

Now along come LLMs to throw a wrench in that inductive chain. LLMs are -- in my view -- zombies that can do things previously only humans were capable of. And the truth is, a mosquito's brain doesn't really have all that much in common with a human's. So now I'm even more uncertain -- is complex behavior really a sign of interiority? Does having a brain made of neurons really put lower animals on a continuum with humans? I'm not sure anymore.

Rather, I don't think that GPUs performing parallel searches through a probabilistic word space by themselves are likely to support consciousness.

This seems like the crux. It feels like a big neural network run on a GPU, trained to predict the next word, could definitely be conscious. So to me this is just a question about the particular weights of large language models, not something that can be established a priori based on architecture.

My current belief in the sentience of most nonhuman animals comes partly from the fact that they were subjected to many of the same evolutionary forces that gave consciousness to humans.  Other animals also share many brain structures with us.  ChatGPT never went through that process and doesn't have the same structures, so I wouldn't really expect it to be conscious.  I guess your post looks at the outputs of conscious beings, which are very similar to what ChatGPT produces, whereas I'm partly looking at the inputs that we know have created consciousness.

Just my two cents.  And I do think this is a worthwhile question to ask!  But I would probably update more in the direction of "digital sentience is a (future) possibility" than "more nonhuman animals probably aren't conscious".

Many nonhuman animals also show long-term abnormal behaviours, and will try to access analgesia (even paying a cost to do so) if they are in pain. I don’t think we have evidence that’s quite analogous to that with large language models, and if we did, it would cause me to update in favour of current models having sentience. It’s also worth noting that the same lines of evidence that cause me to believe nonhuman animals are sentient also lead me to believe that humans are sentient, even if some of the evidence (like physiological and neuro-anatomical similarities, and evolutionary distance) may be somewhat stronger in humans.

Other animals do share many brain structures with us, but by the same token, most animals lack brain structures that are the most fundamental to what make us human. As far as I am aware (and I will quickly get out of my depth here), only mammals have a neocortex, and small mammals don't have much of one. 

Hopefully this is clear from my post, but ChatGPT hasn't made me rethink my beliefs about primates or even dogs. It definitely has made me more uncertain about invertebrates, reptiles, and fish. (I have no idea what to think about birds.)

Even in humans, language production is generally subconscious. At least, my experience of talking is that I generally first become conscious of what I say as I'm saying it. I have some sense of what I might want to say before I say it, but the machinery that selects specific words is not conscious. Sometimes, I think of a couple of different things I could say and consciously select between them. But often I don't: I just hear myself speak. Language generation may often lead to conscious perceptions of inner speech, but it doesn't seem to rely on them.

All of this suggests that the possibility of non-conscious chatbots should not be surprising. It may be that chatbots provide pretty good evidence that cognitive complexity can come apart from consciousness. But introspection alone should provide sufficient evidence for that.

splinter -- if we restrict attention to sentience (capacity to feel pleasure/pain, or to flourish/suffer) rather than consciousness, then it would be very difficult for any AI findings or capabilities to challenge my conviction that most non-human, mobile animals are sentient.

The reasons are evolutionary and functional. Almost every animal nervous system evolves to be capable of adjusting its behavior based on feedback from the environment, in the form of positive and negative reinforcers, which basically boil down to pleasure and pain signals. My hunch is that any animal capable of operant conditioning is sentient in a legitimate sense -- and that would include basically all vertebrates with a central nervous system (inc. mammals, birds, reptiles), and also most invertebrates that evolved to move around to find food and avoid predators.

So, consciousness is a red herring. If we're interested in the question of whether non-human animals can suffer, we need to ask whether they can be operantly conditioned by any negative reinforcers. The answer, almost always, is 'yes'. 

I am using conscious and sentient as synonyms. Apologies if this is confusing. 

I don't doubt at all that all animals are sentient in the sense that you mean. But I am referring to the question of whether they have subjective experience -- not just pleasure and pain signals but also a subjective experience of pleasure and pain.

This doesn't feel like a red herring to me. Suffering only takes on a moral valence if it describes a conscious experience.

splinter -- I strongly disagree on that. I think consciousness is built up out of valenced reactions to things (e.g. pleasure/pain signals); it's not some qualitatively special overlay on top of those signals. 

And I don't agree that suffering is only morally relevant if it's 'consciously experienced'.  

Not to rehash everyone's well-rehearsed position on the hard problem, but surely in the above sentience is the red herring? If non-human animals are not conscious, i.e. "there are no lights on inside" not just "the lights are on but dimmer", then there is actually no suffering? 

Edit: A good intuition pump on this crux is David Chalmers' 'Vulcan' thought experiment (see the 80k podcast transcript) -- my intuition tells me we care about the Vulcans, but maybe the dominant philosophy of mind position in EA is to not care about them (I might be confounding overlap between illusionism and negative utilitarianism though)? That seems like a pretty big crux to me.

I don't see, at the evolutionary-functional level, why human-type 'consciousness' (whatever that means) would be required for sentience (adaptive responsiveness to positive/negative reinforcers, i.e. pleasure/pain). Sentience seems much more foundational, operationalizable, testable, functional, and clear.

But then, 99% of philosophical writing about consciousness strikes me as wildly misguided, speculative, vague, and irrelevant. 

Psychology has been studying 'consciousness' ever since the 1850s, and has made a lot of progress. Philosophy, not so much, IMHO.

Follow-up: I've never found Chalmers' zombie or Vulcan thought experiments at all compelling. They sound plausible at first glance as interesting edge cases, but I think they're not at all plausible or illuminating if one asks how such a hypothetical being could have evolved, and whether their cognitive/affective architecture really makes sense. The notion of a mind that doesn't have any valences regarding external objects, beings, or situations would boil down to a mind that can't make any decisions, can't learn anything (through operant conditioning), and can't pursue any goals -- i.e. not a 'mind' at all.

I critiqued the Chalmers zombie thought experiment in this essay from c. 1999. Also see this shorter essay about the possible functions of human consciousness, which I think center around 'public relations' functions in our hypersocial tribal context, more than anything else.

Comparing the "consciousness" of LLMs and AI models with the consciousness of living organisms feels to me almost like comparing apples to oranges.

Yet I'm of the opinion that the process by which living brains manifest consciousness may not be all that different from the process that LLMs use, just translated into its biochemical near-equivalent.

However, I'm 100% confident that LLMs and other AI (now or in the future) can never be conscious of the world in the same way that living organisms are conscious (or not conscious).

This is because living things have something that is missing from, and can never be found in, LLMs and AI. That is the soul or spirit, which is what brings about the consciousness and life that living organisms have. (Don't quote me on this though because, of course, I cannot prove it LOL.)

However, at some point LLMs (or AIs) will be able to simulate human-like consciousness so perfectly that it would become nearly impossible to tell that they are not really conscious or sentient (if GPT-3 is like this, imagine what GPT-50 would be like!!!)

But they will never have the same kind of consciousness as even the lowest of living organisms.

Unless a way to give them a spirit or a soul is discovered.
