Daniel Dennett once wrote:
“Knock-down refutations are rare in philosophy, and unambiguous self-refutations are even rarer, for obvious reasons, but sometimes we get lucky.”
Earlier this year, we got lucky with Dennett 2023, in which Dennett characterizes AI (implicitly, large language models) as “counterfeit” and potentially dangerous non-persons. The bulk of Dennett 2023 offers arguments for allegedly catastrophic outcomes entailed by the (supposed) existence of “counterfeit people” (hereafter c-people) and proposes regulatory and social remedies. All of this, though, turns on accepting the begged question that LLMs are unquestionably c-people. In this note, I examine the claim that LLMs are c-people through the lens of Dennett’s own foundational work on personhood and consciousness.
Sketch. Dennett 2023 makes claims about personhood, so it is personhood with which I must engage. In the sequel I follow Dennett’s own theory of personhood and argue that a modern LLM could satisfy five of the six criteria for personhood laid out in Dennett 1988. The sixth criterion is where the real fun lies: the claim that persons must have conscious self-awareness. It is here that we introduce the key antagonists of our story: philosophical zombies (p-zombies).
P-zombies are hypothetical entities that are exactly like humans in every way except that they lack subjective experience. Dennett has full-throatedly denied their possibility throughout his career; Dennett 1995, e.g., articulates clear arguments that an entity which is behaviourally indistinguishable from a human, but does not have subjective experience, is an invalid construct that cannot exist: if the entity is behaviourally indistinguishable from a human, then it must also have subjective experience.
Can Dennett have it both ways? Can he outright deny the possibility that zombies exist while pointing at an LLM – which he acknowledges is so behaviourally indistinguishable from a human that it is a dangerous “counterfeit” – and cry “But this kind of zombie does exist!”? I will deal with personhood first, and then brace for the zombie attack.
Personhood is a complex concept with rich literatures of philosophical, legal, and social accounts, including, e.g., legal persona ficta for corporations and important environmental entities (Kramm 2020). There is already a rich, and growing, literature on the possibilities of moral patienthood, personhood, and other statuses and protections for AI (see, e.g., Harris and Anthis 2021, Mosakas 2021, Akova 2023). I make no attempt here to provide a comprehensive review or to engage explicitly with the breadth of this literature; my only goal with this note is to respond directly to Dennett’s position that AI persons are necessarily counterfeit. Dennett’s work is foundational to the investigation of consciousness and his voice carries the weight it deserves, which is why it is so important to critically interrogate his latest claims.
The rich literature on personhood aside, the only definition that really matters in the context of interrogating Dennett’s imagined c-people is his own. Dennett 1988 articulates a set of six criteria that he considers necessary for personhood:
1. A person must be a rational being.
2. A person must be a being to which states of consciousness are attributed, or to which psychological or mental or intentional predicates are ascribed.
3. A person must compel other persons to adopt a stance with respect to it.
4. A person must be capable of adopting a reciprocal stance with respect to other persons.
5. A person must be capable of verbal communication.
6. A person must be conscious and capable of subjective experience.
Dennett 2023 declines to offer a concrete definition of a c-person, so I am left to infer one: a c-person is an entity which fulfills all but one of Dennett’s criteria, in the same way that a philosophical zombie falls short of its human counterpart solely by lacking subjective experience.
I will now consider the capabilities of current foundation-model-scale LLMs against Dennett’s criteria for personhood.
RATIONALITY. Dennett 2023 acknowledges that LLMs are passing, or close to passing, Turing’s imitation game, when he writes of our “natural inclination to treat anything that seems to talk sensibly with us as a person.” In agreeing that LLMs ‘talk sensibly’, I take Dennett to have ceded the point that they are perceived as rational.
INTENTIONAL PREDICATES. Again, I take Dennett 2023 to have ceded this point from the outset. If there were no danger of attributing intentional predicates to LLMs, there would be no more need to label them “counterfeit people” than there would be to label a blueberry a “counterfeit peach.”
ADOPTING A STANCE. Cherry-picking the same quote again, I note that Dennett 2023 confesses to the adoption of an intentional stance with respect to LLMs – indeed, Dennett suggests that it is almost impossible not to do so: “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the ‘intentional stance’—turns out to be easy to invoke and almost impossible to resist, even for experts.”
RECIPROCITY IN STANCE ADOPTION. Short of descending into a Cartesian standoff, the best one can do is to ask for a self-report.[1] I asked the 25 September 2023 version of ChatGPT (GPT-4 with RLHF) whether it adopted a stance towards me and, if so, what it was. It replied thus:
Given your background in computer science and AI research, it’s reasonable to adopt the Intentional Stance when interacting with you. I treat you as a rational agent with beliefs, desires, and intentions. This allows for a more nuanced and effective interaction, particularly in discussing complex topics like AI ethics, personhood, and philosophy.
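For readers who wish to reproduce this informal experiment, the sketch below shows one way to pose the same self-report question programmatically. It is a minimal sketch only: it assumes the OpenAI Python client (openai >= 1.0), uses “gpt-4” as a stand-in for the 25 September 2023 ChatGPT snapshot, and reconstructs the prompt wording from memory; per footnote 1, any answer is a self-report, not evidence of an inner life.

```python
# Minimal sketch: posing the self-report question via the OpenAI API.
# Assumptions: the openai Python client (>= 1.0) is installed, an
# OPENAI_API_KEY is set in the environment, and "gpt-4" stands in for
# the 25 September 2023 ChatGPT snapshot used in the original exchange.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Do you adopt a stance towards me, in Dennett's sense "
                "of the intentional stance? If so, which stance, and why?"
            ),
        }
    ],
)

# Print the model's self-report (a report, not proof, of stance adoption).
print(response.choices[0].message.content)
```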
VERBAL COMMUNICATION. LLMs communicate with language and, if one insists on a pedantic interpretation of ‘verbal’, many, including ChatGPT[2], are now capable of truly verbal interaction.
This leaves us with only the sixth criterion of Dennett 1988 unmet: a person must be CONSCIOUS AND CAPABLE OF SUBJECTIVE EXPERIENCE.
The Zombie Attack. Suppose there exists an LLM that passes Turing’s imitation game (viz., the LLM is behaviourally indistinguishable from a human). Some existing LLMs already appear to pass the Turing test so convincingly that technically sophisticated human persons have been willing to sacrifice their careers on the point (see Lemoine 2022). Moreover, the expert consensus articulated in Butlin et al. 2023 suggests that while current LLMs do not yet convincingly demonstrate subjective experience, there are no fundamental barriers to future AI technologies that could; indeed, even zombie enthusiast David Chalmers gives 20% credence to the development of AI with subjective experience in the next decade (Chalmers 2023). Dennett 2023 acknowledges this coming reality:
“This has engendered not just a cottage industry but a munificently funded high-tech industry engaged in making products that will trick even the most skeptical of interlocutors.”
If an LLM is able to “trick even the most skeptical of interlocutors” into believing that it has subjective experience, is it not entitled to the same consideration that Dennett 1995 very aggressively grants to philosophical zombies? Dennett argues convincingly that zombies are invalid constructs: if an entity is behaviourally (functionally) identical to a human being, then it must be a human being.
Like the protagonist in a grindhouse zombie film, Dennett has backed himself into a corner as a silicon zombie advances on him: a manifestation of an unavoidable contradiction with his own beliefs.
If, as Dennett 1995 demands, we must grant subjective experience to any entity that is behaviourally indistinguishable from a human, then the only available escape from Dennett’s dilemma is to extend the same consideration to LLMs. Dennett 2023 has no manoeuvring room left to deny the subjective experience of such an LLM without rejecting Dennett 1995’s position on p-zombies.
And if we grant LLMs subjective experience, then they have met the sixth criterion of Dennett 1988 for personhood, and we have no choice but to conclude that LLMs are persons (and not c-persons).[3]
This raises a serious question: why, when faced with the possibility of a silicon zombie (a machine that passes Turing’s imitation game and is functionally indistinguishable from a human), did Dennett find the only available logical conclusion, constrained by his own axioms and past arguments, so unpalatable that he wrote what appears on its face to be an “unambiguous self-refutation”? Resolving this apparent contradiction has profound moral implications regarding whether, and when, AI systems could be considered moral patients deserving ethical treatment and personhood rights.
Dennett 2023 argues that there are dangers in adopting an anthropomorphic stance towards an entity which has no moral, or rational, entitlement to it. These dangers are real: they have been articulated in the literature (see, e.g., Bostrom 2017) and litigated at length in the public media.
Consensus opinion at the end of 2023 is that no current AI system (LLMs included) meets our best criteria for consciousness, but Butlin et al. 2023 are clear that there is no principled reason a future system could not do so. We must also, then, consider the danger of adopting an anthropocentric stance to which we are not entitled and thereby allowing ourselves to treat real persons as if they were counterfeit objects. I’ll give Dennett 1988 the last word:
“... our assumption that an entity is a person is shaken precisely in those cases where it matters: when wrong has been done and the question of responsibility arises.”
References
Akova, Fırat (2023), “Artificially sentient beings: Moral, political, and legal issues”, New Techno Humanities.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell (2021), “On the dangers of stochastic parrots: Can language models be too big?”, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610-623.
Bostrom, Nick (2017), Superintelligence, Dunod.
Butlin, Patrick, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, et al. (2023), “Consciousness in artificial intelligence: Insights from the science of consciousness”, arXiv preprint arXiv:2308.08708.
Chalmers, David J. (2023), “Could a large language model be conscious?”, arXiv preprint arXiv:2303.07103.
Dennett, Daniel C. (1988), “Conditions of personhood”, in What is a person?, Springer, pp. 145-167.
— (1995), “The unimagined preposterousness of zombies”, Journal of Consciousness Studies, 2, pp. 322-325.
— (2023), “The problem with counterfeit people”, The Atlantic.
Harris, Jamie and Jacy Reese Anthis (2021), “The moral consideration of artificial entities: A literature review”, Science and Engineering Ethics, 27, 4, p. 53.
Kramm, Matthias (2020), “When a river becomes a person”, Journal of Human Development and Capabilities, 21, 4, pp. 307-319.
Lemoine, Blake (2022), “Is LaMDA Sentient?—an Interview”, Medium.
Li, Kenneth, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg (2022), “Emergent world representations: Exploring a sequence model trained on a synthetic task”, arXiv preprint arXiv:2210.13382.
Mosakas, Kestutis (2021), “On the moral status of social robots: Considering the consciousness criterion”, AI & SOCIETY, 36, pp. 429-443.
[1] I acknowledge the objection that GPT-4 may simply be “parroting” back what it has learned from its training data. However, the evidence that LLMs build world models is now overwhelming (Li et al. 2022), and so I reject the claims of “mere” stochastic psittacism advocated in Bender et al. 2021. An LLM can build an internal model of the human with which it is interacting.
[2] See, e.g., OpenAI’s announcement “ChatGPT can now see, hear, and speak”.
[3] There is one small trap door through which our hero Dennett may still escape: he claimed only that his six criteria were necessary for personhood, explicitly avoiding a claim of sufficiency. The onus now, though, is on Dennett to articulate a sufficient condition for personhood that an LLM cannot meet.