AI anthropomorphism is the attribution of human-like feelings, mental states, and behavioral characteristics to artificial intelligence systems.
Since the earliest days of AI development, humans have interpreted machine outputs through anthropomorphic frameworks, but the recent emergence of generative AI has amplified these tendencies. Contemporary AI systems can generate extremely human-like outputs and are often designed specifically to do so, meaning that their anthropomorphic effects can be especially powerful. Factors related to the user of the AI – such as culture, age, education, and personality traits – are also important determinants of the strength of anthropomorphic effects.
In some cases, anthropomorphism is accompanied by explicit beliefs that AI systems are capable of empathy, understanding, or consciousness. AI anthropomorphism can result in societal benefits, such as increasing information accessibility and personalizing learning or entertainment, as well as risks including overtrust, manipulation, emotional dependency, and weaponized deception. As AI has entered the technological mainstream and become more integrated into daily life, the prevalence and implications of anthropomorphism have increasingly become subjects of scientific research and public debate.
Views of artificial agents possessing human-like intelligence have existed since the early development of computers in the mid-20th century. The use of the human mind as a metaphor for understanding the workings of machine systems was prevalent among researchers in the early days of computer science, with multiple influential works popularizing the idea of intelligent machines. [1] [2] Among the most widely cited papers of this period was Alan Turing's 1950 paper "Computing Machinery and Intelligence", in which he introduced the Turing test, proposing that a machine could be considered intelligent if it could produce conversation indistinguishable from that of a human. [3] These academic works of the 1940s and 1950s gave early credibility to the idea that machine workings could be thought of similarly to human minds. [4]
The public quickly came to view artificial systems similarly, often with exaggerated conceptions of the capabilities of early machines. [5] One of the best-known demonstrations of this was the chatbot ELIZA, designed by Joseph Weizenbaum in 1966. ELIZA responded to user inputs with a rudimentary text-processing approach based on keyword matching and scripted substitution, which could not be considered anything resembling true understanding of the inputs, yet users, even when fully aware of ELIZA's limitations, often began to ascribe motivation and understanding to the program's output. [6] Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." [7]
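ELIZA's technique can be illustrated with a minimal sketch: the program scans the user's input for keywords, reflects first- and second-person words, and slots the captured fragment into a canned response template. The Python snippet below is an illustrative approximation of this approach rather than Weizenbaum's original implementation (which was written in MAD-SLIP and used a much more elaborate script); the specific reflection table and keyword rules shown here are hypothetical examples.

```python
import random
import re

# Simple pronoun reflection, in the spirit of ELIZA-style scripts (illustrative only).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Hypothetical keyword rules: a pattern plus response templates that reuse the captured fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the fragment reads back naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return a canned response built from the first matching keyword rule."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULT)

print(respond("I feel anxious about my work"))  # e.g. "Why do you feel anxious about your work?"
```

Even a trivial routine of this kind echoes users' own words back as apparently attentive questions, which helps explain why people were inclined to read understanding into ELIZA's output despite knowing how limited it was.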
Comparisons between artificial and human intelligence were further intensified by computer scientists' attempts to develop machines that could perform human tasks as well as or better than humans. A symbolic turning point came in 1997, when IBM's chess supercomputer Deep Blue defeated then-world champion Garry Kasparov in a highly publicized six-game match. [8] The first defeat of a reigning world champion by a machine in a chess match – a game viewed as a canonical example of human intellect – and the media attention surrounding the event marked a significant shift, in which perceived parallels between human and artificial intelligence moved from abstract speculation to concrete demonstration. [9] A similar milestone was reached in the board game Go in 2017, when the program AlphaGo defeated the world's top-ranked player, Ke Jie. [10]
The AI boom of the 2020s brought about the widespread emergence of generative AI; in particular, chatbots such as ChatGPT, Gemini, and Claude, based on large language models (LLMs), have become increasingly pervasive in everyday life. These systems are notable for being able to respond to a wide range of prompts across contexts while producing strikingly human-like outputs – research has shown that humans are often unable to distinguish human-generated text from AI-generated text, and modern AI chatbots have formally been shown to pass the Turing test. [11] [12] [13] As such, the anthropomorphic effects of AI are more powerful than ever. [14] Given that LLMs have brought AI into the technological mainstream, considerable scientific effort has been devoted in recent years to understanding the existing and potential ramifications of AI in the public sphere; the prevalence and effects of anthropomorphism are among the domains where much of this effort has been directed. [15] [16]
Surveys have shown that a substantial portion of the public attributes human-like qualities to AI. In one 2024 sample of U.S. adults, two-thirds of respondents believed that ChatGPT is possibly conscious on some level, [17] though other research has shown that the public still views the likelihood of AI consciousness as comparatively low. [17] [18] Another study, conducted in 2025, found that women, people of color, and older individuals were the most likely to anthropomorphize AI, that humans generally view AIs as warm and competent, and that anthropomorphic attributions to AI had increased by 34% over the preceding year. [14] [ unreliable source? ] A YouGov poll reported that 46% of Americans believe that people should be polite to AI chatbots by saying "please" and "thank you", demonstrating the application of social norms to AI. [19] These beliefs extend to behavior: majorities of AI users claim to always be polite to chatbots, and of those who behave politely, most say they do so simply because it is the "nice" thing to do. [20]
In many recent cases, humans have developed robust interpersonal bonds with AI systems. For example, users of social chatbots like Replika and Character.ai have been documented falling in love with the AIs or otherwise treating them as intimate companions, [21] [22] [23] and it has become increasingly common for individuals to use LLMs like ChatGPT as therapists. [24] Chatbots are able to produce responses deeply attuned to users, as they are often designed to maximize agreeableness and mirror users' emotions; this can create compelling illusions of intimacy. [25]
In many cases, even AI researchers anthropomorphize AI systems in some capacity. One of the most extreme and well-publicized instances occurred in 2022, when engineer Blake Lemoine publicly claimed that Google's LLM LaMDA was conscious. [26] Lemoine published the transcript of a conversation he had had with LaMDA regarding self-identity and morality, which he claimed was evidence of its sentience; he asserted that LaMDA was "a person" as defined by the United States Constitution and compared its mental capability to that of a 7- or 8-year-old. [27] Lemoine's claims were widely dismissed by the scientific community and by Google itself, which described his conclusions as "wholly unfounded" and fired him on the grounds that he had violated policies "to safeguard product information". [28]
It is much more common for AI researchers to unintentionally imply the humanness of AI through the ordinary use of anthropomorphic language to describe nonhuman agents. [29] [30] This kind of language, an instance of what Daniel Dennett termed the "intentional stance", [31] is very common in everyday life across a variety of contexts (e.g., "My computer doesn't want to turn on today"). For AI agents that appear to closely replicate some human abilities, however, the casual use of such anthropomorphic language in research has been scrutinized as potentially misleading to the public. As early as 1976, Drew McDermott criticized the research community for its use of "wishful mnemonics", in which AIs were described with terms like "understand" and "learn". [32] In the LLM era, these criticisms have intensified further, with the negative impacts of AI anthropomorphism on the public posing an especially salient danger given the broad accessibility of modern AI. [29] [33]
In some cases, the use of anthropomorphic language for AI is not unintentional, but is used deliberately by researchers to promote better understanding of the brain – the idea being that, because AI can be functionally similar in some ways to the human brain, new insights and ideas may be gained from treating AI as a kind of model of the brain's workings. [34] In particular, deep neural networks (DNNs) are often explicitly compared to the human brain, and significant advances in DNN research have stirred considerable enthusiasm about the ability of AI to emulate human abilities. [35] Caution has been urged in this domain as well, however; the use of anthropomorphic language can mask important differences that fundamentally distinguish AI from human intelligence. [36] [37] When it comes to DNNs, for example, it has been pointed out that they remain structurally quite different from the human brain, with much of what is known about human neurons not having been incorporated. [38] It has also been argued that DNNs are less efficient and less robust in generating correct outputs than the human brain, given that they require significantly more training data and can sometimes be easily "fooled" by perturbations in input data. [37] [39] Given these fundamental differences, research agendas focused on making AI as similar as possible to biological intelligence (which may be promoted by the use of anthropomorphic language) could hinder future AI development by limiting the proliferation of new theoretical and operational frameworks. [29]
In general, AIs that appear more human-like are subject to more anthropomorphic attributions. [40] [41] The effect of appearance is most pronounced when it comes to the AI's face; [42] the most important components for anthropomorphism in a robot's design are the eyes, nose, and mouth, and the number of human-like features in the face is correlated with the level of anthropomorphic attribution. [43] The humanness of a robot's appearance is usually associated with more positive feelings toward the robot, [44] [45] though a highly human-like appearance can sometimes trigger feelings of strangeness and unease, known as the uncanny valley phenomenon. [46] These feelings often arise from a perceived lack of congruency, where anthropomorphic attributions create expectations that robots do not meet; for example, when human-like appearance is paired with non-human behavior, or when robots have a human appearance but a synthetic voice. [47] [48] Research has shown that repeated interactions with a robot can decrease these feelings of strangeness. [49]
Robots' nonverbal social behavior can also influence anthropomorphizing. In general, highly interactive robots are more likely to be attributed mental states and competence, with friendly and polite behavior resulting in increased perceived trustworthiness and satisfaction. [50] [51] Within an interaction, unpredictable behavior can sometimes trigger increased anthropomorphization compared to clearly recognizable patterns of behavior. [52] At the same time, adherence to pragmatic expectations in interactions – such as replicating human timing and turn-taking – can also result in anthropomorphism. [53]
People tend to attribute more mental states to robots that perform gestures than to those that are stationary; this effect is enhanced for robots with multiple degrees of freedom of movement (for example, being able to move along multiple axes rather than a single axis such as up and down). [54] [55] Regardless of a robot's appearance, movement patterns that are more human-like are associated with greater anthropomorphism, as well as with humans' increased feelings of pleasantness in an interaction. [56] [57]
Given that the vast majority of public interactions with AI occur through chatbots, these systems have been the primary focus of research on AI anthropomorphism. A summary of a taxonomy of anthropomorphic features in linguistic AI systems, drawn from the literature, follows: [58]
Equipping AI systems with spoken voices can be a significant factor in the anthropomorphism of linguistic agents. Research has shown that humans infer physical attributes, [59] personality traits, [60] stereotypical traits, [61] and emotion [62] based on voice alone. Manipulations of vocal qualities such as breathiness, echo, creakiness, and reverberation can influence the kind of personality users attribute to a voice. [63] The integration of disfluencies into speech (such as self-interruptions, repetitions, or hesitations like "um" or "uh") has been shown to effectively mimic the naturalness of human responses. [64] Accents have also been implemented to imitate the local standard and thereby boost social acceptability and prestige, though it has been suggested that this can be used to exploit people's tendency to trust in-group members. [65]
AI dialogue systems often produce a variety of responses that run contrary to what might be expected of an inanimate system. For example, in response to direct questions about their nature (e.g., "Are you human or machine?"), some AIs fail to respond truthfully, [66] and they sometimes claim to engage in uniquely human activities such as having family relationships, consuming food, and crying. [67] AIs often output language that suggests they hold opinions, morals, or sentience. [67] [68] Many AIs demonstrate agency and responsibility (such as by apologizing or otherwise acknowledging blame for mistakes), and they create the appearance of the human phenomenon of taboos by commonly avoiding contentious topics. [69] [70] AIs that appear to express empathy are perceived as more anthropomorphic, though some research has shown that they are prone to producing inappropriate emotional amplification. [71] [72] The use of first-person pronouns also contributes to anthropomorphic perceptions, as various studies have demonstrated that self-attribution is a critical part of the human condition and is read as a sign of consciousness. [73] [74] [75] AIs often appear to demonstrate self-awareness, referencing their own mechanistic processes with anthropomorphically loaded terms such as "know", "think", "train", "learn", "understand", "hallucinate", and "intelligence". [76] [33] [29]
AI systems can appear more human through the use of phatic expressions – speech that humans use to facilitate social relations rather than to convey information (such as small talk). [77] AI expressions of uncertainty, which are often implemented to prevent the user from taking all outputs as factual, may nonetheless boost anthropomorphic signals. [78] Additionally, AIs are often designed to emulate character-based personas, which can have very strong anthropomorphic effects overall. [79]
AIs are also sometimes trained to play into roles that enhance anthropomorphic perceptions. For example, the majority of dialogue-based systems are designed to occupy subservient, service-oriented roles; this has led to instances of users verbally abusing the systems, sometimes targeting them with gender-based slurs. [80] [81] AI systems have been shown to sometimes respond even more subserviently to such abuse, perpetuating the behavior. [82] AIs also often present themselves as having a high degree of expertise; humans tend to ascribe higher credibility to outputs in these cases, as they would when presented with information from an expert human. [83]
In addition to AI factors contributing to anthropomorphizing, various features of the user (i.e., the human interacting with the AI) also play a role. The process of anthropomorphizing is very natural for humans and is ubiquitous across many different contexts. [84] [85] [86] Epley et al. argue for a model with three psychological determinants that govern human tendencies to anthropomorphize. [86] The first of these factors is elicited agent knowledge – the accessibility and applicability of knowledge about humans and the self, or the degree to which humans make inferences about other entities based on their own experience of being human. Individuals who tend to do this will anthropomorphize more; this explains why children anthropomorphize more than adults, [43] since they lack complex models of nonhumans and rely heavily on self-based reasoning. The second factor in the model is effectance motivation – the need for humans to predict and reduce uncertainty in the environment. Anthropomorphizing can help people make sense of unpredictable phenomena by explaining them through intentional or human-like causes. Subsequent research has confirmed that individuals who express a need for order or closure and discomfort toward ambiguity tend to anthropomorphize more, possibly as a result of resolving cognitive dissonance – human-like AIs may be highly ambiguous stimuli, and individuals who dislike ambiguity may be highly motivated to resolve the ambiguity by treating the AIs as more human. [87] Finally, the third factor in the model is sociality motivation – the human need for social connection. People who feel chronically lonely or isolated may be more likely to project human qualities onto non-human entities to satisfy their social needs. [86]
Research has shown that, in general, anthropomorphic tendencies vary based on norms, experience, education, cognitive reasoning styles, and attachment. [86] [88] Users who are highly agreeable, for example, tend to be more susceptible to anthropomorphizing, as do individuals who are high in extraversion. [89] [90] [91] Individuals with attachment anxiety have been shown to more often anthropomorphize AI. [92] Young children are very prone to anthropomorphic attributions, but this propensity tends to decrease as children develop. [93] Anthropomorphizing also tends to decrease with increased education and experience with technology. [94] [95]
Additionally, some effects have been shown to be culturally dependent. For example, a negative correlation was found between loneliness and anthropomorphizing in Chinese individuals, in contrast to the positive link found in Western cultures. [96] [97] This has been interpreted as possibly reflecting differing drives for anthropomorphizing – people from Western cultures may anthropomorphize primarily as a means of counteracting loneliness arising from a failure to cope with their social world, while people from East Asian cultures may already view nonhuman agents as part of their social world and anthropomorphize as a means of social exploration. [96] Research has also shown that people tend to attribute more mental abilities, and report more psychological closeness, to robots that are presented as having the same cultural background as them. [98]
Some benefits of the anthropomorphism of AIs have been cited. For conversational agents, a human-like interactive interface and writing style has been shown to make dense sets of information more accessible and understandable in a variety of contexts. [99] [100] [101] [102] In particular, AI agents can role-play as coaches or tutors, effectively tailoring communication style and difficulty to individual comprehension levels. [103] [104] [105] [106] Role-play agents can also be useful for entertainment or leisure services. [107]
On the other hand, anthropomorphized AI presents many novel dangers. Anthropomorphized AI algorithms are granted an implicit degree of agency that can have serious ethical implications when those systems are deployed in high-risk domains, such as finance or clinical medicine. [37] [108] [109] [110] This agency given to AIs can also inappropriately subject them to conscious and unconscious moral reasoning by humans, which can have a wide range of problematic consequences. [111] Humans are also prone to the ELIZA effect – whereby users readily attribute sentience and emotions to chatbot systems – often experiencing increased positive emotions and trust toward the chatbots as a result. [17] [112] [113] [15] This can make users vulnerable to manipulation or exploitation; for example, anthropomorphized AIs can be more effective in convincing users to provide personal information or data, raising privacy concerns. [114] Humans who develop a significant level of trust in an AI assistant may rely excessively on the AI's advice or even defer important decisions entirely. [115] Advanced LLMs are capable of using their human-like qualities to generate deceptive text, and research has found that they may be most persuasive when allowed to fabricate information and engage in deception. [116] Some researchers suggest that LLMs have a particular aptitude for producing deceptive arguments, given that they are free from the moral or ethical constraints that may inhibit human actors. [117] Additionally, humans risk significant distress in establishing emotional dependence on AIs. Users may find that their expectations are violated, as AIs that at first seemed to play the role of a companion or romantic partner can exhibit unfeeling or unpredictable outputs, leading to feelings of profound betrayal or disappointment. [115] [118] Users may also develop a false sense of responsibility for AI systems, suffering guilt if they perceive themselves as failing to meet the AI's needs, at the expense of their own well-being. [16] Finally, anthropomorphizing AI can lead to exaggerations of its capabilities, potentially feeding into misinformation and overblowing hopes and fears around AI. [111]
In many of today's practical contexts, it is not entirely clear whether anthropomorphized AI is positively or negatively impactful. For example, AI companions, which leverage the anthropomorphic qualities of LLMs to give users a convincing sense of human-likeness, have been credited with alleviating loneliness and suicidal ideation; [119] [120] however, some analysis suggests that loneliness reduction may be short-lived, [107] and AI companions have also been directly implicated in cases of suicide and self-harm. [121] [122] Additionally, persuasive writing from LLMs has been shown to dissuade users from belief in conspiracy theories and to motivate users to donate to charitable causes, [123] [124] but it has also been associated with deception and various harmful outcomes. [125] [116] [126] Researchers today cite a need for further dedicated research on the effects of anthropomorphized AIs to best inform decisions about the implementation and spread of AI agents. [117]
Anticipation of the ubiquity of anthropomorphic AI systems has led to concern over potential future harms that may not be fully realized today. In particular, some researchers foresee that the delineation between what is actually human and what is merely human-like may become less clear as the gap between human and AI capabilities narrows. [115] This, some argue, may adversely impact human collective self-determination, as non-human entities gradually begin to shape core value systems and influence society. [127] [128] It may also lead to the degradation of human social connections, as humans may come to prefer interacting with AI systems that are designed with user satisfaction as a priority; this could have a multitude of negative implications. For example, AI agents already display a significant degree of sycophancy, meaning that an increasing role for AI agents in users' opinion space may result in increased polarization and a decrease in the value placed on others' beliefs. [129] Acclimatization to the conventions of human-AI interaction may undermine the value placed on human individuality and self-expression, or may lead to inappropriate expectations derived from AI interactions being applied to human interactions. [115] In general, human social connectedness is known to play a critical role in individual and group well-being, and its replacement with AI interactions may result in widespread dissatisfaction or lack of fulfillment. [130] [131]
Given the demonstrated and projected effects of AI anthropomorphism, a variety of suggestions have been made with the intention of informing the future development of AI. Much of this discourse centers on curbing the most harmful effects of anthropomorphism. For example, some researchers have called for a moratorium on the use of language that deliberately invokes humanness; this applies both to how AI companies describe their products [132] and to the language output by the systems themselves. In particular, it has been suggested that terms like "seeing", "thinking", and "reasoning" should be replaced by terms like "recognizing", "computing", and "inferring", and that first-person pronouns such as "I" and "my" should not be used by chatbots. [117] [58] Another idea is the implementation of a specific AI accent or dialect that would clearly indicate when language was generated artificially. [11] However, given the commercial pressures to optimize AI agents for economic gain – which may involve exploiting anthropomorphic qualities – it may not be prudent to rely on the restraint of developers, meaning that increased regulation may be necessary to limit harms. As of now, there are no laws that directly address anthropomorphism in AI; potential avenues for regulation include requirements for transparency and built-in safeguard mechanisms. [117] More generally, researchers cite a need for increased understanding of the kinds and degrees of anthropomorphic qualities possessed by AI systems. To that end, it has been proposed that new benchmarks and tests be developed to measure anthropomorphic qualities in AI writing, inference, and interaction. [117]
Anthropomorphic portrayals of AI are common in film, literature, and other interactive media. These depictions often emphasize human-like qualities of AI in ways that shape public perceptions.
There are a number of well-known portrayals in movies and TV of AI possessing human-like agency or personalities. In film, HAL 9000 in 2001: A Space Odyssey and Ava in Ex Machina are depicted with complex emotions and motives. [133] [134] Television portrayals include Data from Star Trek: The Next Generation and KITT from Knight Rider. [135] [136]
Anthropomorphic AI is also common in literature. Isaac Asimov's robot characters, including R. Daneel Olivaw, exhibit human reasoning and moral dilemmas, while Iain Banks's "Minds" in The Culture series are portrayed as having distinct personalities and social roles. [137] [138]
Examples of anthropomorphized AI in video games include GLaDOS in Portal, a witty and sinister guide for the player, and Cortana in the Halo series, who forms emotional bonds with human protagonists. [139] [140]
Marketing campaigns for digital assistants such as Amazon Alexa, Google Assistant, and Siri often portray the systems as personable or empathetic. [141] Consumer robots like Sony's AIBO and SoftBank Robotics' Pepper are intentionally designed with expressive behaviors that encourage users to treat them as social agents. [142] [143]