Computers are social actors (CASA) is a paradigm which states that humans unthinkingly apply the same social heuristics used for human interactions to computers, because computers display cues that call to mind social attributes similar to those of humans.[1][2][3]
In their 2000 article, Nass and Moon cite their observations of anthropomorphic reactions to computers, along with previous research on mindlessness, as the factors that led them to study the phenomenon of computers as social actors. Specifically, they observed individuals consistently treating computers anthropomorphically in natural and laboratory settings, even though these same individuals agreed that computers are not human and should not be treated as such.
Additionally, Nass and Moon found a similarity between this behavior and research by Harvard psychology professor Ellen Langer on mindlessness. According to Langer, mindlessness occurs when a specific context triggers an individual to rely on categories, associations, and habits of thought from the past with little or no conscious awareness. When these contexts are triggered, the individual becomes oblivious to novel or alternative aspects of the situation. In this respect, mindlessness is similar to habits and routines, but differs in that a single exposure to information can lead a person to form a cognitive commitment to it, freezing its potential meaning. With mindlessness, alternative meanings or uses of the information become unavailable for active cognitive use.[4][5]
Social attributes of computers that resemble those of humans include:
Words for output
Interactivity (the computer 'responds' when a button is touched)
Ability to perform traditional human tasks
According to CASA, the above attributes trigger scripts for human-human interaction, which leads an individual to ignore cues revealing the asocial nature of a computer. Although individuals using computers exhibit a mindless social response to the computer, individuals who are sensitive to the situation can observe the inappropriateness of the cued social behaviors.[6] CASA has been extended to include robots and AI.[7][8]
Attributes
Cued social behaviors observed in research settings include the following:
Gender stereotyping: When voice outputs are used on computers, this triggers gender-stereotype scripts, expectations, and attributions from individuals. For example, a 1997 study revealed that female-voiced tutor computers were rated as more informative about love and relationships than male-voiced computers, whereas male-voiced computers were rated as more proficient in technical subjects than female-voiced computers.[9]
Reciprocity: When a computer provides help, favors, or benefits, this triggers a mindless response in which the participant feels obliged to 'help' the computer in return. For example, an experiment in 1997 found that when a specific computer 'helped' a person, that person was more likely to do more 'work' for that computer.[10]
Specialist versus generalist: When a technology is labeled as 'specialist', this triggers a mindless response by influencing people's perceptions of the content the labeled technology presents. For example, a 2000 study revealed that when people watched a television labeled 'News Television', they thought the news segments on that TV were higher in quality, had more information, and were more interesting than did people who saw the identical information on a TV labeled 'News and Entertainment Television'.[11]
Personality: Computer users mindlessly attribute a personality to a computer based on verbal or paraverbal cues in its interface. For example, research from 1996 and 2001 found that people with dominant personalities preferred computers that also had a 'dominant personality'; that is, computers that used strong, assertive language during tasks.[12][13]
Academic research
Three research articles illustrate some of the advances in the field of CASA. Researchers in this field are examining how novel variables, manipulations, and new computer software influence mindlessness.
A 2010 article, "Cognitive load on social response to computers" by E.J. Lee, discussed research on how the human likeness of a computer interface, individuals' rationality, and cognitive load moderate the extent to which people apply social attributes to computers. The research revealed that participants were more socially attracted to a computer that flattered them than to one that made generic comments, but they became more suspicious about the validity of the flattering computer's claims and more likely to dismiss its answers. These negative effects disappeared when participants simultaneously engaged in a secondary task.[14]
A 2011 study, "Computer emotion – impacts on trust" by Dimitrios Antos, Celso De Melo, Jonathan Gratch, and Barbara Grosz investigated whether computer agents can use the expression of emotion to influence human perceptions of trustworthiness in the context of a negotiation activity followed by a trust activity. They found that computer agents displaying emotions congruent with their actions were preferred as partners in the trust game over computer agents whose emotion expressions and actions did not match. They also found that when emotion did not carry useful new information, it did not strongly influence human decision-making behavior in a negotiation setting.[15]
A 2011 study "Cloud computing – reexamination of CASA" by Hong and Sundar found that when people are in a cloud computing environment, they shift their source orientation—that is, users evaluate the system by focusing on service providers over the internet, instead of the machines in front of them. Hong and Sundar concluded their study by stating, "if individuals no longer respond socially to computers in clouds, there will need to be a fundamental re-examination of the mindless social response of humans to computers."[16]
One example of how CASA research can affect consumer behavior and attitudes is Moon's experiment, which tested the application of the principles of reciprocity and disclosure in a consumer context. Moon tested these principles with intimate self-disclosure of high-risk information (disclosure that makes the person feel vulnerable) to a computer, and observed how that disclosure affected future attitudes and behaviors. Participants interacted with a computer that questioned them using reciprocal wording and gradually revealed intimate information; participants then completed a puzzle on paper, after which half the group returned to the same computer and the other half went to a different computer. Both groups were shown 20 products and asked whether they would purchase them. Participants who used the same computer throughout the experiment had a higher purchase-likelihood score and a higher attraction score toward the computer in the product presentation than participants who did not.[17] Studies also show that CASA can be applied to virtual influencers: virtual influencers with a human-like appearance achieve higher message credibility than anime-like virtual influencers.[18]
Developments and Challenges
Recently, there have been challenges to the CASA paradigm.[19][20] One response to these challenges has been the Media Are Social Actors (MASA) paradigm, which accounts for advances in technology and has been put forward as a significant extension of CASA.[21] The challenges have also prompted recent re-examinations of the CASA paradigm. Kaptelinin and Dalli (2025) acknowledge CASA's foundational role in explaining social responses to technology, but propose a nonessentialist perspective that emphasizes contextual framing. Contextual framing describes how people's social perceptions of technological artifacts emerge from how they experience these technologies within meaningful contexts, rather than from automatic or universal social reactions.[22]
Some researchers have sought to demonstrate that the CASA effect no longer applies: after replicating early CASA experiments 30 years later, they found that people no longer reacted to desktop computers with the same human-human social behaviors that they did in the 1990s.[23] Contextual changes, such as societal changes, increased exposure to specific technologies, and changes in the perceived agency of specific technologies, may help explain these discrepancies between the experiments of the 1990s and those of the 2020s.[23] This contributes to arguments for updating CASA to reflect such contextual changes in human-technology dynamics.[19] It is argued that social scripts for interacting with technological agents have changed, in part due to increased exposure to technologies, and that people are now adopting new, unique social scripts for interacting with them.[19] Early CASA research suggested that people would apply human-human social behaviors when interacting with technologies.[19][2] However, because human-machine interactions change (for example, with exposure to a specific technology over time) and often occur in very different contexts from human-human ones, they are experienced in correspondingly different ways.[19][22] This does not necessarily mean that the interaction between a human and a machine cannot be social, but it is often very different because of the difference in context.
These developments highlight the importance of studying how social meaning arises situationally, through people's lived experiences with technology in specific contexts. A nonessentialist approach to CASA would reframe sociality in human-technology interaction as emergent and context-dependent: the same artifact may be perceived as a social actor or as a mere tool depending on how it is framed within an activity, an environment, a person's expectations, and a person's accumulated experiences with it.[22] A technological artifact often influences the overall interaction context in a profoundly different way than a human would in the same situation, so the experience of the interaction differs as well; a social (or non-social) interaction emerges alongside other contextual factors. To some degree, a nonessentialist view remains in line with CASA at its core, since human-human interactions are themselves context-dependent. For example, a person may experience a loud, demanding person as rude in one context but as helpful in another (coordinating the exit of a burning building versus yelling at a video game they just lost). Humans can sometimes even treat each other in instrumental ways.
References
Reeves, B., & Nass, C. I. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford, CA; New York: CSLI Publications; Cambridge University Press.
Nass, C., Moon, Y., & Green, N. (1997). Are machines gender neutral? Gender-stereotypic responses to computers with voices. Journal of Applied Social Psychology, 27: 864–876.
Fogg, B. J., & Nass, C. I. (1997). How users reciprocate to computers: An experiment that demonstrates behavior change. CHI Extended Abstracts. New York: ACM Press.
Moon, Y., & Nass, C. (1996). How "real" are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23(6): 651–674.
Hong, S., & Sundar, S. S. (2011). Social responses to computers in cloud computing environment: The importance of source orientation. CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. New York: ACM.