An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI). [1]
The term "echoborg" was coined by social psychologists Kevin Corti and Alex Gillespie, whose research at the London School of Economics explored unscripted face-to-face social encounters between research participants and confederates whose words were covertly supplied by rudimentary AIs known as “chat bots" and vocalized via speech shadowing. [2] [3] The idea is derivative of the cyranoid concept that originated with Stanley Milgram. [4] [5] [6]
The "echoborg method" allows researchers to investigate how people behave and make attributions toward an AI (or, more precisely, a human-AI "hybrid") when their psychological state is fully primed for human-human interaction. Other forms of human-AI interaction (e.g., computer-mediated conversation) involve a machine interface, an anthropomorphic analog, or a virtual reality layer through which a person communicates with an AI, and these forms of mediation fundamentally alter the intersubjective relationship between the human and artificial agents party to an interaction. [7]
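To make the method's mechanics concrete, below is a minimal sketch of the relay loop an echoborg setup implies, with hypothetical helpers: bot_reply stands in for whichever chatbot backend supplies the words, and prompt_shadower for the covert channel (e.g., an earpiece feed) that delivers those words to the human shadower, who repeats them aloud to the participant. This is an illustration under assumed names, not the apparatus used by Corti and Gillespie.

```python
# A minimal sketch of an echoborg relay loop (illustrative only; names and
# behavior are assumptions, not the original researchers' apparatus).

def bot_reply(utterance: str) -> str:
    """Placeholder chatbot: a real study would call an actual conversational agent."""
    return "Could you tell me more about that?"

def prompt_shadower(line: str) -> None:
    """Deliver the bot's words to the shadower; here we simply print them."""
    print(f"[to shadower's earpiece] {line}")

def relay_session() -> None:
    """An operator types what the participant says; the shadower then speaks
    the bot's reply aloud. An empty line ends the session."""
    while True:
        heard = input("participant> ").strip()
        if not heard:
            break
        prompt_shadower(bot_reply(heard))

if __name__ == "__main__":
    relay_session()
```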
The echoborg concept has been explored in performance art as commentary on the increasing ubiquity of AI and its contribution to human culture, as well as on people's dependency on various types of AI (e.g., GPS navigation systems) for carrying out mundane social tasks. [8] [9]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems, as opposed to the natural intelligence of living beings. As a field of research in computer science that focuses on the automation of intelligent behavior (for example, through machine learning), it develops and studies methods and software that enable machines to perceive their environment and take actions that maximize their chances of achieving defined goals, with the aim of performing tasks typically associated with human intelligence. Such machines may be called AIs.
ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation using a pattern matching and substitution methodology that gave users an illusion of understanding on the program's part, although it had no representation of meaning and could not be said to understand what either party was saying. Whereas the ELIZA program itself was originally written in MAD-SLIP, the pattern-matching directives that contained most of its language capability were provided in separate "scripts" expressed in a Lisp-like notation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school and used rules dictated in the script to respond to user inputs with non-directional questions. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
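As an illustration of the pattern-matching-and-substitution approach described above, here is a minimal ELIZA-style sketch in Python; the rules are invented for the example and are far simpler than the DOCTOR script, but they show the core mechanism: match the input against templates, reflect pronouns, and echo fragments back as questions.

```python
import re

# Toy ELIZA-style rules (hypothetical, not from Weizenbaum's scripts): each
# rule pairs a regex with a response template; captured fragments are
# pronoun-reflected and substituted back into the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)", re.I), "Please go on."),  # fallback keeps the dialogue moving
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with reflected fragments."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am afraid of computers"))  # -> How long have you been afraid of computers?
```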
Stanley Milgram was an American social psychologist, best known for his controversial experiments on obedience conducted in the 1960s during his professorship at Yale.
A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
Conversation is interactive communication between two or more people. The development of conversational skills and etiquette is an important part of socialization. The development of conversational skills in a new language is a frequent focus of language teaching and learning. Conversation analysis is a branch of sociology which studies the structure and organization of human interaction, with a more specific focus on conversational interaction.
Artificial human companions may be any kind of hardware or software creation designed to give companionship to a person. These can include digital pets, such as the popular Tamagotchi, or robots, such as the Sony AIBO. Virtual companions can be used as a form of entertainment, or they can be medical or functional, to assist the elderly in maintaining an acceptable standard of life.
Customer service is the assistance and advice provided by a company to those people who buy or use its products or services. Each industry requires different levels of customer service, but in the end, the idea of a well-performed service is that of increasing revenue. The perceived success of customer service interactions depends on employees "who can adjust themselves to the personality of the customer". Customer service is often practiced in a way that reflects the strategies and values of a firm. Good quality customer service is usually measured through customer retention. For some firms, customer service is part of the firm's intangible assets and can differentiate it from others in the industry. One good customer service experience can change the entire perception a customer holds toward the organization.
NICE is an American technology company specializing in customer relations management software, artificial intelligence, digital and workforce engagement management. The company serves various industries, such as financial services, telecommunications, healthcare, outsourcers, retail, media, travel, service providers, and utilities.
In philosophy, psychology, sociology, and anthropology, intersubjectivity is the relation or intersection between people's cognitive perspectives.
In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.
Cyranoids are "people who do not speak thoughts originating in their own central nervous system: Rather, the words they speak originate in the mind of another person who transmits these words to the cyranoid by radio transmission".
Justine M. Cassell is an American professor and researcher interested in human-human conversation, human-computer interaction, and storytelling. Since August 2010, she has been on the faculty of the Carnegie Mellon Human Computer Interaction Institute (HCII) and the Language Technologies Institute, with courtesy appointments in Psychology, and the Center for Neural Bases of Cognition. Cassell has served as the chair of the HCII, as associate vice-provost, and as Associate Dean of Technology Strategy and Impact for the School of Computer Science. She currently divides her time between Carnegie Mellon, where she now holds the Dean's Professorship in Language Technologies, and PRAIRIE, the Paris Institute on Interdisciplinary Research in AI, where she also holds the position of senior researcher at Inria Paris.
A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may occur via text, a graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
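As a concrete illustration of the protocol just described, here is a toy, text-only imitation-game harness in Python; every name in it (machine_reply, human_reply, run_game) is invented for the sketch, and the stand-in machine is trivially easy to unmask, but the structure mirrors the test: an evaluator questions two hidden partners over text and must say which one is the machine.

```python
import random

# A toy imitation-game harness (an illustrative sketch, not Turing's full protocol).

def machine_reply(utterance: str) -> str:
    """Stand-in for the machine under test; a real run would use a chatbot."""
    return "That is an interesting question."

def human_reply(utterance: str) -> str:
    """Stand-in for the hidden human; here a second person types at the console."""
    return input("  (hidden human types) > ")

def run_game(rounds: int = 3) -> None:
    # Randomly hide which label is the machine.
    partners = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        partners = {"A": human_reply, "B": machine_reply}
    for _ in range(rounds):
        question = input("evaluator asks both> ")
        for label, reply in partners.items():
            print(f"  {label}: {reply(question)}")
    guess = input("Which is the machine, A or B? ").strip().upper()
    actual = "A" if partners["A"] is machine_reply else "B"
    print("Correct." if guess == actual else f"Wrong: the machine was {actual}.")

if __name__ == "__main__":
    run_game()
```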
Virtual intelligence (VI) is the term given to artificial intelligence that exists within a virtual world. Many virtual worlds have options for persistent avatars that provide information, training, role-playing, and social interactions.
Eric Joel Horvitz is an American computer scientist and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.
Computers are social actors (CASA) is a paradigm which states that humans unthinkingly apply the same social heuristics used for human interactions to computers, because computers call to mind social attributes similar to those of humans.
Emotion recognition is the process of identifying human emotion. People vary widely in their accuracy at recognizing the emotions of others. Using technology to help people with emotion recognition is a relatively nascent research area. Generally, the technology works best when it uses multiple modalities in context. To date, most work has focused on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from text, and physiology as measured by wearables.
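One common way to use multiple modalities is late fusion, sketched below under assumed inputs: each modality produces a probability distribution over emotion labels, and the fused estimate is a weighted average of those distributions. The labels, scores, and weights here are all invented for the example, not drawn from any particular system.

```python
import numpy as np

# A minimal late-fusion sketch (illustrative; not any specific system).
LABELS = ["anger", "joy", "sadness", "surprise"]

def fuse(scores: dict[str, np.ndarray], weights: dict[str, float]) -> str:
    """Weighted average of per-modality probability vectors; returns the top label."""
    fused = sum(weights[m] * scores[m] for m in scores)
    fused = fused / sum(weights[m] for m in scores)
    return LABELS[int(np.argmax(fused))]

# Hypothetical per-modality outputs for one moment of an interaction.
scores = {
    "face": np.array([0.10, 0.60, 0.20, 0.10]),   # from video
    "voice": np.array([0.05, 0.70, 0.15, 0.10]),  # from audio
    "text": np.array([0.20, 0.40, 0.30, 0.10]),   # from a transcript
}
weights = {"face": 0.4, "voice": 0.4, "text": 0.2}
print(fuse(scores, weights))  # -> joy
```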
Anne Harper Anderson OBE FRSE is a former University of Glasgow Vice Principal and Head of the College of Social Sciences, and Gender Champion, specialising in communications including machine-human interaction. She served on the Engineering and Physical Sciences Research Council (EPSRC), which allocated £800 million per annum for research. She was awarded an OBE for services to social science (2002) and was elected as a Fellow of the Royal Society of Edinburgh (2015).
Alex Gillespie is a professor of Psychological and Behavioral Science at the London School of Economics and a researcher at Oslo New University.