| Year of creation | 2013 |
| --- | --- |
Nadine is a gynoid humanoid social robot modelled on Professor Nadia Magnenat Thalmann. [1] The robot has a strong human likeness, with natural-looking skin and hair and realistic hands. Nadine is a socially intelligent robot that returns greetings, makes eye contact, and remembers all the conversations it has had. It can answer questions autonomously in several languages and simulate emotions both in gestures and facial expressions, depending on the content of the interaction with the user. [2] [3] [4] Nadine can recognise people it has previously seen and engage in flowing conversation. [5] [6] [7] Nadine has been programmed with a "personality", in that its demeanour can change according to what is said to it. [8] Nadine has a total of 27 degrees of freedom for facial expressions and upper-body movements. With people it has previously encountered, it remembers facts and events related to each person. [9] [10] It can assist people with special needs by reading stories, showing images, setting up Skype sessions, sending emails, and communicating with other members of the family. [11] [12] [13] [14] It can play the role of a receptionist in an office or serve as a personal coach. [15] [16]
Nadine interacted with more than 100,000 visitors at the ArtScience Museum in Singapore during the exhibition "HUMAN+: The Future of Our Species", held from May to October 2017. [17] [18] [19] Nadine has also worked as a customer service agent at the AIA Insurance Company in Singapore, [20] [21] [22] the first time in the world that a humanoid robot was used as a customer service agent.
Nadine is a next-generation humanoid robot and a successor to Eva, [23] a humanoid robot head manufactured by Hanson Robotics in 2008. Eva's software platform was developed at MIRALab, [24] University of Geneva. Eva's head displays very realistic moods and emotions [25] and has a short-term memory. [26] [27] Eva has also performed in a play at the Rote Fabrik theatre in Zurich. [28]
Nadine was created in 2013 by Kokoro, Japan, and modelled after Professor Nadia Magnenat Thalmann. Nadine has a head and full body with a natural appearance. Nadine's software platform, developed at the Institute for Media Innovation at Singapore's Nanyang Technological University, is able to show emotions, speak naturally, understand some gestures, and remember and retrieve facts during dialogue sessions. [29] [30] Nadine also interacts with arm movements. Ongoing research aims to provide the social robot with two articulated hands and natural grasping. [31] [32] [33] [34] Nadine is also linked to various databases and data sources, such as its personal dataset, Wikipedia, weather channels, and many others.
Nadine is built on a classic three-layer framework of perception, processing/decision, and interaction. The platform is designed to maintain human-like, natural conduct even in complex situations, to be generic enough to handle any kind of data and place of operation, and to support multiple languages.
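A minimal sketch of such a three-layer pipeline is shown below. The class and method names are illustrative assumptions for explanation only, not the actual Nadine codebase.

```python
# Sketch of a perception -> processing -> interaction pipeline (illustrative only).

class PerceptionLayer:
    def sense(self):
        # Gather stimuli from cameras and microphone (stubbed here).
        return {"face": None, "speech": "", "gesture": None, "location": None}

class ProcessingLayer:
    def decide(self, stimuli):
        # Map perceived stimuli to a verbal/non-verbal response.
        if stimuli["speech"]:
            return {"say": "Hello!", "gesture": "wave"}
        return {"say": None, "gesture": "idle"}

class InteractionLayer:
    def act(self, response):
        # Drive speech synthesis and motors (stubbed as prints).
        if response["say"]:
            print("TTS:", response["say"])
        print("Motors:", response["gesture"])

def tick(perception, processing, interaction):
    # One cycle of the pipeline: sense, decide, act.
    interaction.act(processing.decide(perception.sense()))
```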
Nadine's functionalities are based on her understanding of the environment and her perception of the users in front of her; the perception layer is focused on this task. Nadine uses a 3D depth camera, a webcam and a microphone to pick up visual and audio input from her environment and users. The perception layer is composed of independent sub-modules that operate on the different input streams from these devices to recognise faces, [35] emotions, [13] gestures, [36] [37] user location, intention and comportment, as well as environmental attributes such as objects [38] [39] and their locations.
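The sketch below illustrates, under assumed module names, how independent perception sub-modules could each consume their own input stream and have their outputs merged into a single set of percepts.

```python
# Illustrative perception sub-modules; names and outputs are assumptions.

def recognize_face(frame):    return {"user_id": "unknown"}     # placeholder
def detect_emotion(frame):    return {"emotion": "neutral"}     # placeholder
def detect_gesture(depth):    return {"gesture": None}          # placeholder
def locate_user(depth):       return {"location": (0.0, 0.0)}   # placeholder

def perceive(webcam_frame, depth_frame):
    # Each sub-module runs independently on the relevant stream; the
    # results are merged into one dictionary of percepts.
    percepts = {}
    percepts.update(recognize_face(webcam_frame))
    percepts.update(detect_emotion(webcam_frame))
    percepts.update(detect_gesture(depth_frame))
    percepts.update(locate_user(depth_frame))
    return percepts
```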
The processing layer functions as Nadine's brain: it uses the perception outputs to gauge the situation and decide how to act. The main component of this layer is a behaviour tree planner, Nadine's central processing unit, which processes all perceived inputs. Based on the inputs received from the perception layer, the behaviour tree planner updates the other sub-modules of the processing layer, which handle the dialogue between the user and Nadine, the affective system, and memories of her interactions. To process dialogue, generic chatbots [40] [41] have been built to handle different situations and questions. An online search based on Google Assistant is also integrated to answer questions outside the trained corpus. Based on the user's speech and emotion and on Nadine's current emotion, Nadine can exhibit different human-like motions toward the user. [13] Nadine's memory model [42] also allows her to remember specific facts about the user and the context of the current conversation in order to provide appropriate responses. Once the user interaction and the environmental context are understood, an appropriate verbal or non-verbal response is decided. For this purpose, the processing layer maps each perception-layer stimulus to an activation and a threshold. As each sub-module processes each stimulus, the activation levels vary; when a threshold is reached, the winning action is passed on to the interaction layer, which shows the corresponding response in Nadine.
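A hedged sketch of this activation/threshold mechanism is given below. The action names, thresholds and update rule are assumptions for illustration; the published descriptions do not specify the actual values.

```python
# Illustrative activation/threshold action selection.

ACTIONS = {
    "greet":      {"activation": 0.0, "threshold": 0.8},
    "answer":     {"activation": 0.0, "threshold": 0.6},
    "track_gaze": {"activation": 0.0, "threshold": 0.3},
}

def update_activations(stimuli):
    # Each perceived stimulus raises the activation of the actions it maps to.
    if stimuli.get("new_face_detected"):
        ACTIONS["greet"]["activation"] += 0.5
    if stimuli.get("speech_detected"):
        ACTIONS["answer"]["activation"] += 0.7
    if stimuli.get("user_location") is not None:
        ACTIONS["track_gaze"]["activation"] += 0.4

def winning_actions():
    # Actions whose activation crosses the threshold are passed to the
    # interaction layer; their activation is reset after firing.
    fired = [name for name, a in ACTIONS.items()
             if a["activation"] >= a["threshold"]]
    for name in fired:
        ACTIONS[name]["activation"] = 0.0
    return fired
```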
The interaction layer, or Nadine controller, is responsible for executing each response received from the processing layer and showing it in Nadine's face or gestures. For example, based on the user's location, it moves Nadine's head to maintain eye contact with the user. The interaction layer is also responsible for controlling her motors to show different gestures and facial expressions. For verbal responses, it includes a speech synthesizer and a lip-synchronization module. Based on the verbal response, corresponding phonemes and visemes are generated. The speech synthesizer also takes into account the tone of the dialogue (to show various emotions) while generating speech. The lip-synchronization module converts the visemes into corresponding facial motor positions to move Nadine's lips in time with her speech. Currently, Nadine supports six languages: English, German, French, Chinese, Hindi and Japanese.
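The following sketch shows the general idea of viseme-to-motor lip synchronization. The viseme labels and motor values are invented for illustration and do not reflect Nadine's actual calibration.

```python
# Illustrative viseme -> lip-motor mapping for lip synchronization.

VISEME_TO_MOTOR = {
    "sil": {"jaw": 0.0, "lips": 0.0},   # silence
    "AA":  {"jaw": 0.8, "lips": 0.2},   # open vowel
    "OO":  {"jaw": 0.4, "lips": 0.9},   # rounded vowel
    "MM":  {"jaw": 0.0, "lips": 1.0},   # closed lips
}

def lip_sync(visemes_with_timing, send_to_motors):
    # visemes_with_timing: list of (viseme, start_time_s) pairs aligned
    # with the synthesized audio.
    for viseme, start in visemes_with_timing:
        target = VISEME_TO_MOTOR.get(viseme, VISEME_TO_MOTOR["sil"])
        send_to_motors(start, target)

# Example usage with a stub motor interface:
lip_sync([("MM", 0.00), ("AA", 0.12), ("sil", 0.30)],
         lambda at_time, positions: print(at_time, positions))
```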
Nadine has participated in live demos on stage and engaged with people from all walks of life. Proclaimed as one of the world's most realistic humanoid robots, [43] Nadine made her first public appearance as a key highlight of the "HUMAN+: The Future of Our Species" exhibition held at Singapore's ArtScience Museum. [44]
She has interacted with many people from companies across various industries, such as Dentsu Aegis Network (DAN), Credit Suisse [45] and Deutsche Bank. [46]
Nadine also interacted with the Prime Minister of India, Narendra Modi, during his visit to NTU Singapore on 1 June 2018; she was one of the innovations in which he took special interest. [47] [48]
Nadine has worked as a customer service agent at AIA Singapore. [20] [21] [22] She has been trained to handle questions typically asked of AIA customer service agents, and she encourages AIA customers to sign up with the AIA e-care registration portal. Customer service interactions were used to train a machine-learning based conversational dialogue engine. A client-server architecture was also set up between the Nadine platform and the AIA portal to allow fast and secure communication. [49]
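As a rough illustration of how past customer-service interactions could train such a dialogue engine, the sketch below uses a simple retrieval approach. The data, the use of scikit-learn and the matching strategy are assumptions; the actual AIA deployment is not publicly documented at this level of detail.

```python
# Minimal retrieval-style dialogue engine trained on Q/A pairs (illustrative).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("How do I register for the e-care portal?",
     "You can sign up at the e-care registration page."),
    ("What documents do I need to file a claim?",
     "You need your policy number and the claim form."),
]

questions = [q for q, _ in qa_pairs]
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def reply(user_question):
    # Return the answer whose stored question is most similar to the input.
    v = vectorizer.transform([user_question])
    best = cosine_similarity(v, question_vectors).argmax()
    return qa_pairs[best][1]

print(reply("How can I sign up for e-care?"))
```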
From late 2020 until April 2021, Nadine spent six months at Bright Hill Evergreen Home in Singapore, assisting elderly residents by playing bingo and interacting with them. With the approval of NTU's ethics committee, a thorough study was conducted, for the first time, on the interaction of the Nadine social robot with patients with mild dementia. [50]
An android is a humanoid robot or other artificial being often made from a flesh-like material. Historically, androids existed only in the domain of science fiction and were frequently seen in film and television, but advances in robot technology have allowed the design of functional and realistic humanoid robots.
A humanoid robot is a robot resembling the human body in shape. The design may be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body. Androids are humanoid robots built to aesthetically resemble humans.
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While some core ideas in the field may be traced as far back as early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing and her book Affective Computing published by MIT Press. One of the motivations for the research is the ability to give machines emotional intelligence, including the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.
A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role. Like other robots, a social robot is physically embodied. Some synthetic social agents are designed with a screen to represent the head or 'face' to dynamically communicate with users. In these cases, the status as a social robot depends on the form of the 'body' of the social agent; if the body has and uses some physical motors and sensor abilities, then the system could be considered a robot.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
Social affordance is a type of affordance. It refers to the properties of an object or environment that permit social actions. Social affordance is most often used in the context of a social technology such as Wiki, Chat and Facebook applications and refers to sociotechnical affordances. Social affordances emerge from the coupling between the behavioral and cognitive capacities of a given organism and the objective properties of its environment.
In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.
Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, psychology and philosophy. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.
A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes of communication on both the input and output channels.
Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human–computer interface (HCI)".
Sara Beth (Greene) Kiesler is the Hillman Professor Emerita of Computer Science and Human Computer Interaction in the Human-Computer Interaction Institute at Carnegie Mellon University. She is also a program director in the Directorate for Social, Behavioral & Economic Sciences at the US National Science Foundation, where her responsibilities include programs on Secure and Trustworthy Cyberspace, The Future of Work at the Human-Technology Frontier, Smart and Connected Communities, and Securing American Infrastructure. She received an M.A. degree in psychology from Stanford in 1963, and a Ph.D., also in psychology, from Ohio State University in 1965.
A pedagogical agent is a concept borrowed from computer science and artificial intelligence and applied to education, usually as part of an intelligent tutoring system (ITS). It is a simulated human-like interface between the learner and the content, in an educational environment. A pedagogical agent is designed to model the type of interactions between a student and another person. Mabanza and de Wet define it as "a character enacted by a computer that interacts with the user in a socially engaging manner". A pedagogical agent can be assigned different roles in the learning environment, such as tutor or co-learner, depending on the desired purpose of the agent. "A tutor agent plays the role of a teacher, while a co-learner agent plays the role of a learning companion".
Emotions in virtual communication are expressed and understood in a variety of different ways from those in face-to-face interactions. Virtual communication continues to evolve as technological advances emerge that give way to new possibilities in computer-mediated communication (CMC). The lack of typical auditory and visual cues associated with human emotion gives rise to alternative forms of emotional expression that are cohesive with many different virtual environments. Some environments provide only space for text based communication, where emotions can only be expressed using words. More newly developed forms of expression provide users the opportunity to portray their emotions using images.
Nadia Magnenat Thalmann is a computer graphics scientist and roboticist and is the founder and head of MIRALab at the University of Geneva. She chaired the Institute for Media Innovation at Nanyang Technological University (NTU), Singapore, from 2009 to 2021.
Prof. Daniel Thalmann is a Swiss and Canadian computer scientist and a pioneer in virtual humans. He is currently an Honorary Professor at EPFL, Switzerland, and Director of Research Development at MIRALab Sarl in Geneva, Switzerland.
Artificial empathy or computational empathy is the development of AI systems—such as companion robots or virtual agents—that can detect emotions and respond to them in an empathic way.
Joëlle Coutaz is a French computer scientist, specializing in human-computer interaction (HCI). Her career includes research in the fields of operating systems and HCI, as well as being a professor at the University of Grenoble. Coutaz is considered a pioneer in HCI in France, and in 2007, she was awarded membership to SIGCHI. She was also involved in organizing CHI conferences and was a member on the editorial board of ACM Transactions on Computer-Human Interaction.
Jodi L. Forlizzi is a professor and Geschke Director, as well as an interaction designer and researcher, at the Human-Computer Interaction Institute at Carnegie Mellon University. On August 29, 2022, Forlizzi was named a Herbert A. Simon Professor at Carnegie Mellon. Her research ranges from understanding the limits of human attention to understanding how products and services evoke social behavior. Current research interests include interaction design, assistive, social, and aesthetic technology projects and systems, and notification systems. In 2014, Forlizzi was inducted into the CHI Academy for her notable works and contributions to the field of human-computer interaction.
Aude G. Billard is a Swiss physicist in the fields of machine learning and human-robot interactions. As a full professor at the School of Engineering at Swiss Federal Institute of Technology in Lausanne (EPFL), Billard’s research focuses on applying machine learning to support robot learning through human guidance. Billard’s work on human-robot interactions has been recognized numerous times by the Institute of Electrical and Electronics Engineers (IEEE) and she currently holds a leadership position on the executive committee of the IEEE Robotics and Automation Society (RAS) as the vice president of publication activities.
A virtual human is a software-simulated fictional character or human being. Virtual humans have been created as tools and artificial companions in simulation, video games, film production, human factors and ergonomics and usability studies in various industries, the clothing industry, telecommunications (avatars), medicine, etc. These applications require domain-dependent simulation fidelity. A medical application might require an exact simulation of specific internal organs; the film industry requires the highest aesthetic standards, natural movements, and facial expressions; ergonomic studies require faithful body proportions for a particular population segment and realistic locomotion with constraints, etc.