Leonardo (robot)

Leonardo's body by Stan Winston Studios

Leonardo is a 2.5-foot social robot, the first [1] created by the Personal Robots Group of the Massachusetts Institute of Technology. Its development is credited to Cynthia Breazeal. The body was built by Stan Winston Studios, leaders in animatronics, [2] and was completed in 2002. [3] As of 2001, it was the most complex robot the studio had ever attempted. [4] Other contributors to the project include NevenVision, Inc., Toyota, NASA's Lyndon B. Johnson Space Center, and the Naval Research Laboratory. The project has been partially funded by a DARPA Mobile Autonomous Robot Software (MARS) grant, an Office of Naval Research Young Investigators Program grant, and the Digital Life and Things That Think consortia.

Leonardo was created to facilitate the study of human–robot interaction and collaboration. The MIT Media Lab Robotic Life Group, which also studied Robonaut 1, set out to create a more sophisticated social robot in Leonardo. They gave Leonardo a different visual tracking system and programs based on infant psychology that they hoped would make for better human–robot collaboration. One of the goals of the project was to make it possible for untrained humans to interact with and teach the robot much more quickly, with fewer repetitions. Leonardo was awarded a spot in Wired Magazine's "50 Best Robots Ever" list in 2006. [3]

Construction

Approximately sixty motors packed into the robot's small body make its expressive movement possible. The Personal Robots Group developed the motor control systems (with both 8-axis and 16-axis control packages) used for Leonardo. Leonardo does not resemble any real creature, but instead has the appearance of a fanciful being.[citation needed] Because it is a social robot, its face was designed to be expressive and communicative. The fanciful, purposely young look is meant to encourage humans to interact with it the same way they would with a child or pet. [4]
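
To make the multi-axis control packages concrete, here is a minimal, hypothetical sketch (in Python) of how roughly sixty motors might be grouped onto 8- and 16-axis control boards and commanded to a pose. The board names, axis assignments, and normalized position range are illustrative assumptions, not the group's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AxisBoard:
    """One motor-control board driving a fixed number of axes."""
    name: str
    num_axes: int
    positions: list = field(default_factory=list)

    def __post_init__(self):
        # Start every axis at its neutral (zero) position.
        self.positions = [0.0] * self.num_axes

    def command(self, axis: int, position: float) -> None:
        """Send one axis to a normalized position in [-1.0, 1.0]."""
        if not 0 <= axis < self.num_axes:
            raise IndexError(f"{self.name} has no axis {axis}")
        self.positions[axis] = max(-1.0, min(1.0, position))

# Roughly sixty motors split across hypothetical 16- and 8-axis boards.
boards = [AxisBoard("face-16", 16), AxisBoard("neck-8", 8),
          AxisBoard("arm-left-16", 16), AxisBoard("arm-right-16", 16)]

# Command a simple "raise eyebrows" pose on two face axes.
boards[0].command(2, 0.8)
boards[0].command(3, 0.8)
print(boards[0].positions[:4])  # [0.0, 0.0, 0.8, 0.8]
```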

A camera mounted in the robot's right eye captures faces. A facial feature tracker developed by the Neven Vision corporation isolates faces from the captured images. Whenever a person introduces themselves via speech, a buffer of up to 200 views of their face is used to create a model of that person. Additionally, Leonardo can track objects and faces visually using a collection of visual feature detectors covering color, skin tone, shape, and motion. [5]
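
The following is a hedged sketch of the buffered face-model idea described above: up to 200 recent views of a face are retained and bound to a name once the person introduces themselves by speech. The class and method names are assumptions for illustration; the Neven Vision tracker itself is not modeled.

```python
from collections import deque

class FaceModelBuffer:
    """Accumulate recent face views; bind them to a name on introduction."""

    def __init__(self, max_views: int = 200):
        self.views = deque(maxlen=max_views)  # oldest views drop automatically

    def add_view(self, face_crop) -> None:
        """Store one cropped face image from the eye camera."""
        self.views.append(face_crop)

    def enroll(self, spoken_name: str) -> dict:
        """Freeze the buffered views into a model for the named person."""
        return {"name": spoken_name, "views": list(self.views)}

buffer = FaceModelBuffer()
for frame in range(250):               # more frames than the buffer holds
    buffer.add_view(f"view-{frame}")   # stand-in for a cropped face image
model = buffer.enroll("Alice")
print(model["name"], len(model["views"]))  # Alice 200
```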

The group plans for Leonardo to have skin that can detect temperature, proximity, and pressure. To accomplish this, they are experimenting with force-sensing resistors and quantum tunnelling composites. The sensors are covered with a layer of silicone, like that used in makeup effects, to maintain the aesthetics of the robot. [6]
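
As a rough illustration of how a grid of force-sensing resistors under a silicone layer might be polled for touch events, consider the sketch below. The sensor interface and threshold are invented for the example and do not reflect the group's hardware.

```python
import random

def read_fsr(row: int, col: int) -> float:
    """Stand-in for an ADC read of one force-sensing resistor (0.0-1.0)."""
    return random.random()  # real hardware would sample a voltage divider

def touch_map(rows: int, cols: int, threshold: float = 0.7):
    """Return the grid cells whose pressure reading exceeds the threshold."""
    return [(r, c) for r in range(rows) for c in range(cols)
            if read_fsr(r, c) > threshold]

print(touch_map(4, 4))  # e.g. [(0, 3), (2, 1)] -- cells currently "touched"
```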

Purpose

The goal in creating Leonardo was to make a social robot. Its motors, sensors, and cameras allow it to mimic human expression, interact with a limited range of objects, and track objects. This helps humans react to the robot in a more familiar way, and through this reaction, humans can engage the robot in more naturally social ways. Leonardo's programming is blended with psychological theory so that it learns, interacts, and collaborates with humans more naturally.

Learning

Leonardo learns through spatial scaffolding. One way a teacher teaches is by positioning the objects they expect the student to use near the student. This technique, spatial scaffolding, can be used with Leonardo: the robot is taught to build a sailboat from virtual blocks using only the red and blue blocks. Whenever it tries to use a green block, the teacher pulls the "forbidden" color away and moves the red and blue blocks into the robot's space. In this way, Leonardo learns to build the boat using only red and blue blocks. [7]
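
A minimal sketch of the spatial-scaffolding rule described above: the robot infers which block colors are permitted from which blocks the teacher moves into, or pulls out of, its workspace. The event representation is an assumption for illustration.

```python
def update_allowed(allowed: set, event: tuple) -> set:
    """Adjust the permitted colors based on one teacher action."""
    action, color = event                 # e.g. ("moved_in", "red")
    if action == "moved_in":
        allowed.add(color)                # teacher scaffolded this color in
    elif action == "pulled_away":
        allowed.discard(color)            # teacher marked it "forbidden"
    return allowed

allowed_colors: set = set()
teacher_events = [("moved_in", "red"), ("moved_in", "blue"),
                  ("pulled_away", "green")]
for event in teacher_events:
    update_allowed(allowed_colors, event)

def pick_blocks(candidates):
    """Choose only blocks whose color the teacher has scaffolded in."""
    return [b for b in candidates if b in allowed_colors]

print(pick_blocks(["red", "green", "blue"]))  # ['red', 'blue']
```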

Leonardo can also track what a human is looking at, which allows the robot to interact with a human and with objects in the environment. Humans naturally follow a pointing gesture or gaze and understand that what is being pointed at or looked at is the object the other person is concerned with and about to discuss or act on. The Personal Robots Group has used Leonardo's tracking ability to program the robot to act in a human-like way, bringing its gaze to an object the human is paying attention to. Matching the human's gaze is one way Leonardo exhibits more natural behavior. [8] Sharing attention like this is one of the mechanisms that allows the robot to learn from a human; the robot's ability to give feedback on its "understanding" through expression is also vital.
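
The gaze-following behavior can be illustrated with a small geometric sketch: pick the object lying closest to the human's gaze direction, then direct the robot's own attention to it. The 2D geometry and all names here are simplifying assumptions.

```python
import math

def gaze_target(head_pos, gaze_dir, objects):
    """Return the object name with the smallest angle off the gaze ray."""
    def angle_to(obj_pos):
        vx, vy = obj_pos[0] - head_pos[0], obj_pos[1] - head_pos[1]
        dot = vx * gaze_dir[0] + vy * gaze_dir[1]
        norm = math.hypot(vx, vy) * math.hypot(*gaze_dir)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return min(objects, key=lambda name: angle_to(objects[name]))

# The human looks roughly along +x; the block lies nearly on that ray.
objects = {"block": (2.0, 0.1), "ball": (1.0, 2.0)}
target = gaze_target(head_pos=(0.0, 0.0), gaze_dir=(1.0, 0.0), objects=objects)
print(target)  # "block" -- the robot would now shift its own gaze there
```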

Another way Leonardo learns is by mimicry, the same way infants learn to understand and manipulate their world. By mimicking human facial expressions and body movement, Leonardo can distinguish between self and other. This ability is as important for a social robot as it is for humans in taking each other's perspectives. Understanding that "others" do not have the same knowledge it has lets the robot view its environment more accurately and make better decisions, based on its programming, about what to do in a given situation. It also allows the robot to distinguish between a human's intentions and their actual actions, since humans are not exact. This would allow a human without special training to teach the robot. [7]
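
A toy sketch of the mimicry idea: detected human expressions are mapped onto the robot's own corresponding motor poses, a first step toward distinguishing self from other. The expression labels and motor names are illustrative assumptions.

```python
# How the robot would mirror each detected expression with its own motors.
EXPRESSION_TO_MOTORS = {
    "smile":      {"mouth_corner_left": 0.8, "mouth_corner_right": 0.8},
    "brow_raise": {"brow_left": 0.9, "brow_right": 0.9},
    "frown":      {"mouth_corner_left": -0.6, "mouth_corner_right": -0.6},
}

def mimic(detected_expression: str) -> dict:
    """Return the motor pose that mirrors the human's expression."""
    return EXPRESSION_TO_MOTORS.get(detected_expression, {})

print(mimic("smile"))  # {'mouth_corner_left': 0.8, 'mouth_corner_right': 0.8}
```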

In addition to being trained by a human, Leonardo can explore on its own, which saves time and is a key factor in the success of a personal robot. Such a robot must be able to learn quickly, using the mechanisms humans already use (such as spatial scaffolding, shared attention, mimicry, and perspective taking), without requiring an extensive amount of training time. Finally, it should be a pleasure to interact with, which is why aesthetics and expression are so important. These are all important steps toward bringing the robot into a home.

Interacting

Shared attention and perspective taking are two mechanisms Leonardo can use to interact naturally with humans. Leonardo can also achieve something like empathy by examining the data it gets from mimicking human facial expressions, body language, and speech. Just as humans can infer what other humans might be feeling from the same data, Leonardo has been programmed according to the rules of simulation theory, allowing it to render something like empathy. [9] In these ways, social interaction with Leonardo seems more human-like, making it more likely that humans will be able to work with the robot in a team.
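
A hedged sketch of the simulation-theory approach: the robot infers a human's likely affective state by running observed expression data through the same model it uses to generate its own expressions, and picking the best match. The states, features, and scoring rule are assumptions for illustration.

```python
# How the robot would express each internal state itself.
SELF_MODEL = {
    "happy":     {"smile": 1.0, "brow_raise": 0.3},
    "sad":       {"frown": 0.8},
    "surprised": {"brow_raise": 1.0},
}

def simulate_other(observed: dict) -> str:
    """Pick the state whose self-generated expression best matches."""
    def overlap(state: str) -> float:
        expected = SELF_MODEL[state]
        return sum(min(observed.get(f, 0.0), v) for f, v in expected.items())
    return max(SELF_MODEL, key=overlap)

# Observed features of the human's face, as produced by the mimicry system.
print(simulate_other({"smile": 0.9, "brow_raise": 0.2}))  # "happy"
```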

Collaborating

Leonardo can work together with a human to solve a common problem, as far as its body allows. It is more effective at working shoulder-to-shoulder with a human because of the theory-of-mind work blended into its programming. In one task, one human wants cookies and another wants crackers, each stored in a locked location, and the locations are switched while one of the humans is away. Leonardo can watch the first human trying to get to where he believes the cookies are and open the box that actually contains them, helping him achieve his goal. All of Leonardo's social skills work together so it can work alongside humans: when a human asks it to do a task, it can indicate what it knows or doesn't know and what it can and cannot do. Communicating through expression and gesture, and perceiving expression, gesture, and speech, the robot is able to work as part of a team. [10]
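
The cookies-and-crackers scenario is essentially a false-belief task, which can be sketched as two separate belief stores: the robot's own, and its model of the human's, updated only by what the human could actually observe. The data structures below are illustrative assumptions, not the actual system.

```python
# The robot's own (true) beliefs and its model of the human's beliefs.
robot_beliefs = {"box_A": "cookies", "box_B": "crackers"}
human_beliefs = dict(robot_beliefs)   # the human saw the original setup

# Contents are swapped while the human is away; only the robot sees it.
robot_beliefs["box_A"], robot_beliefs["box_B"] = (
    robot_beliefs["box_B"], robot_beliefs["box_A"])

def infer_goal(reached_box: str) -> str:
    """Interpret the human's action through *their* beliefs, not the robot's."""
    return human_beliefs[reached_box]

# The human reaches for box_A, still expecting cookies there.
goal = infer_goal("box_A")
actual = next(box for box, item in robot_beliefs.items() if item == goal)
print(f"Human wants {goal}; robot opens {actual}")  # robot opens box_B
```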

Related Research Articles

Cog (project)

Cog was a project at the Humanoid Robotics Group of the Massachusetts Institute of Technology. It was based on the hypothesis that human-level intelligence requires gaining experience from interacting with humans, like human infants do. This in turn required many interactions with humans over a long period. Because Cog's behavior responded to what humans would consider appropriate and socially salient environmental stimuli, the robot was expected to act more human. This behavior also provided the robot with a better context for deciphering and imitating human behavior. This was intended to allow the robot to learn socially, as humans do.

MIT Media Lab

The MIT Media Lab is a research laboratory at the Massachusetts Institute of Technology that grew out of MIT's Architecture Machine Group in the School of Architecture. Its research is not restricted to fixed academic disciplines, but draws from technology, media, science, art, and design. As of 2014, the Media Lab's research groups covered topics including neurobiology, biologically inspired fabrication, socially engaging robots, emotive computing, bionics, and hyperinstruments.

Kismet (robot)

Kismet is a robot head made in the 1990s at the Massachusetts Institute of Technology (MIT) by Dr. Cynthia Breazeal as an experiment in affective computing: a machine that can recognize and simulate emotions. The name Kismet comes from a Turkish word meaning "fate" or sometimes "luck".

Cynthia Breazeal

Cynthia Breazeal is an American robotics scientist and entrepreneur. She is a former chief scientist and chief experience officer of Jibo, a company she co-founded in 2012 that developed personal assistant robots. Currently, she is a professor of media arts and sciences at MIT and the director of the Personal Robots group at the Media Lab. Her most recent work has focused on the theme of living everyday life in the presence of AI, and gradually gaining insight into the long-term impacts of social robots.

Social robot

A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role. Like other robots, a social robot is physically embodied. Some synthetic social agents are designed with a screen to represent the head or 'face' to dynamically communicate with users. In these cases, the status as a social robot depends on the form of the 'body' of the social agent: if the body has and uses physical motors and sensing abilities, then the system can be considered a robot.

Domo is an experimental robot made by the Massachusetts Institute of Technology designed to interact with humans. The brainchild of Jeff Weber and Aaron Edsinger, cofounders of Meka Robotics, its name comes from the Japanese phrase for "thank you very much", domo arigato, as well as the Styx song "Mr. Roboto". The Domo project was originally funded by NASA and has since been joined by Toyota in funding the robot's development.

Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims to study the developmental mechanisms, architectures, and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach is to start from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then to formalize and implement them in robots, sometimes exploring extensions or variants of them. Experimenting with these models in robots allows researchers to confront them with reality, and as a consequence developmental robotics also provides feedback and novel hypotheses on theories of human and animal development.

Tangible user interface

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Constructionism (learning theory)

Constructionist learning is the creation by learners of mental models to understand the world around them. Constructionism advocates student-centered, discovery learning in which students use what they already know to acquire more knowledge. Students learn through participation in project-based learning, where they make connections between different ideas and areas of knowledge, facilitated by the teacher through coaching rather than lectures or step-by-step guidance. Further, constructionism holds that learning can happen most effectively when people are active in making tangible objects in the real world. In this sense, constructionism is connected with experiential learning and builds on Jean Piaget's epistemological theory of constructivism.

The Cyberflora project was developed by the Media Lab at the Massachusetts Institute of Technology. It is part of the Anima Machina program at MIT, a program developed by Cynthia Breazeal, Assistant Professor of Media Arts and Sciences and Director of the Robotic Life Group. The Cyberflora project allowed Breazeal and students in the Media Lab to investigate emotional intelligence in a breed of robots that combines both plant and animal characteristics.

Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties and opportunities for guiding the learning process.

Rosalind Picard

Rosalind Wright Picard is an American scholar and inventor who is Professor of Media Arts and Sciences at MIT, founder and director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of the startups Affectiva and Empatica.

In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by demonstrating the task to transfer directly instead of programming it through machine commands.

Robotics

Robotics is an interdisciplinary field spanning electronics and communication, computer science, and engineering. Robotics involves the design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrates mechanical engineering, electrical engineering, information engineering, mechatronics, electronics, biomedical engineering, computer engineering, control systems engineering, software engineering, mathematics, and other fields.

Sociorobotics is a field of research studying the implications, complexities, and consequent design of artificial social, spatial, cultural, and haptic behaviours, protocols, and interactions of robots with each other and with humans in equal measure. It intrinsically takes into account the structured and unstructured spaces of habitation, including industrial, commercial, healthcare, eldercare, and domestic environments. This emergent perspective on robotic research encompasses and surpasses the conventions of social robotics and artificial society/social systems research, which do not appear to acknowledge that robots and humans increasingly inhabit the same spaces, requiring similar performances and agency of social behaviour, particularly given the commercial emergence of workplace, personal, and companion robotics.

Embodied cognition

Embodied cognition is the theory that many features of cognition, whether human or otherwise, are shaped by aspects of an organism's entire body. The cognitive features include high-level mental constructs and performance on various cognitive tasks. The bodily aspects involve the motor system, the perceptual system, the body's interactions with the environment (situatedness), and the assumptions about the world built into the functional structure of the organism's brain and body.

Katherine 'Kate' Irene Maynard Darling is an American-Swiss academic. She works on the legal and ethical implications of technology. As of 2019, she is a Research Specialist at the MIT Media Lab.

Vivian Chu is an American roboticist and entrepreneur, specializing in the field of human-robot interaction. She is Chief Technology Officer at Diligent Robotics, a company she co-founded in 2017 for creating autonomous, mobile, socially intelligent robots.

Andrea L. Thomaz is a senior research scientist in the Department of Electrical and Computer Engineering at The University of Texas at Austin and Director of Socially Intelligent Machines Lab. She specializes in Human-Robot Interaction, Artificial Intelligence and Interactive Machine Learning.

References

  1. "Furry Robots, Foldable Cars and More Innovations from MIT's Media Lab". PBS. 2011-05-20. Archived from the original on 2016-03-04. Retrieved 2017-09-02.
  2. "Leonardo Project Home Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-02-14. Retrieved 2012-02-27.
  3. 1 2 Robert Capps (January 2006). "The 50 Best Robots Ever". Wired Magazine.
  4. 1 2 "Leonardo Project Body Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-03-24. Retrieved 2012-02-27.
  5. "Leonardo Project Vision Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-03-24. Retrieved 2012-02-27.
  6. "Leonardo Project Skin Page". MIT Media Lab Personal Robots Group. Archived from the original on 2008-03-02. Retrieved 2012-02-27.
  7. 1 2 "Leonardo Project Social Learning Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-03-24. Retrieved 2012-02-27.
  8. Andrew Brooks and Cynthia Breazeal (2006). "Working with Robots and Objects: Revisiting Deictic Reference for Achieving Spatial Common Ground". Salt Lake City: Human Robot Interaction.{{cite journal}}: Cite journal requires |journal= (help)
  9. "Leonardo Project Social Cognition Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-03-24. Retrieved 2012-02-27.
  10. "Project Leonardo Teamwork Page". MIT Media Lab Personal Robots Group. Archived from the original on 2012-03-24. Retrieved 2012-02-27.
