Vivian Chu | |
---|---|
Born | c. 1987 (age 35–36), California |
Nationality | American |
Alma mater | University of Pennsylvania; University of California, Berkeley; Georgia Tech |
Known for | Co-founding Diligent Robotics; designing AI software for service robots |
Awards | 2022 Fortune 40 Under 40; 2021 and 2022 Fast Company Queer 50; 2019 MIT Technology Review 35 Innovators Under 35 |
Scientific career | |
Fields | Robotics |
Institutions | Diligent Robotics |
Vivian Chu (born c. 1987) is an American roboticist and entrepreneur specializing in the field of human-robot interaction. She is Chief Technology Officer at Diligent Robotics, a company she co-founded in 2017 to create autonomous, mobile, socially intelligent robots.
Chu was born in San Jose, California. [1] Growing up, she lived with her parents, who were both software engineers, and her grandparents. [2]
She received her bachelor's degree in electrical engineering and computer science from the University of California, Berkeley in 2009. [3] During her time at Berkeley, she was a research assistant in the lab of Dennis K. Lieu, where she worked on integrated flywheels in triple hybrid drive trains. Upon graduation, she joined IBM Almaden Research, an innovation lab for disruptive technology, [4] where her research centered on natural language processing and intelligent information integration. [3]
In 2011, Chu left IBM Almaden to pursue a master's degree at the University of Pennsylvania. [1] At Penn, she worked under the mentorship of Katherine Kuchenbecker in the Haptics Research Group, part of the GRASP Lab. [5] She focused on haptic technology that would let robots both interact with their environment and understand the abstract terms humans use to describe the feeling of that interaction. [6] For example, a human may say a carpet is fuzzy; Chu's algorithms would enable a robot to sense the rug, perform a computation, and associate that “feeling” with the adjective “fuzzy”. [6] Chu and her colleagues trained PR2 robots equipped with haptic sensors to touch objects and relate the sensor data to the human-provided adjective for the haptic quality of each object. [6] The robots learned these associations and later generalized them to objects they had not yet touched, providing adjective descriptors similar to those a human might use. [6] This work was reported in Chu's first-author paper in 2013, which was awarded Best Paper in Cognitive Robotics at the IEEE International Conference on Robotics and Automation. [7]
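At its core, this kind of haptic-adjective learning is a supervised multi-label classification problem: features extracted from a robot's touch signals are mapped to the adjectives humans assign, and the trained model can then label objects it has never touched. The sketch below illustrates that general idea in Python with scikit-learn; the feature choices, data, label set, and model are illustrative assumptions, not Chu's actual pipeline.

```python
# Illustrative sketch (not Chu's actual system): learning to map haptic
# sensor features to human adjectives, then generalizing to unseen objects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

ADJECTIVES = ["fuzzy", "smooth", "squishy", "hard"]  # assumed label set

# Each row: summary features computed from haptic signals while touching
# one object (e.g., vibration energy, friction, compliance) -- hypothetical.
X_train = np.array([
    [0.9, 0.7, 0.6],   # carpet
    [0.1, 0.1, 0.2],   # glass
    [0.3, 0.4, 0.9],   # foam ball
    [0.2, 0.2, 0.1],   # wood block
])
# Binary label vectors: which adjectives humans used for each object.
y_train = np.array([
    [1, 0, 0, 0],      # carpet  -> fuzzy
    [0, 1, 0, 1],      # glass   -> smooth, hard
    [0, 0, 1, 0],      # foam    -> squishy
    [0, 1, 0, 1],      # wood    -> smooth, hard
])

# One binary classifier per adjective, trained on the touched objects.
model = MultiOutputClassifier(LogisticRegression()).fit(X_train, y_train)

# A previously untouched object: the model predicts adjectives for it.
new_object = np.array([[0.8, 0.6, 0.5]])   # e.g., a towel
predicted = model.predict(new_object)[0]
print([adj for adj, on in zip(ADJECTIVES, predicted) if on])
```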
After completing her master's in 2013, Chu spent a summer as an intern at Honda Research Institute and then continued her graduate training at Georgia Tech. [3] She worked towards a PhD in Robotics under the mentorship of Andrea L. Thomaz in the Socially Intelligent Machines Lab and of Sonia Chernova in the Robot Autonomy and Interactive Learning Lab. [8] Her work focused on building algorithms that enable robots to reason about action effects and interact with their environments in an adaptable way. [8] Chu was inspired by a talk on developmental psychology discussing how children learn to interact with their environments. [9] She reasoned that robot learning could be approached the same way: giving robots the basic building blocks of cognition so that they could play with objects in the environment and learn the appropriate ways to interact with them. [9] Chu based her design on applying human-guided robot self-exploration to learn affordances. She built algorithms that enabled robots to learn the affordances of objects in the environment through both self-guided and supervised learning, and showed that the combination of the two yields the best robot performance. [10] Chu and Thomaz filed a patent for this technology in 2017; Chu completed her PhD the following year. [3]
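At a high level, the combination Chu studied can be pictured as pooling two sources of experience: outcomes the robot observes during its own exploratory actions, and outcomes from interactions a human teacher steers it toward. A toy Python sketch of that idea follows, with entirely hypothetical objects, actions, and outcome probabilities; it is a minimal stand-in for the concept, not Chu's algorithm.

```python
# Toy sketch: combining self-exploration with human-guided experience to
# estimate object affordances (hypothetical setup, not Chu's method).
import random
from collections import defaultdict

random.seed(0)

# Ground truth the robot does not know: P(success) of an action on an object.
TRUE_AFFORDANCES = {
    ("ball", "roll"): 0.95, ("ball", "stack"): 0.10,
    ("block", "roll"): 0.15, ("block", "stack"): 0.90,
}

def try_action(obj, action):
    """Simulate one physical interaction; returns True on success."""
    return random.random() < TRUE_AFFORDANCES[(obj, action)]

counts = defaultdict(lambda: [0, 0])  # (obj, action) -> [successes, trials]

def record(obj, action, success):
    counts[(obj, action)][0] += int(success)
    counts[(obj, action)][1] += 1

# 1) Self-guided exploration: the robot tries random object/action pairs.
for _ in range(40):
    obj, action = random.choice(list(TRUE_AFFORDANCES))
    record(obj, action, try_action(obj, action))

# 2) Human-guided experience: a teacher directs trials toward the pairs
#    the robot has sampled least (a simple stand-in for supervision).
for _ in range(10):
    obj, action = min(TRUE_AFFORDANCES, key=lambda k: counts[k][1])
    record(obj, action, try_action(obj, action))

# The learned affordance model: empirical success rates per pair.
for (obj, action), (s, n) in sorted(counts.items()):
    print(f"{obj:5s} {action:5s}: {s}/{n} = {s/n:.2f}")
```

The design point the sketch mirrors is that the two data sources are complementary: random exploration provides broad coverage, while human guidance concentrates trials where the robot's model is weakest.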
In 2015, Chu spent a summer as an intern at Google[x] under the mentorship of Leila Takayama. [11] She then began working with Andrea Thomaz to create a company building socially intelligent robots that can assist people with chores at work and at home. [12] In 2017, they co-founded Diligent Robotics. [12] After completing her PhD in 2018, she became the company's full-time Chief Technology Officer. [13] She leads a diverse team of roboticists who build robots featuring autonomous mobile manipulation, social intelligence, and human-guided learning abilities, inspired by Chu's graduate research. [13]
Diligent Robotics' first clinical assistant was Poli, [14] a one-armed robot that pre-fetched supply kits so that nursing staff could spend more time with patients. [15] Poli was piloted at Seton Medical Center at the University of Texas in Austin. [14] The firm's second healthcare support robot, Moxi, [16] is a refurbished and updated version of Poli. [17] [18] It has more human-like features, including a head, a torso, and a face that can visually communicate social cues. [17]
In 2020, Diligent Robotics raised a $10 million Series A. In 2022, the company raised more than $30 million in a Series B led by Tiger Global, bringing its total funding to nearly $50 million since founding. [19] It has won accolades including being named to Time's 100 Best Inventions (2019), [20] World Economic Forum Technology Pioneer (2021), [21] and Newsweek's America's Greatest Disruptors (2021). [22]
Haptic technology is technology that can create an experience of touch by applying forces, vibrations, or motions to the user. These technologies can be used to create virtual objects in a computer simulation, to control virtual objects, and to enhance remote control of machines and devices (telerobotics). Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. The word haptic, from the Greek: ἁπτικός (haptikos), means "tactile, pertaining to the sense of touch". Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels.
Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.
Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims to study the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists of starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. Experimenting with these models in robots allows researchers to confront them with reality; as a consequence, developmental robotics also provides feedback and novel hypotheses for theories of human and animal development.
Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural-language understanding, design, and psychology. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.
Leonardo is a 2.5-foot social robot, the first created by the Personal Robots Group of the Massachusetts Institute of Technology. Its development is credited to Cynthia Breazeal. The body is by Stan Winston Studios, leaders in animatronics. Its body was completed in 2002. It was the most complex robot the studio had ever attempted as of 2001. Other contributors to the project include NevenVision, Inc., Toyota, NASA's Lyndon B. Johnson Space Center, and the Navy Research Lab. It was created to facilitate the study of human–robot interaction and collaboration. A DARPA Mobile Autonomous Robot Software (MARS) grant, Office of Naval Research Young Investigators Program grant, Digital Life, and Things that Think consortia have partially funded the project. The MIT Media Lab Robotic Life Group, who also studied Robonaut 1, set out to create a more sophisticated social robot in Leonardo. They gave Leonardo a different visual tracking system and programs based on infant psychology that they hope will make for better human–robot collaboration. One of the goals of the project was to make it possible for untrained humans to interact with and teach the robot much more quickly with fewer repetitions. Leonardo was awarded a spot in Wired Magazine's 50 Best Robots Ever list in 2006.
Ayanna MacCalla Howard is an American roboticist, entrepreneur and educator currently serving as the dean of the College of Engineering at Ohio State University. Assuming the post in March 2021, Howard became the first woman to lead the Ohio State College of Engineering.
Haptic perception means literally the ability "to grasp something". Perception in this case is achieved through the active exploration of surfaces and objects by a moving subject, as opposed to passive contact by a static subject during tactile perception.
Daniela L. Rus is a roboticist and computer scientist, Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.
A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain. Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing.
Maja Matarić is an American computer scientist, roboticist and AI researcher, and the Chan Soon-Shiong Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California. She is known for her work in human-robot interaction for socially assistive robotics, a new field she pioneered, which focuses on creating robots capable of providing personalized therapy and care that helps people help themselves, through social rather than physical interaction. Her work has focused on aiding special needs populations including the elderly, stroke patients, and children with autism, and has been deployed and evaluated in hospitals, therapy centers, schools, and homes. She is also known for her earlier work on robot learning from demonstration, swarm robotics, robot teams, and robot navigation.
Sociorobotics is a field of research studying the implications, complexities and subsequent design of artificial social, spatial, cultural and haptic behaviours, protocols and interactions of robots with each other and with humans in equal measure, intrinsically taking into account the structured and unstructured spaces of habitation, including industrial, commercial, healthcare, eldercare and domestic environments. This emergent perspective on robotic research encompasses and surpasses the conventions of social robotics and artificial society/social systems research, which do not appear to acknowledge that numerous robots and humans are increasingly inhabiting the same spaces, requiring similar performances and agency of social behaviour, particularly with the commercial emergence of workplace, personal and companion robotics.
Oussama Khatib is a roboticist and a professor of computer science at Stanford University, and a Fellow of the IEEE. He is credited with seminal work in areas ranging from robot motion planning and control, human-friendly robot design, to haptic interaction and human motion synthesis. His work's emphasis has been to develop theories, algorithms, and technologies that control robot systems by using models of their physical dynamics. These dynamic models are used to derive optimal controllers for complex robots that interact with the environment in real-time.
Domenico Prattichizzo is an Italian scientist with strong, internationally recognized expertise in the fields of haptics, robotics, and wearable technology. His research finds its main applications in virtual and augmented reality scenarios and in the rehabilitation of people with upper- and lower-limb, visual, and cognitive impairments.
Jodi L. Forlizzi is a professor and Geschke Director, as well as an interaction designer and researcher, at the Human-Computer Interaction Institute at Carnegie Mellon University. On August 29, 2022, Forlizzi was named a Herbert A. Simon Professor at Carnegie Mellon. Her research ranges from understanding the limits of human attention to understanding how products and services evoke social behavior. Current research interests include interaction design, assistive, social, and aesthetic technology projects and systems, and notification systems. In 2014, Forlizzi was inducted into the CHI Academy for her notable works and contributions to the field of human-computer interaction.
Carol Elizabeth Reiley is an American business executive, computer scientist, and model. She is a pioneer in teleoperated and autonomous robot systems in surgery, space exploration, disaster rescue, and self-driving cars. Reiley has worked at Intuitive Surgical, Lockheed Martin, and General Electric. She co-founded, invested in, and was president of Drive.ai, and is now CEO of a healthcare startup, a creative advisor for the San Francisco Symphony, and a brand ambassador for Guerlain Cosmetics. She is a published children's book author, the first female engineer on the cover of MAKE magazine, and is ranked by Forbes, Inc., and Quartz as a leading entrepreneur and influential scientist.
Susan J. Lederman is a Canadian experimental psychologist. She is a professor emerita in the Department of Psychology at Queen's University in Kingston, Ontario, Canada. She is recognized for her contributions to the field of haptics.
Aude G. Billard is a Swiss physicist in the fields of machine learning and human-robot interactions. As a full professor at the School of Engineering at Swiss Federal Institute of Technology in Lausanne (EPFL), Billard’s research focuses on applying machine learning to support robot learning through human guidance. Billard’s work on human-robot interactions has been recognized numerous times by the Institute of Electrical and Electronics Engineers (IEEE) and she currently holds a leadership position on the executive committee of the IEEE Robotics and Automation Society (RAS) as the vice president of publication activities.
John Kenneth Salisbury, Jr. is an American Roboticist and Research Professor Emeritus at Stanford University’s Computer Science Department and Stanford School of Medicine’s Department of Surgery. Salisbury is a researcher in the fields of robotics, haptics, and medical robotics. He is an inventor of over 50 patents and recipient of the 2011 IEEE Inaba Award for "Commercialization of Products in Medical Robotics, Robotics, and Haptics".
Katherine Julianne Kuchenbecker is an American researcher in haptic technology and robot-assisted surgery, and a former high school and college volleyball player. She is director of the Haptic Intelligence department at the Max Planck Institute for Intelligent Systems in Germany.
Andrea L. Thomaz is a senior research scientist in the Department of Electrical and Computer Engineering at The University of Texas at Austin and Director of the Socially Intelligent Machines Lab. She specializes in human-robot interaction, artificial intelligence, and interactive machine learning.