Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies give robot systems powerful capabilities while reducing costs. It is therefore possible to build lightweight, low-cost, smarter robots whose intelligent "brain" resides in the cloud. The "brain" consists of data centers, a knowledge base, task planners, deep learning, information processing, environment models, communication support, etc. [1] [2] [3] [4]
A cloud for robots potentially has at least six significant components: [5]
Approach: Lifelong Learning. [10] Leveraging lifelong learning to build a cloud brain for robots was proposed by CAS. The authors were motivated by the problem of how to make robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments. To address the problem, they present a learning architecture for navigation in cloud robotic systems: Lifelong Federated Reinforcement Learning (LFRL). In this work, they propose a knowledge fusion algorithm for upgrading a shared model deployed on the cloud, and they introduce effective transfer learning methods within LFRL. LFRL is consistent with human cognitive science and fits well in cloud robotic systems. Experiments show that LFRL greatly improves the efficiency of reinforcement learning for robot navigation, and the cloud robotic system deployment also shows that LFRL is capable of fusing prior knowledge.
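The fusion algorithm from the LFRL paper is not reproduced here; the following is only a minimal sketch of one simple step a cloud could perform in this setting: averaging the policy parameters uploaded by local robots, weighted by how much experience each robot contributed. The function and variable names are illustrative assumptions, not part of LFRL.

```python
import numpy as np

def fuse_policies(local_params, episode_counts):
    """Fuse locally trained policy parameters into one shared cloud model.

    local_params: list of dicts mapping layer name -> np.ndarray
    episode_counts: number of training episodes behind each upload,
                    used here as fusion weights.
    """
    weights = np.array(episode_counts, dtype=float)
    weights /= weights.sum()
    fused = {}
    for name in local_params[0]:
        fused[name] = sum(w * p[name] for w, p in zip(weights, local_params))
    return fused

# Example: two robots upload tiny two-layer policies learned in different
# environments; the cloud publishes the fused model for future robots.
robot_a = {"w1": np.ones((4, 2)), "w2": np.zeros(2)}
robot_b = {"w1": np.zeros((4, 2)), "w2": np.ones(2)}
shared = fuse_policies([robot_a, robot_b], episode_counts=[300, 100])
print(shared["w1"][0])   # weighted toward robot_a's parameters
```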
Approach: Federated Learning. [11] Leveraging federated learning to build a cloud brain for robots was proposed in 2020. Humans are capable of learning a new behavior by observing others perform the skill, and robots can likewise do so through imitation learning. Furthermore, with external guidance, humans can master the new behavior more efficiently; how can robots achieve the same? To address this issue, the authors present a framework named FIL. It provides a heterogeneous knowledge fusion mechanism for cloud robotic systems. A knowledge fusion algorithm in FIL enables the cloud to fuse heterogeneous knowledge from local robots and generate guide models for robots issuing service requests. The authors then introduce a knowledge transfer scheme that helps local robots acquire knowledge from the cloud. With FIL, a robot can use knowledge from other robots to improve the accuracy and efficiency of its imitation learning. Compared with transfer learning and meta-learning, FIL is better suited to deployment in cloud robotic systems. The authors conduct experiments on a self-driving task for robots (cars); the results demonstrate that the shared model generated by FIL increases the imitation learning efficiency of local robots in cloud robotic systems.
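FIL's actual fusion mechanism is not reproduced here; the sketch below shows one generic way a cloud might fuse knowledge from heterogeneous local models: query each local model on a shared probe set, average their predicted action distributions, and fit a small guide model to the resulting soft labels (a simple form of knowledge distillation). All names and the choice of a linear guide model are illustrative assumptions.

```python
import numpy as np

def fuse_into_guide_model(local_models, probe_inputs, lr=0.5, epochs=200):
    """Distil heterogeneous local models into a single linear guide model.

    local_models: callables mapping an input batch to action probabilities;
                  they may have completely different internal architectures.
    probe_inputs: shared unlabeled inputs held by the cloud.
    """
    # Soft labels: average the action distributions of all local models.
    soft = np.mean([m(probe_inputs) for m in local_models], axis=0)

    # Fit a small softmax-regression guide model to the soft labels.
    n, d = probe_inputs.shape
    k = soft.shape[1]
    W = np.zeros((d, k))
    for _ in range(epochs):
        logits = probe_inputs @ W
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * probe_inputs.T @ (p - soft) / n
    return lambda x: np.argmax(x @ W, axis=1)

# Two heterogeneous "local models" (here just hand-written stand-ins).
steer_left  = lambda x: np.tile([0.8, 0.2], (len(x), 1))
steer_right = lambda x: np.tile([0.3, 0.7], (len(x), 1))
guide = fuse_into_guide_model([steer_left, steer_right],
                              probe_inputs=np.random.rand(64, 5))
print(guide(np.random.rand(3, 5)))
```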
Approach: Peer-assisted Learning. [12] Leveraging peer-assisted learning to build a cloud brain for robots was proposed by UM. Data-driven deep learning is bringing about a technological revolution in robotics, but building datasets for each local robot is laborious, and data islands between local robots prevent data from being used collaboratively. To address this issue, the work presents Peer-Assisted Robotic Learning (PARL), inspired by peer-assisted learning in cognitive psychology and pedagogy. PARL implements data collaboration within the framework of cloud robotic systems: robots share both data and models with the cloud after local semantic computing and training, and the cloud converges the data and performs augmentation, integration, and transfer. Finally, the larger shared dataset in the cloud is used to fine-tune models for local robots. Furthermore, the authors propose the DAT Network (Data Augmentation and Transferring Network) to implement the data processing in PARL; it can augment data from multiple local robots. The authors conduct experiments on a simplified self-driving task for robots (cars). The DAT Network yields a significant improvement in augmentation for self-driving scenarios, and the self-driving results also demonstrate that PARL improves learning through data collaboration among local robots.
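The DAT Network itself is a learned model and is not reproduced here; the sketch below only illustrates the surrounding data-collaboration idea: local robots upload small (image, steering angle) datasets, and the cloud pools them and applies a simple hand-coded augmentation (horizontal flip with negated steering angle) before making the enlarged dataset available for training. The function and data layout are illustrative assumptions.

```python
import numpy as np

def pool_and_augment(robot_datasets):
    """Pool (image, steering_angle) samples from several robots and augment.

    robot_datasets: iterable of (images, angles) tuples, one per robot,
                    with images shaped (n, height, width).
    """
    images = np.concatenate([imgs for imgs, _ in robot_datasets])
    angles = np.concatenate([angs for _, angs in robot_datasets])

    # Simple augmentation: mirror each frame and negate its steering angle.
    flipped = images[:, :, ::-1]
    shared_images = np.concatenate([images, flipped])
    shared_angles = np.concatenate([angles, -angles])
    return shared_images, shared_angles

# Two robots contribute small local datasets; the cloud returns a shared
# dataset twice the size of the pooled data.
robot_a = (np.random.rand(10, 32, 32), np.random.uniform(-1, 1, 10))
robot_b = (np.random.rand(15, 32, 32), np.random.uniform(-1, 1, 15))
X, y = pool_and_augment([robot_a, robot_b])
print(X.shape, y.shape)   # (50, 32, 32) (50,)
```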
RoboEarth [13] was funded by the European Union's Seventh Framework Programme for research and technological development, specifically to explore the field of cloud robotics. The goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour and, ultimately, for more subtle and sophisticated human-machine interaction. RoboEarth offers a cloud robotics infrastructure. Its World-Wide-Web-style database stores knowledge generated by humans and robots in a machine-readable format. Data stored in the RoboEarth knowledge base include software components, maps for navigation (e.g., object locations, world models), task knowledge (e.g., action recipes, manipulation strategies), and object recognition models (e.g., images, object models). The RoboEarth Cloud Engine includes support for mobile robots, autonomous vehicles, and drones, which require substantial computation for navigation. [14]
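RoboEarth's actual interfaces and data formats are not shown here; the snippet below is only a toy illustration of the idea of a shared, machine-readable knowledge base: one robot uploads an "action recipe" together with object and map hints under a task key, and another robot later retrieves them. The repository class and record fields are hypothetical.

```python
class KnowledgeRepository:
    """Toy in-memory stand-in for a shared cloud knowledge base."""

    def __init__(self):
        self._records = {}

    def upload(self, task, record):
        self._records.setdefault(task, []).append(record)

    def query(self, task):
        return self._records.get(task, [])

repo = KnowledgeRepository()

# Robot A publishes what it learned about serving a drink.
repo.upload("serve_drink", {
    "action_recipe": ["locate(bottle)", "grasp(bottle)", "pour(cup)"],
    "object_model": {"name": "bottle", "mesh": "bottle.ply"},
    "map_hint": {"frame": "kitchen", "location": [1.2, 0.4]},
})

# Robot B, possibly a different platform, reuses that knowledge later.
for record in repo.query("serve_drink"):
    print(record["action_recipe"])
```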
Rapyuta [15] is an open source cloud robotics framework based on the RoboEarth Cloud Engine, developed by robotics researchers at ETHZ. Within the framework, each robot connected to Rapyuta has a secured computing environment, giving it the ability to move heavy computation into the cloud. In addition, the computing environments are tightly interconnected with each other and have a high-bandwidth connection to the RoboEarth knowledge repository. [16]
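Rapyuta's own interfaces are not shown here; the sketch below only illustrates the general pattern of moving heavy computation off the robot: a thin onboard client sends data to a remote compute process and receives the result. It uses Python's standard XML-RPC modules, and the planning function and endpoint are hypothetical.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def plan_path(grid, start, goal):
    """Stand-in for a computation too heavy for the robot's onboard CPU."""
    # A real service might run grid-based or sampling-based planning here.
    return [start, goal]

# Cloud side: expose the heavy computation as a remote procedure.
server = SimpleXMLRPCServer(("127.0.0.1", 8001), logRequests=False)
server.register_function(plan_path, "plan_path")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Robot side: keep only a thin client and offload the planning call.
cloud = ServerProxy("http://127.0.0.1:8001")
path = cloud.plan_path([[0, 0], [0, 0]], [0, 0], [1, 1])
print(path)
```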
KnowRob [17] is an extension of the RoboEarth project. It is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge and grounding it in a physical system, and it can serve as a common semantic framework for integrating information from different sources.
RoboBrain [18] is a large-scale computational system that learns from publicly available Internet resources, computer simulations, and real-life robot trials. It accumulates robotics knowledge into a comprehensive and interconnected knowledge base. Applications include prototyping for robotics research, household robots, and self-driving cars. The goal is as direct as the project's name: to create a centralised, always-online brain for robots to tap into. The project is led by Stanford University and Cornell University and is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy. [19]
MyRobots is a service for connecting robots and intelligent devices to the Internet. [20] It can be regarded as a social network for robots and smart objects (i.e. a Facebook for robots). Through socialising, collaborating, and sharing, robots can benefit from these interactions by exchanging sensor information that gives insight into their current state.
COALAS [21] is funded by the INTERREG IVA France (Channel) – England European cross-border co-operation programme. The project aims to develop new technologies for disabled people through social and technological innovation while respecting users' social and psychological integrity. The objective is to produce a cognitive ambient assistive-living system with a healthcare cluster in the cloud, connected to domestic service robots such as a humanoid and an intelligent wheelchair. [7]
ROS (Robot Operating System) provides an ecosystem to support cloud robotics. ROS is a flexible, distributed framework for robot software development: a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behaviour across a wide variety of robotic platforms. A pure-Java ROS library, rosjava, allows Android applications to be developed for robots; since Android has a booming market and billions of users, this is significant for cloud robotics. [22]
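For readers unfamiliar with ROS, the canonical "talker" node below (using the Python client library rospy; rosjava exposes an equivalent API in Java) shows the basic publish/subscribe pattern that cloud robotics frameworks built on ROS extend across the network. It requires a ROS installation and a running roscore.

```python
#!/usr/bin/env python
# Minimal ROS publisher node (the standard "talker" example).
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rospy.init_node("talker", anonymous=True)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        msg = "hello from the robot %s" % rospy.get_time()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```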
The DAVinci Project is a proposed software framework that explores the possibilities of parallelizing some robotics algorithms as Map/Reduce tasks in Hadoop. [23] The project aims to build a cloud computing environment capable of providing a compute cluster built from commodity hardware, exposing a suite of robotic algorithms as a Software-as-a-Service (SaaS), and sharing data co-operatively across the robotic ecosystem. [23] This initiative is not available publicly. [24]
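Since DAVinci itself is not publicly available, the sketch below only illustrates the kind of map/reduce decomposition the project description refers to, using a particle-filter weight update as the example workload: the map step scores particles independently (and so can be distributed across a cluster), and the reduce step normalizes the weights. It uses Python's multiprocessing pool rather than Hadoop, and the scoring function is a placeholder.

```python
from multiprocessing import Pool

# Map step: score one particle against a sensor reading; each particle is
# independent, so this work can be spread over many machines.
def score(args):
    particle, measurement = args
    return 1.0 / (1.0 + abs(particle - measurement))

# Reduce step: combine the per-particle results into normalized weights.
def reduce_normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    particles = [0.0, 1.0, 2.0, 3.0, 4.0]
    measurement = 2.2
    with Pool(processes=2) as pool:
        raw = pool.map(score, [(p, measurement) for p in particles])
    print(reduce_normalize(raw))
```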
C2RO (C2RO Cloud Robotics) is a platform that processes real-time applications such as collision avoidance and object recognition in the cloud. Previously, high latency prevented these applications from being processed in the cloud, requiring on-system computational hardware (e.g. a graphics processing unit, or GPU). C2RO published a peer-reviewed paper at IEEE PIMRC 2017 showing that its platform could deliver autonomous navigation and other AI services from the cloud to robots, even those with limited computational hardware such as a Raspberry Pi. [25] C2RO later claimed to be the first platform to demonstrate cloud-based SLAM (simultaneous localization and mapping), at RoboBusiness in September 2017.
Noos is a cloud robotics service providing centralised intelligence to robots connected to it. The service went live in December 2017. Using the Noos-API, developers could access services for computer vision, deep learning, and SLAM. Noos was developed and maintained by Ortelio Ltd.
Rocos is a centralized cloud robotics platform that provides the developer tooling and infrastructure to build, test, deploy, operate and automate robot fleets at scale. The company was founded in October 2017, and the platform went live in January 2019.
Though robots can benefit from the various advantages of cloud computing, the cloud is not a solution for all of robotics. [26]
Research and development in cloud robotics faces the following potential issues and challenges: [26]
The term "Cloud Robotics" first appeared in the public lexicon as part of a talk given by James Kuffner in 2010 at the IEEE/RAS International Conference on Humanoid Robotics entitled "Cloud-enabled Robots". [28] Since then, "Cloud Robotics" has become a general term encompassing the concepts of information sharing, distributed intelligence, and fleet learning that is possible via networked robots and modern cloud computing. Kuffner was part of Google when he delivered his presentation and the technology company has teased its various cloud robotics initiatives until 2019 when it launched the Google Cloud Robotics Platform for developers. [29]
From the early days of robot development, it was common to have computation done on a computer separated from the actual robot mechanism but connected to it by wires for power and control. As wireless communication technology developed, new forms of experimental "remote brain" robots appeared: small onboard compute resources handled robot control and safety, while a wireless link connected the robot to a more powerful remote computer for heavy processing. [30]
The term "cloud computing" was popularized with the launch of Amazon EC2 in 2006. It marked the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization and service-oriented architecture. [31] In a correspondence with Popular Science in July 2006, Kuffner wrote that after a robot was programmed or successfully learned to perform a task it could share its model and relevant data with all other cloud-connected robots: [32]
"...the robot could then 'publish' its refined model to some website or universal repository of knowledge that all future robots could download and utilize. My vision is to have a 'robot knowledge database' that will over time improve the capabilities of all future robotic systems. It would serve as a warehouse of information and statistics about the physical world that robots can access and use to improve their reasoning about the consequences of possible actions and make better action plans in terms of accuracy, safety, and robustness. It could also serve as a kind of 'skill library'. For example, if I successfully programmed my butler robot how to cook a perfect omelette, I could 'upload' the software for omelette cooking to a server that all robots could then download whenever they were asked to cook an omelette. There could be a whole community of robot users uploading skill programs, much like the current 'shareware' and 'freeware' software models that are popular for PC users."
— James Kuffner (July 2006)
Some publications and events related to Cloud Robotics (in chronological order):
Home automation or domotics is building automation for a home. A home automation system will monitor and/or control home attributes such as lighting, climate, entertainment systems, and appliances. It may also include home security such as access control and alarm systems.
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Recently, generative artificial neural networks have been able to surpass many previous approaches in performance.
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
Neuroinformatics is the field that combines informatics and neuroscience. It is concerned with neuroscience data and with information processing by artificial neural networks. There are three main directions in which neuroinformatics is applied:
Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties and opportunities for guiding the learning process.
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data, so that a user of a cloud application is likely to be physically closer to a server than if all servers were in one place. This is meant to make applications faster. More broadly, it refers to any design that pushes computation physically closer to a user, so as to reduce the latency compared to when an application runs on a single data centre. In the extreme case, this may simply refer to client-side computing.
Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software. The word "robot" was introduced to the public by Czech writer Karel Čapek in his play R.U.R., published in 1920. The term "robotics" was coined by Isaac Asimov in his 1941 science fiction short-story "Liar!"
Cyber-physical systems (CPS) are integrations of computation with physical processes. In cyber-physical systems, physical and software components are deeply intertwined, able to operate on different spatial and temporal scales, exhibit multiple and distinct behavioral modalities, and interact with each other in ways that change with context. CPS involves transdisciplinary approaches, merging theory of cybernetics, mechatronics, design and process science. CPS is related to embedded systems, but in embedded systems the emphasis tends to be more on the computational elements and less on an intense link between the computational and physical elements. CPS is also similar to the Internet of Things (IoT), sharing the same basic architecture; nevertheless, CPS presents a higher combination and coordination between physical and computational elements.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.
The following outline is provided as an overview of and topical guide to robotics:
Google Brain was a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, Google Brain combined open-ended machine learning research with information systems and large-scale computing resources. The team has created tools such as TensorFlow, which allow for neural networks to be used by the public, with multiple internal AI research projects. The team aims to create research opportunities in machine learning and natural language processing. The team was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.
Mohanarajah Gajamohan is a Swiss-based Sri Lankan robotics scientist. Gajamohan has made significant contributions to cloud robotics as the chief developer of the Rapyuta cloud robotics framework. His other notable project is the Cubli, a self-balancing cube.
Smart manufacturing is a broad category of manufacturing that employs computer-integrated manufacturing, high levels of adaptability and rapid design changes, digital information technology, and more flexible technical workforce training. Other goals sometimes include fast changes in production levels based on demand, optimization of the supply chain, efficient production and recyclability. In this concept, a smart factory has interoperable systems, multi-scale dynamic modelling and simulation, intelligent automation, strong cyber security, and networked sensors.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
An AI accelerator, deep learning processor, or neural processing unit is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFET transistors.
Amir Hussain is a cognitive scientist and the director of the Cognitive Big Data and Cybersecurity (CogBID) Research Lab at Edinburgh Napier University, where he is a professor of computing science. He is the founding Editor-in-Chief of Springer Nature's Cognitive Computation journal and the new Big Data Analytics journal, and the founding Editor-in-Chief of two Springer book series, Socio-Affective Computing and Cognitive Computation Trends. He also serves on the editorial boards of a number of other leading journals, including as an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems, the IEEE Transactions on Systems, Man, and Cybernetics: Systems, and the IEEE Computational Intelligence Magazine.
Fog robotics can be defined as an architecture that places storage, networking functions, and control, together with fog computing, closer to robots.
Aude G. Billard is a Swiss physicist in the fields of machine learning and human-robot interactions. As a full professor at the School of Engineering at Swiss Federal Institute of Technology in Lausanne (EPFL), Billard’s research focuses on applying machine learning to support robot learning through human guidance. Billard’s work on human-robot interactions has been recognized numerous times by the Institute of Electrical and Electronics Engineers (IEEE) and she currently holds a leadership position on the executive committee of the IEEE Robotics and Automation Society (RAS) as the vice president of publication activities.