Real-time Control System (RCS) is a reference model architecture, suitable for many software-intensive, real-time computing control problem domains. It defines the types of functions needed in a real-time intelligent control system, and how these functions relate to each other.
RCS is not a system design, nor is it a specification of how to implement specific systems. RCS prescribes a hierarchical control model based on a set of well-founded engineering principles to organize system complexity. All the control nodes at all levels share a generic node model. [1]
RCS also provides a comprehensive methodology for designing, engineering, integrating, and testing control systems. Architects iteratively partition system tasks and information into finer, finite subsets that are controllable and efficient. RCS focuses on intelligent control that adapts to uncertain and unstructured operating environments. The key concerns are sensing, perception, knowledge, costs, learning, planning, and execution. [1]
A reference model architecture is a canonical form, not a system design specification. The RCS reference model architecture combines real-time motion planning and control with high level task planning, problem solving, world modeling, recursive state estimation, tactile and visual image processing, and acoustic signature analysis. In fact, the evolution of the RCS concept has been driven by an effort to include the best properties and capabilities of most, if not all, the intelligent control systems currently known in the literature, from subsumption to SOAR, from blackboards to object-oriented programming. [2]
RCS has developed into an intelligent agent architecture designed to enable any level of intelligent behavior, up to and including human levels of performance. RCS was inspired by a theoretical model of the cerebellum, the portion of the brain responsible for fine motor coordination and control of conscious motions. It was originally designed for sensory-interactive, goal-directed control of laboratory manipulators, and over three decades it has evolved into a real-time control architecture for intelligent machine tools, factory automation systems, and intelligent autonomous vehicles. [3]
RCS applies to many problem domains, including manufacturing and vehicle systems. Systems based on the RCS architecture have been designed and implemented, to varying degrees, for a wide variety of applications, including loading and unloading of parts and tools in machine tools, controlling machining workstations, performing robotic deburring and chamfering, and controlling space station telerobots, multiple autonomous undersea vehicles, unmanned land vehicles, coal mining automation systems, postal service mail handling systems, and submarine operational automation systems. [2]
RCS has evolved through a variety of versions over a number of years as understanding of the complexity and sophistication of intelligent behavior has increased. The first implementation, designed for sensory-interactive robotics, was built by Barbera in the mid-1970s. [4]
In RCS-1, the emphasis was on combining commands with sensory feedback so as to compute the proper response to every combination of goals and states. The application was to control a robot arm with a structured light vision system in visual pursuit tasks. RCS-1 was heavily influenced by biological models of the cerebellum, such as the Marr-Albus model [5] and the Cerebellar Model Arithmetic Computer (CMAC). [6] [2]
CMAC becomes a state machine when some of its outputs are fed directly back to the input, so RCS-1 was implemented as a set of state-machines arranged in a hierarchy of control levels. At each level, the input command effectively selects a behavior that is driven by feedback in stimulus-response fashion. CMAC thus became the reference model building block of RCS-1, as shown in the figure.
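The RCS-1 building block idea described above can be sketched as a small program. This is an illustrative sketch, not NIST code: all class and behavior names below are invented for the example. The point is that a memoryless mapping from (command, feedback) to output becomes a state machine once its previous output is fed back as part of the next input, and the input command then selects which feedback-driven behavior runs.

```python
# Illustrative sketch of the RCS-1 building block: a mapping whose output
# is fed back into its own input, turning a stimulus-response table into
# a state machine. All names and table entries are invented for the example.

class StateMachineBlock:
    """Maps (command, feedback, previous_output) -> next output subcommand."""

    def __init__(self, table):
        # table: {(command, feedback, prev_output): next_output}
        self.table = table
        self.prev_output = None

    def step(self, command, feedback):
        key = (command, feedback, self.prev_output)
        # Feed the output back: it becomes part of the next step's state.
        self.prev_output = self.table.get(key, self.prev_output)
        return self.prev_output

# A toy visual-pursuit behavior: the input command "track" selects responses
# driven by where the target appears in the field of view.
pursuit = StateMachineBlock({
    ("track", "target_left",   None):        "turn_left",
    ("track", "target_left",   "turn_left"): "turn_left",
    ("track", "target_center", "turn_left"): "hold",
    ("track", "target_center", None):        "hold",
})

print(pursuit.step("track", "target_left"))    # turn_left
print(pursuit.step("track", "target_center"))  # hold
```

Arranging such blocks in a hierarchy, with each block's output serving as the input command of the block below, gives the layered stimulus-response organization RCS-1 used.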
A hierarchy of these building blocks was used to implement a hierarchy of behaviors such as those observed by Tinbergen [7] and others. RCS-1 is similar in many respects to Brooks' subsumption architecture, [8] except that RCS selects behaviors before the fact through goals expressed in commands, rather than after the fact through subsumption. [2]
The next generation, RCS-2, was developed by Barbera, Fitzgerald, Kent, and others for manufacturing control in the NIST Automated Manufacturing Research Facility (AMRF) during the early 1980s. [9] [10] [11] The basic building block of RCS-2 is shown in the figure.
The H function remained a finite state machine state-table executor. The new feature of RCS-2 was the inclusion of the G function, consisting of a number of sensory processing algorithms, including structured-light and blob analysis algorithms. RCS-2 was used to define an eight-level hierarchy consisting of Servo, Coordinate Transform, E-Move, Task, Workstation, Cell, Shop, and Facility levels of control.
Only the first six levels were actually built. Two of the AMRF workstations fully implemented five levels of RCS-2. The control system for the Army Field Material Handling Robot (FMR) [12] was also implemented in RCS-2, as was the Army TMAP semi-autonomous land vehicle project. [2]
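The layered decomposition of commands in such a hierarchy can be sketched in a few lines. This is a hedged illustration under assumptions of my own: the three-level slice, the command names, and the per-level tables are invented, standing in for the state-table "H function" at each level. Each level translates its input command into a subcommand for the level below.

```python
# Illustrative sketch (not the NIST implementation): a command flows down
# an RCS-2-style hierarchy, each level decomposing its input command into
# a subcommand for the level below via its own lookup table ("H function").

LEVELS = ["Task", "E-Move", "Servo"]  # a three-level slice of the hierarchy

# Hypothetical decomposition tables, one per level.
TABLES = {
    "Task":   {"drill_hole": "move_to_hole"},
    "E-Move": {"move_to_hole": "goto_xyz"},
    "Servo":  {"goto_xyz": "joint_setpoints"},
}

def decompose(command):
    """Pass a command down the hierarchy, collecting each level's output."""
    trace = []
    for level in LEVELS:
        command = TABLES[level][command]
        trace.append((level, command))
    return trace

print(decompose("drill_hole"))
# [('Task', 'move_to_hole'), ('E-Move', 'goto_xyz'), ('Servo', 'joint_setpoints')]
```

In a real RCS-2 system each level is a feedback-driven state machine running at its own cycle rate, not a one-shot table lookup; the sketch shows only the command-decomposition chain.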
RCS-3 was designed for the NBS/DARPA Multiple Autonomous Undersea Vehicle (MAUV) project [13] and was adapted for the NASA/NBS Standard Reference Model Telerobot Control System Architecture (NASREM) developed for the space station Flight Telerobotic Servicer. [14] The basic building block of RCS-3 is shown in the figure.
The principal new features introduced in RCS-3 are the World Model and the operator interface. The inclusion of the World Model provides the basis for task planning and for model-based sensory processing. This led to refinement of the task decomposition (TD) modules, so that each has a job assigner, and a planner and executor for each of the subsystems assigned a job. This corresponds roughly to Saridis' [15] three-level control hierarchy. [2]
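The internal structure of an RCS-3 TD module can be sketched as follows. This is an assumption-laden illustration: the subsystem names, the toy decomposition, and every function name are invented, not NIST API. It shows only the shape of the idea: a job assigner splits a task among subsystems, and each subsystem's job passes through its own planner and executor.

```python
# Hedged sketch of an RCS-3 task decomposition (TD) module: a job assigner
# splits a task among subsystems, then each subsystem's job is expanded by
# a planner and issued by an executor. All names here are illustrative.

def job_assigner(task):
    """Assign one job per subsystem for the given task."""
    return {"arm": ("position", task), "gripper": ("grasp", task)}

def planner(job):
    """Expand a job into an ordered plan of subcommands."""
    action, obj = job
    return [f"{action}_{obj}_step{i}" for i in (1, 2)]

def executor(plan):
    """Issue the plan's subcommands in order (returned here for inspection)."""
    return [("issue", step) for step in plan]

def td_module(task):
    """One TD module: an assigner, then a planner and executor per subsystem."""
    return {sub: executor(planner(job))
            for sub, job in job_assigner(task).items()}

schedule = td_module("widget")
print(schedule["arm"][0])  # ('issue', 'position_widget_step1')
```

In the actual architecture the planner and executor run concurrently and replan as feedback arrives; the sketch collapses that into a single pass for clarity.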
RCS-4 has been developed since the 1990s by the NIST Robot Systems Division. The basic building block is shown in the figure. The principal new feature in RCS-4 is the explicit representation of the Value Judgment (VJ) system. VJ modules provide to the RCS-4 control system the type of functions provided to the biological brain by the limbic system. The VJ modules contain processes that compute the cost, benefit, and risk of planned actions, and that place value on objects, materials, territory, situations, events, and outcomes. Value state-variables define what goals are important and what objects or regions should be attended to, attacked, defended, assisted, or otherwise acted upon. Value judgments, or evaluation functions, are an essential part of any form of planning or learning. The application of value judgments to intelligent control systems has been addressed by George Pugh. [16] The structure and function of VJ modules are developed more completely in Albus (1991). [2] [17]
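The cost-benefit-risk evaluation performed by a VJ module can be illustrated with a minimal sketch. The scoring formula, the risk weighting, and the candidate plans below are assumptions made for the example; RCS-4 does not prescribe this particular function.

```python
# Illustrative sketch of an RCS-4-style value judgment: score candidate
# plans by benefit minus cost, penalized by risk. The linear form and the
# risk_aversion weight are assumptions for the example, not NIST's model.

def value_judgment(plan, risk_aversion=2.0):
    """Return a scalar value for a plan described by cost, benefit, and risk."""
    return plan["benefit"] - plan["cost"] - risk_aversion * plan["risk"]

candidates = [
    {"name": "direct_route",  "benefit": 10.0, "cost": 3.0, "risk": 2.5},
    {"name": "covered_route", "benefit": 8.0,  "cost": 4.0, "risk": 0.5},
]

# The planner would submit candidate plans to the VJ module and act on
# the highest-valued one.
best = max(candidates, key=value_judgment)
print(best["name"])  # covered_route
```

With a higher risk-aversion weight the safer plan wins even though its raw benefit is lower, which is exactly the kind of trade-off the VJ modules are described as making.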
RCS-4 also uses the term behavior generation (BG) in place of the RCS-3 term task decomposition (TD). The purpose of this change is to emphasize the degree of autonomous decision making. RCS-4 is designed to address highly autonomous applications in unstructured environments where high-bandwidth communications are impossible, such as unmanned vehicles operating on the battlefield, deep undersea, or on distant planets. These applications require autonomous value judgments and sophisticated real-time perceptual capabilities. RCS-3 will continue to be used for less demanding applications, such as manufacturing, construction, or telerobotics for near-space or shallow undersea operations, where environments are more structured and the communication bandwidth to a human interface is less restricted. In these applications, value judgments are often represented implicitly in task planning processes, or in human operator input. [2]
In the figure, an example of the RCS methodology for designing a control system for autonomous on-road driving under everyday traffic conditions is summarized in six steps. [18]
The result of step 3 is that each organizational unit has for each input command a state-table of ordered production rules, each suitable for execution by an extended finite state automaton (FSA). The sequence of output subcommands required to accomplish the input command is generated by situations (i.e., branching conditions) that cause the FSA to transition from one output subcommand to the next. [18]
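A state-table of ordered production rules of this kind can be sketched as a small extended FSA executor. The driving situations, subcommands, and state names below are invented for illustration; the structure (ordered rules mapping the current state and branching condition to an output subcommand and a next state) follows the description above.

```python
# Sketch of an extended-FSA state-table executor in the spirit of step 3:
# for one input command ("pass the vehicle ahead"), an ordered list of
# production rules maps (state, situation) to (subcommand, next_state).
# All rule contents here are invented for illustration.

PASS_VEHICLE = [
    # (state, situation) -> (subcommand, next_state), tried in order;
    # a situation of "none" is a default that matches anything.
    (("follow",  "lane_clear_left"),  ("change_lane_left",  "passing")),
    (("passing", "ahead_of_vehicle"), ("change_lane_right", "follow")),
    (("follow",  "none"),             ("keep_lane",         "follow")),
    (("passing", "none"),             ("keep_lane",         "passing")),
]

def fsa_step(table, state, situation):
    """Fire the first rule whose state and branching condition match."""
    for (s, cond), (subcmd, nxt) in table:
        if s == state and cond in (situation, "none"):
            return subcmd, nxt
    raise ValueError(f"no rule for {state!r}/{situation!r}")

state = "follow"
for situation in ["no_event", "lane_clear_left", "no_event", "ahead_of_vehicle"]:
    subcmd, state = fsa_step(PASS_VEHICLE, state, situation)
    print(subcmd)
# keep_lane, change_lane_left, keep_lane, change_lane_right
```

The sequence of output subcommands is generated exactly as the text describes: recognized situations act as branching conditions that move the FSA from one output subcommand to the next.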
Based on the RCS reference model architecture, NIST has developed the Real-time Control System Software Library: an archive of free C++, Java, and Ada code, scripts, tools, makefiles, and documentation developed to aid programmers of software to be used in real-time control systems, especially those using the Reference Model Architecture for Intelligent Systems Design. [19]
Subsumption architecture is a reactive robotic architecture heavily associated with behavior-based robotics which was very popular in the 1980s and 90s. The term was introduced by Rodney Brooks and colleagues in 1986. Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI.
James Sacra Albus was an American engineer, Senior NIST Fellow and founder and former chief of the Intelligent Systems Division of the Manufacturing Engineering Laboratory at the National Institute of Standards and Technology (NIST).
Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University.
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
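The blackboard control cycle described above can be illustrated with a minimal sketch. The toy problem, the two knowledge sources, and their triggers are all assumptions invented for the example; real blackboard systems use far richer scheduling.

```python
# Minimal blackboard loop (illustrative): specialist knowledge sources post
# partial solutions when their trigger matches the blackboard state. The
# toy "greet" problem and both sources are invented for the example.

blackboard = {"problem": "greet", "words": None, "solution": None}

def word_finder(bb):
    """Fires when the problem is posted but no words have been found yet."""
    if bb["problem"] == "greet" and bb["words"] is None:
        bb["words"] = ["hello", "world"]
        return True
    return False

def assembler(bb):
    """Fires once words are available, assembling the final solution."""
    if bb["words"] and bb["solution"] is None:
        bb["solution"] = " ".join(bb["words"])
        return True
    return False

knowledge_sources = [assembler, word_finder]  # order does not matter

# Control loop: give each source a chance until nothing changes.
changed = True
while changed:
    changed = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["solution"])  # hello world
```

Note that the assembler cannot fire until the word finder has posted its partial solution, even though it is tried first: each specialist contributes only when its internal constraints match the blackboard state.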
Hybrid intelligent system denotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as:
Cognitive robotics or cognitive technology is a subfield of robotics concerned with endowing a robot with intelligent behavior, by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition.
The following outline is provided as an overview of and topical guide to automation:
The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory.
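A one-dimensional CMAC can be sketched in a few lines. This is a hedged, minimal version under assumptions of my own (cell count, number of active cells, learning rate, and the simple neighboring-cell activation scheme standing in for proper overlapping tilings); it shows the core mechanism: each input activates a small set of memory cells, the output is the sum of their weights, and training spreads the error across the active cells.

```python
# Minimal 1-D CMAC sketch (illustrative): each input activates a fixed
# number of overlapping memory cells; the output is the sum of their
# weights, and training spreads the error across the active cells.
# Parameters and the activation scheme are simplifying assumptions.

class CMAC:
    def __init__(self, n_cells=50, n_active=4, lr=0.5):
        self.w = [0.0] * n_cells
        self.n_active = n_active
        self.lr = lr

    def _active(self, x):
        # Quantize x and activate n_active neighboring cells (a crude
        # stand-in for CMAC's overlapping tilings).
        base = int(x)
        return [(base + i) % len(self.w) for i in range(self.n_active)]

    def predict(self, x):
        return sum(self.w[i] for i in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for i in self._active(x):
            self.w[i] += self.lr * err / self.n_active

cmac = CMAC()
for _ in range(100):
    cmac.train(10.0, 1.0)
print(round(cmac.predict(10.0), 3))  # 1.0
```

Because nearby inputs share active cells, training at one point generalizes to its neighbors, which is the associative-memory property the article refers to.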
In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time reasoning systems that can perform complex tasks in dynamic environments. It is based on the notion of a rational agent or intelligent agent using the belief–desire–intention software model.
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
The Real-time Control System (RCS) is a software system developed by NIST based on the Real-time Control System Reference Model Architecture, that implements a generic Hierarchical control system. The RCS Software Library is an archive of free C++, Java and Ada code, scripts, tools, makefiles, and documentation developed to aid programmers of software to be used in real-time control systems.
Cyber-Physical Systems (CPS) are mechanisms controlled and monitored by computer algorithms, tightly integrated with the internet and its users. In cyber-physical systems, physical and software components are deeply intertwined, able to operate on different spatial and temporal scales, exhibit multiple and distinct behavioral modalities, and interact with each other in ways that change with context. CPS involves transdisciplinary approaches, merging theory of cybernetics, mechatronics, design and process science. CPS is closely related to embedded systems; in embedded systems, however, the emphasis tends to be more on the computational elements and less on the intense link between the computational and physical elements. CPS is also similar to the Internet of Things (IoT), sharing the same basic architecture; nevertheless, CPS presents a higher combination and coordination between physical and computational elements.
Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software.
The 4D/RCS Reference Model Architecture is a reference model for military unmanned vehicles on how their software components should be identified and organized.
The following outline is provided as an overview of and topical guide to robotics:
LinuxCNC is a free, open-source Linux software system that implements computer numerical control (CNC) capability using general purpose computers to control CNC machines. It is mainly intended to run on PC AMD x86-64 systems. Designed by various volunteer developers at linuxcnc.org, it is typically bundled as an ISO file with a modified version of Debian Linux which provides the required real-time kernel.
Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate the behavior of systems formed through the partnerships of multiple agents for the execution of tasks and the development of work products. The term “collaborative control” originated from work developed in the late 1990s and early 2000s by Fong, Thorpe, and Baur (1999). According to Fong et al., in order for robots to function in collaborative control, they must be self-reliant, aware, and adaptive. In the literature, the adjective “adaptive” is not always shown but is noted in the official sense, as it is an important element of collaborative control. The adaptation of traditional applications of control theory in teleoperations sought initially to reduce the sovereignty of “humans as controllers/robots as tools” and had humans and robots working as peers, collaborating to perform tasks and to achieve common goals. Early implementations of adaptive collaborative control centered on vehicle teleoperation. Recent uses of adaptive collaborative control cover training, analysis, and engineering applications in teleoperations between humans and multiple robots, multiple robots collaborating among themselves, unmanned vehicle control, and fault-tolerant controller design.
Smart manufacturing is a broad category of manufacturing that employs computer-integrated manufacturing, high levels of adaptability and rapid design changes, digital information technology, and more flexible technical workforce training. Other goals sometimes include fast changes in production levels based on demand, optimization of the supply chain, efficient production, and recyclability. In this concept, a smart factory has interoperable systems, multi-scale dynamic modelling and simulation, intelligent automation, strong cyber security, and networked sensors.
Gazebo is an open-source 2D/3D robotics simulator that began development in 2002. In 2017, development forked into two versions, known as "Gazebo", the original monolithic architecture, and "Ignition", which had moved to becoming a modernized collection of loosely coupled libraries. Following a trademark obstacle in 2022 regarding their use of the name "Ignition", Open Robotics took the opportunity to switch the version names, dubbing the original fork "Gazebo Classic" and the new, modern fork "Gazebo".