Real-time path planning is a term used in robotics for motion planning methods that can adapt to real-time changes in the environment. This includes everything from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously take in information from the surroundings and create a plan to avoid obstacles. [1]
These methods differ from those used by a robot such as a Roomba vacuum, which may be able to adapt to dynamic obstacles but does not have a set target. A better example is the Embark self-driving semi-truck, which has a set target location and can also adapt to a changing environment.
The targets of path planning algorithms are not limited to locations alone. Path planning methods can also create plans for stationary robots to change their poses. An example of this can be seen in various robotic arms, where path planning allows the robotic system to change its pose without colliding with itself. [2]
As a subset of motion planning, real-time path planning is an important part of robotics because it allows robots to find the optimal path to a target. The ability to find an optimal path also plays an important role in other fields such as video games and gene sequencing.
In order to create a path from a starting point to a goal point, the various areas within the simulated environment must first be classified. This allows a path to be created in a 2D or 3D space in which the robot can avoid obstacles.
The workspace is the environment that contains the robot and the various obstacles. This environment can be either two-dimensional or three-dimensional. [3]
The configuration of a robot is determined by its current position and pose. The configuration space is the set of all configurations of the robot. By containing all the possible configurations of the robot, it also represents all transformations that can be applied to the robot. [3]
Within the configuration space there are additional sets of configurations that are classified by the various algorithms.
The free space is the set of all configurations within the configuration space that do not collide with obstacles. [4]
The target space is the configuration, or set of configurations, that we want the robot to reach.
The obstacle space is the set of configurations within the configuration space that the robot is unable to move to.
The danger space is the set of configurations that the robot can move through but would prefer to avoid. Robots will often avoid these configurations unless they have no other valid path or are under a time constraint. For example, a robot would not move through a fire unless there were no other valid path to the target space. [4]
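The following sketch illustrates how these subsets might be distinguished for a point robot in a two-dimensional workspace. The obstacle and danger regions, and the helper names, are hypothetical examples chosen for illustration, not part of any particular planner.

```python
# Minimal sketch: classifying configurations of a point robot on a 2D workspace.
# The obstacle and danger regions below are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    x: float
    y: float

# Hypothetical environment: axis-aligned boxes given as (min corner, max corner).
OBSTACLES = [((3.0, 3.0), (5.0, 5.0))]
DANGER    = [((6.0, 0.0), (8.0, 2.0))]

def in_box(q: Config, box) -> bool:
    (xmin, ymin), (xmax, ymax) = box
    return xmin <= q.x <= xmax and ymin <= q.y <= ymax

def classify(q: Config) -> str:
    """Return which subset of the configuration space q belongs to."""
    if any(in_box(q, b) for b in OBSTACLES):
        return "obstacle space"   # the robot cannot occupy this configuration
    if any(in_box(q, b) for b in DANGER):
        return "danger space"     # traversable, but avoided unless necessary
    return "free space"           # collision-free configuration

print(classify(Config(1.0, 1.0)))  # free space
print(classify(Config(4.0, 4.0)))  # obstacle space
print(classify(Config(7.0, 1.0)))  # danger space
```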
Global path planning refers to methods that require prior knowledge of the robot's environment. Using this knowledge, they construct a simulated environment in which a path can be planned. [1] [5]
The rapidly exploring random tree method works by exploring possible translations from a specific configuration. By running through series of these translations, a path is created for the robot to reach the target from the starting configuration. [6]
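A minimal sketch of the rapidly exploring random tree idea for a point robot in the plane is shown below; the collision check, step size, goal bias, and iteration budget are illustrative assumptions rather than part of the method as originally published.

```python
# Minimal 2D RRT sketch for a point robot; only node positions are collision-checked,
# which is acceptable here because the step size is small.

import random
import math

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, max_iters=5000):
    """Grow a tree from start toward randomly sampled configurations."""
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # Occasionally sample the goal itself to bias growth toward it.
        sample = goal if random.random() < 0.05 else (
            random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        # Take one step from the nearest tree node toward the sample.
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < goal_tol:
            # Reconstruct the path by walking back to the start.
            path, n = [new], new
            while parent[n] is not None:
                n = parent[n]
                path.append(n)
            return path[::-1]
    return None  # no path found within the iteration budget

# Example: free space everywhere except a disk-shaped obstacle around (5, 5).
free = lambda q: math.dist(q, (5.0, 5.0)) > 1.5
print(rrt((0.0, 0.0), (9.0, 9.0), free, ((0, 10), (0, 10))))
```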
Local path planning refers to methods that take in information from their surroundings in order to generate a simulated field in which a path can be found. This allows a path to be found in real time while also adapting to dynamic obstacles. [1] [5]
The probabilistic roadmap method connects nearby configurations in order to determine a path from the starting configuration to the target configuration. The method is split into two parts: a preprocessing phase and a query phase. In the preprocessing phase, the algorithm evaluates various motions to see whether they lie in free space. In the query phase, it then connects the starting and target configurations through a variety of paths and uses a Dijkstra shortest-path query to find the optimal one. [7] [8]
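The two phases can be sketched roughly as follows for a point robot in the plane; the sampling count, neighbour count, and collision check are illustrative assumptions.

```python
# Minimal probabilistic roadmap sketch: preprocessing (sampling + connecting) and
# query (Dijkstra over the roadmap). Geometry and parameters are illustrative.

import random, math, heapq

def prm(start, goal, is_free, bounds, n_samples=200, k=8):
    (xmin, xmax), (ymin, ymax) = bounds

    def edge_free(a, b, steps=20):
        # Check intermediate points along the straight segment a -> b.
        return all(is_free((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
                   for t in (i / steps for i in range(steps + 1)))

    # Preprocessing phase: sample collision-free configurations.
    nodes = [start, goal]
    while len(nodes) < n_samples:
        q = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        if is_free(q):
            nodes.append(q)

    # Connect each node to its k nearest neighbours with collision-free edges.
    graph = {n: [] for n in nodes}
    for n in nodes:
        for m in sorted(nodes, key=lambda o: math.dist(n, o))[1:k + 1]:
            if edge_free(n, m):
                w = math.dist(n, m)
                graph[n].append((w, m))
                graph[m].append((w, n))

    # Query phase: Dijkstra's shortest path from start to goal over the roadmap.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for w, v in graph[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None

free = lambda q: math.dist(q, (5.0, 5.0)) > 2.0  # one disk-shaped obstacle
print(prm((0.0, 0.0), (9.0, 9.0), free, ((0, 10), (0, 10))))
```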
The evolutionary artificial potential field method uses a mix of artificial attractive and repulsive forces to plan a path for the robot. The attractive forces originate from the target, drawing the path toward it, while the repulsive forces come from the various obstacles the robot may encounter. Using this mix of attractive and repulsive forces, the algorithm can find the optimal path. [9]
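A minimal sketch of the underlying artificial potential field idea is shown below, without the evolutionary tuning of the parameters; the gains, obstacle layout, influence radius, and step size are illustrative assumptions.

```python
# Minimal artificial potential field sketch: the robot follows the combined
# attractive (goal) and repulsive (obstacle) forces one small step at a time.

import math

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=10.0, influence=2.0, step=0.05):
    # Attractive force pulls the robot straight toward the goal.
    fx = k_att * (goal[0] - q[0])
    fy = k_att * (goal[1] - q[1])
    # Repulsive forces push the robot away from obstacles within the influence radius.
    for ox, oy in obstacles:
        d = math.dist(q, (ox, oy))
        if 1e-6 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * (q[0] - ox) / d
            fy += mag * (q[1] - oy) / d
    norm = math.hypot(fx, fy)
    return (q[0] + step * fx / norm, q[1] + step * fy / norm) if norm > 0 else q

q, goal = (0.0, 0.0), (8.0, 8.0)
obstacles = [(4.0, 4.2)]   # slightly off the straight line, so the path curves around it
for _ in range(500):
    q = apf_step(q, goal, obstacles)
    if math.dist(q, goal) < 0.1:
        break
print(q)  # should end near the goal
```

Like all potential field planners, this sketch can get trapped in local minima when the forces cancel; the evolutionary variant described above adjusts the field parameters to mitigate that.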
The indicative route method uses a control path toward the target together with an attraction point located at the target. Algorithms are often used to find the control path, which is typically the shortest path that maintains a minimum clearance from obstacles. As the robot follows the control path, the attraction point at the target configuration leads the robot toward the target. [10]
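A rough sketch of this behaviour, assuming a precomputed control path given as a polyline and a simple rule for advancing an attraction point along it, is shown below; the route, lookahead distance, and step size are hypothetical and stand in for the details of the published method.

```python
# Minimal sketch: an attraction point slides along a precomputed control path
# (the indicative route) while the robot repeatedly steps toward it.

import math

route = [(0.0, 0.0), (3.0, 1.0), (6.0, 1.0), (8.0, 4.0)]  # hypothetical control path to the target

def follow_indicative_route(robot, route, step=0.2, lookahead=1.0):
    idx = 0
    trace = [robot]
    while math.dist(robot, route[-1]) > step:
        # Advance the attraction point while the next waypoint is within the lookahead distance.
        while idx + 1 < len(route) and math.dist(robot, route[idx + 1]) < lookahead:
            idx += 1
        attraction = route[min(idx + 1, len(route) - 1)]
        d = math.dist(robot, attraction)
        robot = (robot[0] + step * (attraction[0] - robot[0]) / d,
                 robot[1] + step * (attraction[1] - robot[1]) / d)
        trace.append(robot)
    return trace

path = follow_indicative_route((0.0, 0.5), route)
print(path[-1])  # ends near the target (8, 4)
```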
The modified indicative routes and navigation method assigns weights to the different paths the robot can take from its current position. For example, a rock might be given a high weight such as 50, while an open path might be given a lower weight such as 2. This creates a set of weighted regions in the environment, which allows the robot to decide on a path toward the target, as in the sketch below. [11]
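The sketch below illustrates planning over such weighted regions in the simplest possible setting: a grid of hypothetical weights and Dijkstra's algorithm picking the cheapest route. It is an illustration of the weighted-region idea only, not of the full method cited above.

```python
# Minimal weighted-region sketch: each grid cell carries a traversal weight
# (high for rocks, low for open ground) and Dijkstra picks the cheapest route.

import heapq

# Hypothetical weighted layout: 2 = open path, 50 = rock (undesirable but passable).
W = [[ 2,  2,  2,  2,  2],
     [ 2, 50, 50, 50,  2],
     [ 2,  2,  2, 50,  2],
     [50, 50,  2, 50,  2],
     [ 2,  2,  2,  2,  2]]

def cheapest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            path, node = [goal], goal
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1], d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]          # entering a cell costs its weight
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None, float("inf")

path, cost = cheapest_path(W, (0, 0), (4, 4))
print(cost, path)  # the route detours around the rocks because of their high weight
```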
For many robots the number of degrees of freedom is no greater than three. Humanoid robots, on the other hand, have a number of degrees of freedom similar to that of a human body, which increases the complexity of path planning. For example, a single leg of a humanoid robot can have around 12 degrees of freedom. The increased complexity comes from the greater possibility of the robot colliding with itself. Real-time path planning is important for the motion of humanoid robots because it allows various parts of the robot to move at the same time while avoiding collisions with the other parts. [12]
For example, if we look at our own arms, we can see that our hands can touch our shoulders. For a robotic arm this may pose a risk if parts of the arm were to collide unintentionally with each other. This is why path planning algorithms are needed to prevent such accidental collisions.
Self-driving vehicles are a form of mobile robot that utilizes real-time path planning. A vehicle will often first use global path planning to decide which roads to take to the target. When these vehicles are on the road they must constantly adapt to the changing environment, and this is where local path planning methods allow the vehicle to plan a safe and fast path to the target location. [13]
An example of this is the Embark self-driving semi-truck, which uses an array of sensors to take in information about its environment. The truck has a predetermined target location and uses global path planning to generate a path to that target. While the truck is on the road, it uses its sensors alongside local path planning methods to navigate around obstacles and safely reach the target location. [14]
Video games often contain a variety of non-player characters that move around the game world, which requires path planning. These characters must have paths planned for them, as they need to know where to move and how to move there.
For example, in the game Minecraft there are hostile mobs that track and follow the player in order to kill the player. This requires real-time path planning as the mob must avoid various obstacles while following the player. Even if the player were to add additional obstacles in the way of the mob, the mob would change its path to still reach the player.
Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
A* is a graph traversal and pathfinding algorithm that is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. Given a weighted graph, a source node and a goal node, the algorithm finds the shortest path from source to goal.
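A minimal sketch of A* on a 4-connected grid with a Manhattan-distance heuristic is shown below; the grid layout is an illustrative assumption.

```python
# Minimal A* sketch on a 4-connected grid with unit move costs and a
# Manhattan-distance heuristic (admissible for this grid).

import heapq

GRID = ["....#....",
        "....#....",
        "....#....",
        ".........",
        "....#...."]   # '#' marks an obstacle cell

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_set = [(h(start), 0, start)]       # entries are (f = g + h, g, node)
    g, prev = {start: 0}, {}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)], prev[(nr, nc)] = ng, node
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

print(a_star(GRID, (0, 0), (0, 8)))  # the path routes around the wall of '#' cells
```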
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
Trajectory optimization is the process of designing a trajectory that minimizes some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).
Motion planning, also known as path planning, is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics and computer games.
Obstacle avoidance, in robotics, is a critical aspect of autonomous navigation and control systems. It is the capability of a robot or an autonomous system/machine to detect and circumvent obstacles in its path to reach a predefined destination. This technology plays a pivotal role in various fields, including industrial automation, self-driving cars, drones, and even space exploration. Obstacle avoidance enables robots to operate safely and efficiently in dynamic and complex environments, reducing the risk of collisions and damage.
Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.
The Euclidean shortest path problem is a problem in computational geometry: given a set of polyhedral obstacles in a Euclidean space, and two points, find the shortest path between the points that does not intersect any of the obstacles.
Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms, computational models of biological neural networks and actual biological systems. Such neural systems can be embodied in machines with mechanic or any other forms of physical actuation. This includes robots, prosthetic or wearable systems but also, at smaller scale, micro-machines and, at the larger scales, furniture and infrastructures.
A rapidly exploring random tree (RRT) is an algorithm designed to efficiently search nonconvex, high-dimensional spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem. RRTs were developed by Steven M. LaValle and James J. Kuffner Jr. They easily handle problems with obstacles and differential constraints and have been widely used in autonomous robotic motion planning.
The probabilistic roadmap planner is a motion planning algorithm in robotics, which solves the problem of determining a path between a starting configuration of the robot and a goal configuration while avoiding collisions.
D* is any one of three related incremental search algorithms: the original D*, Focused D*, and D* Lite.
In robotics, Vector Field Histogram (VFH) is a real time motion planning algorithm proposed by Johann Borenstein and Yoram Koren in 1991. The VFH utilizes a statistical representation of the robot's environment through the so-called histogram grid, and therefore places great emphasis on dealing with uncertainty from sensor and modeling errors. Unlike other obstacle avoidance algorithms, VFH takes into account the dynamics and shape of the robot, and returns steering commands specific to the platform. While considered a local path planner, i.e., not designed for global path optimality, the VFH has been shown to produce near optimal paths.
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.
In robotics and motion planning, a velocity obstacle, commonly abbreviated VO, is the set of all velocities of a robot that will result in a collision with another robot at some moment in time, assuming that the other robot maintains its current velocity. If the robot chooses a velocity inside the velocity obstacle, then the two robots will eventually collide; if it chooses a velocity outside the velocity obstacle, such a collision is guaranteed not to occur.
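A minimal sketch of testing whether a candidate velocity lies inside the velocity obstacle of another robot is shown below, assuming both robots are disks moving at constant velocity; the geometry and example values are illustrative assumptions.

```python
# Minimal velocity-obstacle test for two disk-shaped robots: a candidate velocity
# for robot A is inside the velocity obstacle of B if the relative-velocity ray
# intersects the disk around B inflated by both radii.

import math

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r_a, r_b):
    """True if robot A moving with v_a will eventually collide with B (B keeps v_b)."""
    rel_p = (p_b[0] - p_a[0], p_b[1] - p_a[1])   # vector from A to B
    rel_v = (v_a[0] - v_b[0], v_a[1] - v_b[1])   # velocity of A relative to B
    r = r_a + r_b                                # combined radius of the two disks
    # Solve |t * rel_v - rel_p|^2 = r^2 for t >= 0 (ray-circle intersection).
    a = rel_v[0] ** 2 + rel_v[1] ** 2
    b = -2 * (rel_p[0] * rel_v[0] + rel_p[1] * rel_v[1])
    c = rel_p[0] ** 2 + rel_p[1] ** 2 - r ** 2
    if a == 0:
        return c <= 0            # no relative motion: collide only if already overlapping
    disc = b * b - 4 * a * c
    if disc < 0:
        return False             # the relative-velocity ray misses the inflated disk
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    return t2 >= 0               # collision occurs at some future (non-negative) time

# Robot A heading straight at a stationary robot B two metres away, then sideways.
print(in_velocity_obstacle((0, 0), (1, 0), (2, 0), (0, 0), 0.3, 0.3))  # True
print(in_velocity_obstacle((0, 0), (0, 1), (2, 0), (0, 0), 0.3, 0.3))  # False
```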
Any-angle path planning algorithms are pathfinding algorithms that search for a Euclidean shortest path between two points on a grid map while allowing the turns in the path to have any angle. The result is a path that cuts directly through open areas and has relatively few turns. More traditional pathfinding algorithms such as A* either lack in performance or produce jagged, indirect paths.
Oussama Khatib is a roboticist and a professor of computer science at Stanford University, and a Fellow of the IEEE. He is credited with seminal work in areas ranging from robot motion planning and control, human-friendly robot design, to haptic interaction and human motion synthesis. His work's emphasis has been to develop theories, algorithms, and technologies that control robot systems by using models of their physical dynamics. These dynamic models are used to derive optimal controllers for complex robots that interact with the environment in real-time.
The Robotics Collaborative Technology Alliance (R-CTA) was a research program initiated and sponsored by the US Army Research Laboratory. The purpose was to "bring together government, industrial, and academic institutions to address research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles." Collaborative Technology and Research Alliances was a term for partnerships between Army laboratories and centers, private industry and academia for performing research and technology development intended to benefit the US Army. The partnerships were funded by the US Army.
The problem of Multi-Agent Pathfinding (MAPF) is an instance of multi-agent planning and consists in the computation of collision-free paths for a group of agents from their locations to their assigned targets. It is an optimization problem, since the aim is to find those paths that optimize a given objective function, usually defined as the number of time steps until all agents reach their goal cells. MAPF is the multi-agent generalization of the pathfinding problem, and it is closely related to the shortest path problem in the context of graph theory.
Linear-quadratic regulator rapidly exploring random tree (LQR-RRT) is a sampling-based algorithm for kinodynamic planning. A solver produces random actions that form a funnel in the state space. The generated tree is the action sequence that fulfills the cost function. The restriction is that a prediction model, based on differential equations, must be available to simulate the physical system. The method is an extension of the rapidly exploring random tree, a widely used approach to motion planning.