Navigation function

Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.

Potential functions as navigation functions

[Figure: A potential function. Imagine dropping a marble on the surface. It will avoid the three obstacles and eventually reach the goal position in the center.]

Potential functions assume that the environment, or workspace, is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot then only needs to follow the negative gradient of the surface.
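As a minimal sketch of the marble analogy (the quadratic attraction, Gaussian repulsion model, obstacle layout, and step size are all assumptions made for illustration, not part of any standard construction), a robot can descend such a surface by following the numerical negative gradient:

```python
import numpy as np

# Toy potential: quadratic attraction to the goal plus a Gaussian
# repulsive bump around each obstacle center (all values assumed).
goal = np.array([0.0, 0.0])
obstacles = [np.array([1.5, 1.0]), np.array([-1.0, 2.0]), np.array([0.5, -1.5])]

def potential(x):
    attract = 0.5 * np.dot(x - goal, x - goal)
    repel = sum(np.exp(-np.dot(x - o, x - o) / 0.2) for o in obstacles)
    return attract + 5.0 * repel

def neg_grad(x, eps=1e-5):
    # Central-difference gradient; an analytic gradient works equally well.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (potential(x + d) - potential(x - d)) / (2 * eps)
    return -g

x = np.array([2.0, 2.5])          # start configuration
for _ in range(5000):             # follow the negative gradient
    x = x + 0.01 * neg_grad(x)
print(x)                          # ends near the goal, barring local minima
```

Note that a generic potential field of this kind can have spurious local minima that trap the robot; a proper navigation function is constructed so that the goal is the only minimum.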

We can formalize this concept mathematically as follows: let \(X\) be the state space of all possible configurations of a robot, and let \(X_G \subset X\) denote the goal region of the state space.

Then a potential function \(\phi : X \to [0, \infty]\) is called a (feasible) navigation function if [1]

  1. \(\phi(x) = 0\) for all \(x \in X_G\).
  2. \(\phi(x) = \infty\) if and only if no point in \(X_G\) is reachable from \(x\).
  3. For every reachable state \(x \in X \setminus X_G\), the local operator produces a state \(x'\) for which \(\phi(x') < \phi(x)\).
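On a discretized state space, these conditions can be satisfied constructively with a breadth-first "wavefront" expansion from the goal. The sketch below is a minimal illustration assuming a toy occupancy grid and four-connected motion; the map itself is invented for the example.

```python
from collections import deque

INF = float("inf")

def wavefront(grid, goal):
    """Grid wavefront: phi[goal] = 0, phi stays inf where the goal is
    unreachable, and every reachable cell has a neighbor with strictly
    smaller phi (the 'local operator' simply steps to that neighbor)."""
    rows, cols = len(grid), len(grid[0])
    phi = [[INF] * cols for _ in range(rows)]
    phi[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and phi[nr][nc] == INF:
                phi[nr][nc] = phi[r][c] + 1
                queue.append((nr, nc))
    return phi

# 0 = free, 1 = obstacle (toy map); goal in the top-left corner
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
for row in wavefront(grid, (0, 0)):
    print(row)
```

The stored value decreases strictly along some neighbor direction at every reachable cell, so greedily stepping to the smallest neighbor realizes the local operator.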

Probabilistic navigation function

The probabilistic navigation function is an extension of the classical navigation function to static stochastic scenarios. The function is defined by a permitted collision probability, which limits the risk during motion. The Minkowski sum used in the classical definition is replaced with a convolution of the geometries and the probability density functions of the locations. Denoting the target position by \(x_d\), the probabilistic navigation function is defined as [2]

\[ \varphi(x) = \frac{\gamma_d(x)}{\left[ \gamma_d^K(x) + \beta(x) \right]^{1/K}} \]

where \(K\) is a predefined constant, as in the classical navigation function, which ensures the Morse nature of the function. \(\gamma_d(x)\) is the distance to the target position \(x_d\), and \(\beta(x)\) takes into account all obstacles, defined as

\[ \beta(x) = \prod_{i} \beta_i(x) \]

where \(\beta_i(x)\) is based on the probability for a collision at location \(x\). The probability for a collision is limited by a predetermined value \(\Delta\), meaning

\[ P_i(x) \le \Delta \quad \text{and} \quad \beta_i(x) = \Delta - P_i(x), \]

where \(P_i(x)\) is the probability of colliding with the i-th obstacle. A map \(\varphi\) is said to be a probabilistic navigation function if it satisfies the following conditions:

  1. It is a navigation function.
  2. The probability for a collision is bounded by a predefined probability \(\Delta\).
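As a numeric illustration of the definition above (the Gaussian collision model, the values of \(\Delta\) and \(K\), and the obstacle positions are all invented for this example), the function can be evaluated pointwise:

```python
import numpy as np

K = 4            # Morse-smoothing exponent (assumed value)
DELTA = 0.1      # permitted collision probability (assumed value)
x_d = np.array([0.0, 0.0])   # target position

def collision_probs(x):
    # Stand-in for the real convolution of the geometries with the
    # location PDFs: here, a Gaussian bump per obstacle, capped by DELTA.
    centers = [np.array([1.0, 1.0]), np.array([-1.0, 0.5])]
    return [DELTA * np.exp(-np.dot(x - c, x - c)) for c in centers]

def pnf(x):
    gamma_d = np.dot(x - x_d, x - x_d)               # (squared) distance to target
    betas = [DELTA - p for p in collision_probs(x)]  # beta_i = Delta - P_i
    beta = np.prod(betas)
    return gamma_d / (gamma_d ** K + beta) ** (1.0 / K)

print(pnf(np.array([0.5, 0.5])))   # 0 exactly at the target, grows away from it
```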

Optimal navigation function

While for certain applications it suffices to have a feasible navigation function, in many cases it is desirable to have an optimal navigation function with respect to a given cost functional \(J\). Formalized as an optimal control problem, we can write

\[ \text{minimize} \quad J(x_{1:T}, u_{1:T}) = \sum_{t=1}^{T} L(x_t, u_t) \quad \text{subject to} \quad x_{t+1} = f(x_t, u_t), \]

whereby \(x_t\) is the state, \(u_t\) is the control to apply, \(L(x_t, u_t)\) is the cost incurred at state \(x_t\) when applying control \(u_t\), and \(f\) models the transition dynamics of the system.

Applying Bellman's principle of optimality, the optimal cost-to-go function is defined as

\[ \phi(x_t) = \min_{u_t \in U(x_t)} \Big\{ L(x_t, u_t) + \phi\big(f(x_t, u_t)\big) \Big\} \]

Together with the axioms defined above, we can define the optimal navigation function as

  1. \(\phi(x) = 0\) for all \(x \in X_G\).
  2. \(\phi(x) = \infty\) if and only if no point in \(X_G\) is reachable from \(x\).
  3. For every reachable state \(x \in X \setminus X_G\), the local operator produces a state \(x'\) for which \(\phi(x') < \phi(x)\).
  4. \(\phi(x_t) = \min_{u_t \in U(x_t)} \big\{ L(x_t, u_t) + \phi(f(x_t, u_t)) \big\}\).

Even though a navigation function is an example of reactive control, it can also be utilized for optimal control problems, which include planning capabilities. [3]
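A minimal sketch of computing such an optimal navigation function on a finite state space is value iteration on the Bellman recursion above; the grid world, unit step costs, and blocked cells are assumptions made for the example.

```python
INF = float("inf")

def value_iteration(states, goal, actions, f, L, iters=100):
    """Compute phi(x) = min_u { L(x, u) + phi(f(x, u)) } by fixed-point
    iteration; phi is 0 on the goal and stays inf for unreachable states."""
    phi = {s: (0.0 if s == goal else INF) for s in states}
    for _ in range(iters):
        for s in states:
            if s == goal:
                continue
            candidates = [L(s, a) + phi[f(s, a)] for a in actions(s)]
            if candidates:
                phi[s] = min(min(candidates), phi[s])
    return phi

# Toy 4x4 grid: states are (row, col); two blocked cells; unit step cost.
blocked = {(1, 1), (1, 2)}
states = [(r, c) for r in range(4) for c in range(4) if (r, c) not in blocked]

def actions(s):
    r, c = s
    moves = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return [m for m in moves if m in set(states)]

phi = value_iteration(states, goal=(0, 0), actions=actions,
                      f=lambda s, a: a, L=lambda s, a: 1.0)
print(phi[(3, 3)])   # minimal number of steps from the far corner
```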

Stochastic navigation function

If we assume that the transition dynamics of the system or the cost function are subject to noise, we obtain a stochastic optimal control problem with a cost \(J(x_t, u_t)\) and stochastic dynamics \(f\). In the field of reinforcement learning, the cost is replaced by a reward function \(R(x_t, u_t)\) and the dynamics by the transition probabilities \(P(x_{t+1} \mid x_t, u_t)\).
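In the stochastic case, the Bellman step minimizes an expected cost over the transition probabilities instead of a deterministic successor cost. The sketch below shows that change on a toy three-state problem whose transition table and costs are invented for illustration:

```python
# Stochastic value iteration: phi(x) = min_u E[ L(x, u) + phi(x') ],
# with x' drawn from P(x' | x, u). All numbers below are toy values.
states = ["s0", "s1", "goal"]
actions = ["a0", "a1"]

# P[(x, u)] -> list of (next_state, probability); each row sums to 1.
P = {
    ("s0", "a0"): [("s1", 0.8), ("s0", 0.2)],
    ("s0", "a1"): [("goal", 0.5), ("s0", 0.5)],
    ("s1", "a0"): [("goal", 0.9), ("s1", 0.1)],
    ("s1", "a1"): [("s0", 1.0)],
}
L = {k: 1.0 for k in P}   # unit cost per step

phi = {"s0": 0.0, "s1": 0.0, "goal": 0.0}
for _ in range(200):
    for s in ["s0", "s1"]:
        phi[s] = min(
            L[(s, a)] + sum(p * phi[t] for t, p in P[(s, a)])
            for a in actions
        )
print(phi)   # expected cost-to-go from each state
```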

References

  1. LaValle, Steven M., Planning Algorithms, Chapter 8.
  2. Hacohen, Shlomi; Shoval, Shraga; Shvalb, Nir (2019). "Probability Navigation Function for Stochastic Static Environments". International Journal of Control, Automation and Systems. 17 (8): 2097–2113. doi:10.1007/s12555-018-0563-2. S2CID 164509949.
  3. Savkin, Andrey V.; Matveev, Alexey S.; Hoy, Michael (25 September 2015). Safe Robot Navigation Among Moving and Steady Obstacles. Elsevier Science. pp. 47–. ISBN 978-0-12-803757-7.