Stutter bisimulation

In theoretical computer science, a stutter bisimulation is a relation between two transition systems, abstract machines that model computation. It is defined coinductively and generalizes the idea of bisimulations. A bisimulation matches up the states of a machine such that individual transitions correspond; a stutter bisimulation allows a transition to be matched to a finite path fragment of "stuttering" steps. [1]

Definition

In Principles of Model Checking, Baier and Katoen define a stutter bisimulation for a single transition system and later adapt it to a relation on two transition systems. A stutter bisimulation for a transition system $TS = (S, \mathit{Act}, \rightarrow, I, \mathit{AP}, L)$ is a binary relation $R$ on $S$ such that for all $(s_1, s_2) \in R$:

  1. $L(s_1) = L(s_2)$; that is, $s_1$ and $s_2$ have the same labels.
  2. If $s_1 \rightarrow s_1'$ is a valid transition and $(s_1', s_2)$ is not in $R$, then there exists a finite path fragment $s_2\,u_1\,u_2 \ldots u_n\,s_2'$ (with $n \geq 0$) such that each pair $(s_1, u_i)$ is in $R$ and $(s_1', s_2')$ is in $R$.
  3. If $s_2 \rightarrow s_2'$ is a valid transition and $(s_1, s_2')$ is not in $R$, then there exists a finite path fragment $s_1\,v_1\,v_2 \ldots v_n\,s_1'$ (with $n \geq 0$) such that each pair $(v_i, s_2)$ is in $R$ and $(s_1', s_2')$ is in $R$.
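
The definition can be made concrete with a small brute-force check. The following Python sketch (an illustration under assumed state names and helper functions, not code from Baier and Katoen) verifies the three conditions for a candidate relation R on a four-state system in which state "b" must stutter through "b1" to match the single transition of state "a":

    # Hypothetical four-state transition system, not from the book:
    # "a" reaches "c" in one step; "b" reaches "c" only via the stutter
    # step "b1", which carries the same label as "b".
    label = {"a": "red", "b": "red", "b1": "red", "c": "green"}
    post = {"a": {"c"}, "b": {"b1"}, "b1": {"c"}, "c": {"c"}}

    def exists_fragment(start, mid_ok, end_ok):
        """True if some finite path fragment start u1 ... un end (n >= 0)
        has every intermediate state ui satisfying mid_ok and a final
        state satisfying end_ok. BFS over the finite state space."""
        frontier, seen = [start], {start}
        while frontier:
            nxt = []
            for u in frontier:
                for t in post[u]:
                    if end_ok(t):
                        return True
                    if mid_ok(t) and t not in seen:
                        seen.add(t)
                        nxt.append(t)
            frontier = nxt
        return False

    def is_stutter_bisimulation(R):
        for s1, s2 in R:
            if label[s1] != label[s2]:   # condition 1: equal labels
                return False
            for t1 in post[s1]:          # condition 2: s2 matches s1 -> t1
                if (t1, s2) not in R and not exists_fragment(
                        s2, lambda u: (s1, u) in R, lambda e: (t1, e) in R):
                    return False
            for t2 in post[s2]:          # condition 3: s1 matches s2 -> t2
                if (s1, t2) not in R and not exists_fragment(
                        s1, lambda v: (v, s2) in R, lambda e: (e, t2) in R):
                    return False
        return True

    # Identity pairs plus the symmetric pairs relating a, b, and b1.
    R = {(s, s) for s in label} | {("a", "b"), ("b", "a"), ("a", "b1"),
                                   ("b1", "a"), ("b", "b1"), ("b1", "b")}
    print(is_stutter_bisimulation(R))  # prints: True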

Generalizations

A generalization, the divergent stutter bisimulation, can be used to simplify the state space of a system, with the tradeoff that the truth values of statements using the linear temporal logic (LTL) operator "next" may change. [2] A robust stutter bisimulation allows uncertainty over transitions in the system. [3]
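
To see why the "next" operator is sensitive to this kind of abstraction, consider an illustrative example (not taken from the cited papers): let $s$ be a state labelled $\{a\}$ and $t$ a state labelled $\{b\}$, and compare the stutter-equivalent paths $\pi_1 = s\,t\,t\,t \ldots$ and $\pi_2 = s\,s\,t\,t \ldots$. The LTL formula $\bigcirc b$ ("in the next step, $b$") holds on $\pi_1$ but not on $\pi_2$, whereas next-free properties such as $\Diamond b$ ("eventually $b$") hold on both.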

References

  1. Baier, Christel; Katoen, Joost-Pieter (2008). Principles of Model Checking. Cambridge, Massachusetts: The MIT Press. pp. 536–580.
  2. Mohajerani, Sahar; Malik, Robi; Wintenberg, Andrew; Lafortune, Stéphane; Ozay, Necmiye (2021). "Divergent stutter bisimulation abstraction for controller synthesis with linear temporal logic specifications". Automatica. 130. doi:10.1016/j.automatica.2021.109723. hdl:10289/14366.
  3. Krook, Jonas; Malik, Robi; Mohajerani, Sahar; Fabian, Martin (2024). "Robust stutter bisimulation for abstraction and controller synthesis with disturbance". Automatica. 160. doi:10.1016/j.automatica.2023.111394. hdl:10289/16942.