The X-machine (XM) is a theoretical model of computation introduced by Samuel Eilenberg in 1974. [1] The X in "X-machine" represents the fundamental data type on which the machine operates; for example, a machine that operates on databases (objects of type database) would be a database-machine. The X-machine model is structurally the same as the finite-state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. Crossing a transition is equivalent to applying the relation that labels it (computing a set of changes to the data type X), and traversing a path in the machine corresponds to applying all the associated relations, one after the other.
Eilenberg's original X-machine was a completely general theoretical model of computation (subsuming the Turing machine, for example), which admitted deterministic, non-deterministic and non-terminating computations. His seminal work [1] presented many variants of the basic X-machine model, each of which generalized the finite-state machine in a slightly different way.
In the most general model, an X-machine is essentially a "machine for manipulating objects of type X". Suppose that X is some datatype, called the fundamental datatype, and that Φ is a set of (partial) relations φ: X → X. An X-machine is a finite-state machine whose arrows are labelled by relations in Φ. In any given state, a transition is enabled if the current value stored in X lies in the domain of its associated relation φi; more than one transition may be enabled at once, and in each cycle all enabled transitions are assumed to be taken. Each recognised path through the machine generates a sequence φ1, ..., φn of relations, and the composition φ1 ∘ ... ∘ φn of these relations is called the path relation corresponding to that path. The behaviour of the X-machine is defined to be the union of the behaviours computed by all of its path relations. In general this is non-deterministic, since applying any relation computes a set of outcomes on X; in the formal model, all possible outcomes are considered together, in parallel.
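A minimal executable sketch may help fix ideas. The names below (XMachine, behaviours) are illustrative rather than taken from the literature, and each relation is encoded as a set-valued function, so an empty image set marks a value outside the relation's domain:

```python
class XMachine:
    """A sketch of the general model: a finite-state machine whose arrows
    are labelled by partial relations phi: X -> X. Each relation is
    modelled as a function from a value of X to the (possibly empty) set
    of its images; an empty image set means the value lies outside the
    relation's domain, so that arrow is not enabled."""

    def __init__(self, initial, finals, arrows):
        self.initial = initial      # start state
        self.finals = finals        # set of halting states
        self.arrows = arrows        # list of (src, phi, dst) triples

    def behaviours(self, x, state=None, depth=8):
        """Union, over all recognised paths, of the path relations
        phi_1 ∘ ... ∘ phi_n. Exploration is depth-bounded here because
        the general model admits non-terminating computations."""
        state = self.initial if state is None else state
        if state in self.finals:
            yield x
        if depth == 0:
            return
        for src, phi, dst in self.arrows:
            if src == state:
                for image in phi(x):        # every outcome, in parallel
                    yield from self.behaviours(image, dst, depth - 1)
```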
For practical purposes, an X-machine should describe some finite computation. An encoding function α: Y → X converts from some input data type Y into the initial state of X, and a decoding function β: X → Z converts back from the final state(s) of X into some output data type Z. Once the initial state of X is populated, the X-machine runs to completion and the outputs are then observed. In general, a machine may deadlock (be blocked), livelock (never halt), or perform one or more complete computations. For this reason, more recent research has focused on deterministic X-machines, whose behaviour can be controlled and observed more precisely.
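Continuing the sketch above (alpha, beta and run are again illustrative names), the encoding and decoding functions wrap the machine as follows:

```python
def run(machine, alpha, beta, y, depth=8):
    """Encode the input y into X, let the machine run, and decode every
    halting value of X back out. An empty result set models a machine
    that deadlocks (or fails to halt within the exploration bound)."""
    return {beta(x) for x in machine.behaviours(alpha(y), depth=depth)}
```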
A compiler with a peephole optimizer can be thought of as a machine for optimizing program structure. In this Optimizer-machine, the encoding function α takes source code from the input-type Y (the program source) and loads it into the memory-type X (a parse tree). Suppose that the machine has three states, called FindIncrements, FindSubExprs and Completed. The machine starts in the initial state FindIncrements, which is linked to the other states via the following transitions:
FindIncrements →DoIncrement→ FindIncrements
FindIncrements →SkipIncrement→ FindSubExprs
FindSubExprs →DoSubExpr→ FindSubExprs
FindSubExprs →SkipSubExpr→ Completed
The relation DoIncrement maps a parsed subtree corresponding to "x := x + 1" into the optimized subtree "++x". The relation DoSubExpr maps a parse tree containing multiple occurrences of the same expression "x + y ... x + y" into an optimized version with a local variable to store the repeated computation "z := x + y; ... z ... z". These relations are only enabled if X contains the domain values (subtrees) on which they operate. The remaining relations SkipIncrement and SkipSubExpr are nullops (identity relations) enabled in the complementary cases.
The Optimizer-machine then runs to completion: while in the FindIncrements state it first converts trivial additions into in-place increments, then it moves on to the FindSubExprs state and performs a series of common sub-expression removals, after which it moves to the final state Completed. The decoding function β then maps from the memory-type X (the optimized parse tree) into the output-type Z (optimized machine code).
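A toy rendering of this example, reusing the XMachine class and run helper sketched earlier: the parse tree is flattened to a tuple of statement strings purely for brevity, and the common sub-expression relations are stubbed out.

```python
import re

def do_increment(prog):
    """DoIncrement: rewrite one statement 'v := v + 1' to '++v'.
    Enabled (non-empty image) only while such a statement remains."""
    return {prog[:i] + ("++" + m.group(1),) + prog[i + 1:]
            for i, stmt in enumerate(prog)
            if (m := re.fullmatch(r"(\w+) := \1 \+ 1", stmt))}

def skip_increment(prog):
    """SkipIncrement: identity relation, enabled in the complementary case."""
    return set() if do_increment(prog) else {prog}

def do_subexpr(prog):
    return set()        # sub-expression elimination elided in this toy

def skip_subexpr(prog):
    return {prog}       # identity relation

optimizer = XMachine(
    initial="FindIncrements", finals={"Completed"},
    arrows=[("FindIncrements", do_increment,   "FindIncrements"),
            ("FindIncrements", skip_increment, "FindSubExprs"),
            ("FindSubExprs",   do_subexpr,     "FindSubExprs"),
            ("FindSubExprs",   skip_subexpr,   "Completed")])

print(run(optimizer, alpha=tuple, beta="; ".join, y=["x := x + 1", "y := 2"]))
# prints {'++x; y := 2'}
```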
When referring to Eilenberg's original model, "X-machine" is typically written with a lower-case "m", because the sense is "any machine for processing X". When referring to later specific models, the convention is to use a capital "M" as part of the proper name of that variant.
Interest in the X-machine was revived in the late 1980s by Mike Holcombe, [2] who noticed that the model was ideal for software formal specification purposes, because it cleanly separates control flow from processing. Provided one works at a sufficiently abstract level, the control flows in a computation can usually be represented as a finite-state machine, so to complete the X-machine specification all that remains is to specify the processing associated with each of the machine's transitions. The structural simplicity of the model makes it extremely flexible; other early illustrations of the idea included Holcombe's specification of human-computer interfaces, [3] his modelling of processes in cell biochemistry, [4] and Stannett's modelling of decision-making in military command systems. [5]
X-machines have received renewed attention since the mid-1990s, when Gilbert Laycock's deterministic Stream X-Machine [6] was found to serve as the basis for specifying large software systems that are completely testable. [7] Another variant, the Communicating Stream X-Machine, offers a useful testable model for biological processes [8] and future swarm-based satellite systems. [9]
X-machines have been applied to lexical semantics by Andras Kornai, who models word meaning by "pointed" machines that have one member of the base set X distinguished. [10] Applications to other branches of linguistics, in particular to a contemporary reformulation of Pāṇini, were pioneered by Gérard Huet and his co-workers. [11] [12]
The X-machine is rarely encountered in its original form, but it underpins several subsequent models of computation. The variant with the greatest influence on theories of software testing has been the Stream X-Machine. NASA has recently discussed using a combination of Communicating Stream X-Machines and the process calculus WSCCS in the design and testing of swarm satellite systems. [9]
The earliest variant, the continuous-time Analog X-Machine (AXM), was introduced by Mike Stannett in 1990 as a potentially "super-Turing" model of computation; [13] it is consequently related to work in hypercomputation theory. [14]
The most commonly encountered X-machine variant is Gilbert Laycock's 1993 Stream X-Machine (SXM) model, [6] which forms the basis for Mike Holcombe and Florentin Ipate's theory of complete software testing, guaranteeing known correctness properties once testing is over. [7] [15] The Stream X-Machine differs from Eilenberg's original model in that the fundamental data type X is of the form Out* × Mem × In*, where In* is an input sequence, Out* is an output sequence, and Mem is the (rest of the) memory.
The advantage of this model is that it allows a system to be driven, one step at a time, through its states and transitions, while observing the outputs at each step. These outputs are witness values that guarantee that particular functions were executed at each step. As a result, complex software systems may be decomposed into a hierarchy of Stream X-Machines, designed in a top-down way and tested in a bottom-up way. This divide-and-conquer approach to design and testing is backed by Florentin Ipate's proof of correct integration, [16] which shows that testing the layered machines independently is equivalent to testing the composed system.
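A sketch of this driving process (run_sxm and the shape of the transitions table are illustrative assumptions, not the published notation): each step consumes one input, emits one observable output witnessing the function that fired, and updates the memory.

```python
def run_sxm(initial_state, transitions, mem, inputs):
    """Drive a deterministic Stream X-Machine one step per input.
    `transitions` maps each state to (phi, next_state) pairs, where
    phi(mem, inp) returns (out, new_mem), or None when not enabled."""
    state, outputs = initial_state, []
    for inp in inputs:
        for phi, next_state in transitions[state]:
            result = phi(mem, inp)
            if result is not None:       # phi enabled on (mem, inp)
                out, mem = result
                outputs.append(out)      # the witness value for this step
                state = next_state
                break
        else:
            raise RuntimeError(f"blocked in state {state!r} on input {inp!r}")
    return outputs, mem, state
```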
The earliest proposal for connecting several X-machines in parallel is Judith Barnard's 1995 Communicating X-machine (CXM or COMX) model, [17] [18] in which machines are connected via named communication channels (known as ports); this model exists in both discrete-timed and real-timed variants. [19] Earlier versions of this work were not fully formal and did not show full input/output relations.
A similar Communicating X-Machine approach using buffered channels was developed by Petros Kefalas. [20] [21] The focus of this work was on expressiveness in the composition of components. The ability to reassign channels meant that some of the testing theorems from Stream X-Machines did not carry over.
These variants are discussed in more detail on a separate page.
The first fully formal model of concurrent X-machine composition was proposed in 1999 by Cristina Vertan and Horia Georgescu, [22] based on earlier work on communicating automata by Philip Bird and Anthony Cowling. [23] In Vertan's model, the machines communicate indirectly, via a shared communication matrix (essentially an array of pigeonholes), rather than directly via shared channels.
Bălănescu, Cowling, Georgescu, Vertan and others have studied the formal properties of this CSXM model in some detail, and full input/output relations can be shown. The communication matrix establishes a protocol for synchronous communication; the advantage of this is that it decouples each machine's processing from its communication, allowing each behaviour to be tested separately. This compositional model was proven equivalent to a standard Stream X-Machine, [24] thereby leveraging the earlier testing theory developed by Holcombe and Ipate.
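The communication matrix can be pictured as an n × n array of single-message pigeonholes; the sketch below is an illustrative reading of that idea, not the published protocol:

```python
class CommunicationMatrix:
    """Shared n-by-n array of pigeonholes: cell (i, j) holds at most one
    message in transit from machine i to machine j. Machines never name
    each other's channels directly, which decouples processing from
    communication and lets each behaviour be tested separately."""

    def __init__(self, n):
        self.cells = [[None] * n for _ in range(n)]

    def send(self, i, j, message):
        if self.cells[i][j] is not None:
            return False                  # cell full: sender must wait
        self.cells[i][j] = message
        return True

    def receive(self, i, j):
        message, self.cells[i][j] = self.cells[i][j], None
        return message                    # None when nothing is waiting
```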
This X-machine variant is discussed in more detail on a separate page.
Kirill Bogdanov and Anthony Simons developed several variants of the X-machine to model the behaviour of objects in object-oriented systems. [25] This model differs from the Stream X-Machine approach, in that the monolithic data type X is distributed over, and encapsulated by, several objects, which are serially composed; and systems are driven by method invocations and returns, rather than by inputs and outputs. Further work in this area concerned adapting the formal testing theory in the context of inheritance, which partitions the state-space of the superclass in extended subclass objects. [26]
A "CCS-augmented X-machine" (CCSXM) model was later developed by Simons and Stannett in 2002 to support complete behavioural testing of object-oriented systems, in the presence of asynchronous communication [27] This is expected to bear some similarity with NASA's recent proposal; but no definitive comparison of the two models has as yet been conducted.