Symbolic trajectory evaluation

Symbolic trajectory evaluation (STE) is a lattice-based model checking technique that uses a form of symbolic simulation. STE is used primarily for computer hardware, that is, circuit verification. The technique relies on abstraction, meaning that details of the circuit behaviour are removed from the circuit model. It was first developed by Carl Seger and Randy Bryant in 1995 as an alternative to "classical" symbolic model checking.
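
The abstraction in STE is commonly realized with a small value lattice: besides the Boolean values 0 and 1, a circuit node can take an "unknown" value X and, dually, an over-constrained "top" value. The Python sketch below is a simplified illustration of this idea (not any particular tool's API): gates are simulated over the quaternary lattice, so behaviour that depends only on nodes driven to X is abstracted away.

```python
# Minimal sketch of the quaternary value lattice used in STE-style
# symbolic simulation: X (unknown) <= 0, 1 <= T (over-constrained).
# Names and helpers here are illustrative, not a real tool's API.

X, ZERO, ONE, T = "X", "0", "1", "T"

def lub(a, b):
    """Least upper bound (join) of two lattice values."""
    if a == b:
        return a
    if a == X:
        return b
    if b == X:
        return a
    return T  # joining 0 with 1 (or anything with T) over-constrains

def and_gate(a, b):
    """Lattice-valued AND: a definite 0 on either input forces 0."""
    if T in (a, b):
        return T
    if a == ZERO or b == ZERO:
        return ZERO
    if a == ONE and b == ONE:
        return ONE
    return X  # not enough information to decide

def not_gate(a):
    return {ZERO: ONE, ONE: ZERO, X: X, T: T}[a]

# Abstraction at work: with one input left unknown, the AND output is
# still determined whenever the other input is 0.
print(and_gate(ZERO, X))   # -> "0"
print(and_gate(ONE, X))    # -> "X"
print(lub(X, ONE))         # -> "1"  (strengthening unknown to 1)
```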

Related Research Articles

Automated theorem proving is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

Pentium FDIV bug

The Pentium FDIV bug is a hardware bug affecting the floating-point unit (FPU) of the early Intel Pentium processors. Because of the bug, the processor would return incorrect binary floating-point results when dividing certain pairs of high-precision numbers. The bug was discovered in 1994 by Thomas R. Nicely, a professor of mathematics at Lynchburg College. Missing values in a lookup table used by the FPU's floating-point division algorithm caused calculations to acquire small errors. While these errors would in most use cases occur only rarely and result in small deviations from the correct output values, in certain circumstances they could occur frequently and lead to more significant deviations.

In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, and most commonly, digital logic circuits.

In computer science, specifically software engineering and hardware engineering, formal methods are mathematically rigorous techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.

In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.

Model checking

In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification. This is typically associated with hardware or software systems, where the specification contains liveness requirements as well as safety requirements.
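
To make the idea concrete, the sketch below is a toy illustration (not a production model checker, and every name in it is hypothetical): it exhaustively explores the reachable states of a small finite-state model and checks a safety property, returning a counterexample path if the property is violated.

```python
from collections import deque

def check_safety(initial, successors, is_safe):
    """Toy explicit-state model checker: breadth-first search of the
    reachable state space; returns a counterexample trace if some
    reachable state violates the safety property, else None."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not is_safe(state):
            return trace  # counterexample: path from the initial state
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None  # property holds on all reachable states

# Example model: a counter modulo 8; safety property "counter != 5".
trace = check_safety(0, lambda s: [(s + 1) % 8], lambda s: s != 5)
print(trace)  # -> [0, 1, 2, 3, 4, 5]
```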

Software verification is a discipline of software engineering whose goal is to assure that software fully satisfies all the expected requirements.

The Phil Kaufman Award was established by the EDA Consortium to recognize individuals for their impact on electronic design through their contributions to electronic design automation (EDA). It has been dubbed "The Nobel Prize of the EDA Industry".

Formal equivalence checking is a process in electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
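
One simple, if non-scalable, way to see what "exactly the same behavior" means for combinational logic is to compare the two representations on every input assignment, as in the hedged sketch below (the functions shown are made-up examples). Real equivalence checkers instead rely on canonical forms such as BDDs or on SAT solving rather than enumeration.

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Brute-force combinational equivalence check: returns a
    distinguishing input if f and g ever disagree, else None.
    (Illustrative only; real tools use BDDs/SAT, not enumeration.)"""
    for bits in product([0, 1], repeat=n_inputs):
        if f(*bits) != g(*bits):
            return bits
    return None

# Two representations of the same function: XOR vs. an AND/OR/NOT form.
spec = lambda a, b: a ^ b
impl = lambda a, b: (a and not b) or (not a and b)
print(equivalent(spec, impl, 2))  # -> None (the designs are equivalent)
```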

In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.

And-inverter graph

An and-inverter graph (AIG) is a directed, acyclic graph that represents a structural implementation of the logical functionality of a circuit or network. An AIG consists of two-input nodes representing logical conjunction, terminal nodes labeled with variable names, and edges optionally containing markers indicating logical negation. This representation of a logic function is rarely structurally efficient for large circuits, but is an efficient representation for manipulation of boolean functions. Typically, the abstract graph is represented as a data structure in software.
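
As a concrete illustration, the following Python sketch uses an ad hoc encoding (not any particular library's data structure): internal nodes are two-input ANDs, terminals are named variables, edges carry optional negation markers, and evaluation proceeds bottom-up from the terminals.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:              # terminal node labeled with a variable name
    name: str

@dataclass(frozen=True)
class And:              # two-input AND node; each fan-in edge may be negated
    left: "Edge"
    right: "Edge"

@dataclass(frozen=True)
class Edge:             # reference to a node, with an optional inverter marker
    node: Union[Var, And]
    negated: bool = False

def evaluate(edge: Edge, env: dict) -> bool:
    """Evaluate the function rooted at `edge` under the assignment `env`."""
    n = edge.node
    if isinstance(n, Var):
        value = env[n.name]
    else:
        value = evaluate(n.left, env) and evaluate(n.right, env)
    return not value if edge.negated else value

# Build XOR(a, b) = NOT(AND(NOT(AND(a, NOT b)), NOT(AND(NOT a, b))))
a, b = Edge(Var("a")), Edge(Var("b"))
t1 = Edge(And(a, Edge(b.node, True)), True)      # NOT(a AND NOT b)
t2 = Edge(And(Edge(a.node, True), b), True)      # NOT(NOT a AND b)
xor = Edge(And(t1, t2), True)                    # NOT(t1 AND t2)
print(evaluate(xor, {"a": True, "b": False}))    # -> True
```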

In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality. SMT solvers, which aim to solve the SMT problem for a practical subset of inputs, have been used as a building block for a wide range of tools across computer science, including in automated theorem proving, program analysis, program verification, and software testing.
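
As an illustration, the snippet below uses the Python bindings of the Z3 SMT solver (assuming the `z3-solver` package is installed) to decide a small formula over the integers; the general workflow of asserting constraints and querying satisfiability is similar across SMT solvers.

```python
from z3 import Ints, Solver, sat  # requires the z3-solver package

x, y = Ints("x y")
s = Solver()
# Is there an integer solution with x + y == 10, x > 0 and y > 2*x ?
s.add(x + y == 10, x > 0, y > 2 * x)

if s.check() == sat:
    m = s.model()
    print(m[x], m[y])   # one satisfying assignment, e.g. 1 9
else:
    print("unsatisfiable")
```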

In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation.
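
In that spirit, the fragment below is an informal sketch (not a standard API): it drives a modelled state-transition system step by step from an initial state, recording the complete state at each point of a discrete, linear time frame.

```python
def simulate(initial_state, step, n_cycles):
    """Sequentially compute each state from its predecessor,
    returning the full trace over a discrete, linear time frame."""
    trace = [initial_state]
    for _ in range(n_cycles):
        trace.append(step(trace[-1]))
    return trace

# Example: a 2-bit counter with an enable flag as its state model.
def step(state):
    count, enable = state
    return ((count + 1) % 4 if enable else count, enable)

print(simulate((0, True), step, 5))
# -> [(0, True), (1, True), (2, True), (3, True), (0, True), (1, True)]
```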

NuSMV is a reimplementation and extension of the SMV symbolic model checker, the first model checking tool based on Binary Decision Diagrams (BDDs). The tool has been designed as an open architecture for model checking. It is aimed at reliable verification of industrially sized designs, for use as a backend for other verification tools, and as a research tool for formal verification techniques.

Randal Bryant

Randal E. Bryant (born 1952) is an American computer scientist and academic noted for his research on formally verifying digital hardware and software. Bryant has been a faculty member at Carnegie Mellon University since 1984. He served as Dean of the School of Computer Science (SCS) at Carnegie Mellon from 2004 to 2014. Bryant retired and became a Founders University Professor Emeritus on June 30, 2020.

Edmund M. Clarke

Edmund Melson Clarke, Jr. was an American computer scientist and academic noted for developing model checking, a method for formally verifying hardware and software designs. He was the FORE Systems Professor of Computer Science at Carnegie Mellon University. Clarke, along with E. Allen Emerson and Joseph Sifakis, received the 2007 ACM Turing Award.

Construction and Analysis of Distributed Processes

CADP is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team at INRIA Rhône-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects.

David Lansing Dill is a computer scientist and academic noted for contributions to formal verification, electronic voting security, and computational systems biology.
