Logic optimization is the process of finding an equivalent representation of a specified logic circuit under one or more specified constraints. This process is part of logic synthesis applied in digital electronics and integrated circuit design.
Generally, the circuit is constrained to a minimum chip area while meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. [1] Usually, the smaller circuit with the same function is cheaper, [2] takes less space, consumes less power, has a shorter latency, and reduces the risk of unexpected cross-talk, hazards caused by delayed signal propagation, and other issues present at the nano-scale level of metallic structures on an integrated circuit.
In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one.
The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the simplest circuit representation of a given design description. [nb 1] While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, rapidly improving chip densities and the wide adoption of hardware description languages for circuit description formalized the logic optimization domain as it exists today, with tools including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic). [3]
The methods of logic circuit simplifications are equally applicable to Boolean expression minimization.
Today, logic optimization is divided into various categories: by circuit representation (two-level or multi-level logic optimization), by circuit characteristics (sequential or combinational logic optimization), and by type of execution (graphical, tabular, or algebraic methods).
Graphical methods represent the required logical function by a diagram of the logic variables and the values of the function. By manipulating or inspecting the diagram, much tedious calculation may be eliminated. Graphical minimization methods for two-level logic include Euler diagrams, Venn diagrams, and Karnaugh maps.
The same methods of Boolean expression minimization (simplification) listed below may be applied to circuit optimization.
For the case when the Boolean function is specified by a circuit (that is, we want to find an equivalent circuit of minimum size possible), the unbounded circuit minimization problem was long conjectured to be Σ2P-complete in time complexity, a result finally proved in 2008, [4] but there are effective heuristics such as Karnaugh maps and the Quine–McCluskey algorithm that facilitate the process.
Boolean function minimizing methods include the Quine–McCluskey algorithm, described below, and heuristic minimizers such as Espresso.
Methods that find optimal circuit representations of Boolean functions are often referred to as exact synthesis in the literature. Due to the computational complexity, exact synthesis is tractable only for small Boolean functions. Recent approaches map the optimization problem to a Boolean satisfiability problem. [5] [6] This allows finding optimal circuit representations using a SAT solver.
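As a rough illustration of the idea, the sketch below brute-forces the smallest circuit of two-input AND/OR/NAND gates matching a target truth table, growing the gate budget until a match is found. This is a hedged toy version: the SAT encoding cited above is what practical tools use, and the gate set, function names, and target here are chosen only for the example.

```python
# Brute-force exact synthesis sketch: truth tables are bit vectors over all 2^N rows.
N = 2                                     # number of primary inputs
ROWS = 1 << N
MASK = (1 << ROWS) - 1

# Truth table of each primary input: bit r holds that input's value in row r.
INPUTS = [sum(((r >> i) & 1) << r for r in range(ROWS)) for i in range(N)]

GATES = {                                 # allowed two-input gate types
    "AND":  lambda x, y: x & y,
    "OR":   lambda x, y: x | y,
    "NAND": lambda x, y: MASK & ~(x & y),
}

def synthesize(target, max_gates=5):
    """Return a minimum-length netlist of (gate, operand index, operand index)
    triples realising `target`, or None if none exists within max_gates."""
    def search(signals, netlist, budget):
        if target in signals:             # some existing signal already matches
            return netlist
        if budget == 0:
            return None
        for i in range(len(signals)):
            for j in range(i, len(signals)):
                for name, op in GATES.items():
                    out = op(signals[i], signals[j])
                    hit = search(signals + [out], netlist + [(name, i, j)], budget - 1)
                    if hit is not None:
                        return hit
        return None

    for k in range(max_gates + 1):        # try 0 gates, then 1, ... so the answer is minimal
        hit = search(list(INPUTS), [], k)
        if hit is not None:
            return hit
    return None

# Synthesize XOR out of AND/OR/NAND only; a three-gate circuit such as
# AND(OR(a, b), NAND(a, b)) comes out.
print(synthesize(INPUTS[0] ^ INPUTS[1]))
```

Because the gate budget is increased one step at a time, the first netlist found is of minimum size; indices 0 and 1 in the printed netlist refer to the primary inputs.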
A heuristic method uses established rules that solve a practically useful subset of the much larger possible set of problems. The heuristic method may not produce the theoretically optimal solution, but, if useful, will provide most of the optimization desired with a minimum of effort. An example of a computer system that uses heuristic methods for logic optimization is the Espresso heuristic logic minimizer.
While a two-level representation strictly refers to the flattened view of the circuit in terms of SOPs (sum-of-products), which is more applicable to a PLA implementation of the design, a multi-level representation is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored forms, etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or the functional representation (binary decision diagrams, algebraic decision diagrams) of the circuit. In sum-of-products (SOP) form, AND gates form the smallest unit and are stitched together using ORs, whereas in product-of-sums (POS) form it is the opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic.
If we have two functions F1 and F2:

F1 = AB + AC + AD
F2 = A'B + A'C + A'E

The above two-level representation takes six product terms and 24 transistors in a CMOS representation.
A functionally equivalent representation in multilevel can be:

P = B + C
F1 = AP + AD
F2 = A'P + A'E

While the number of levels here is 3, the total number of product terms and literals is reduced (to four product terms and ten literals) because of the sharing of the term B + C.
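The sharing can be checked mechanically. The short script below, assuming the two-level and multilevel forms written above (A' is written as not A), verifies that both representations agree on every input combination:

```python
from itertools import product

def flat(A, B, C, D, E):
    """Two-level forms: F1 = AB + AC + AD, F2 = A'B + A'C + A'E."""
    f1 = (A and B) or (A and C) or (A and D)
    f2 = (not A and B) or (not A and C) or (not A and E)
    return f1, f2

def multilevel(A, B, C, D, E):
    """Multilevel forms sharing P = B + C."""
    P = B or C
    f1 = (A and P) or (A and D)
    f2 = (not A and P) or (not A and E)
    return f1, f2

assert all(flat(*v) == multilevel(*v) for v in product([False, True], repeat=5))
print("two-level and multilevel forms agree on all 32 input combinations")
```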
Similarly, we distinguish between combinational circuits and sequential circuits. Combinational circuits produce their outputs based only on the current inputs. They can be represented by Boolean relations. Some examples are priority encoders, binary decoders, multiplexers, demultiplexers.
Sequential circuits produce their output based on both current and past inputs, depending on a clock signal to distinguish the previous inputs from the current inputs. They can be represented by finite state machines. Some examples are flip-flops and counters.
While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented. [7] Consider the circuit used to represent (A ∧ ¬B) ∨ (¬A ∧ B). It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two AND gates, and an OR gate.
The circuit can be simplified (minimized) by applying the laws of Boolean algebra or using intuition. Since the example states that A is true when B is false and the other way around, one can conclude that this simply means A ≠ B. In terms of logic gates, inequality simply means an XOR gate (exclusive or). Therefore, (A ∧ ¬B) ∨ (¬A ∧ B) = A ≠ B. Then the two circuits shown below are equivalent, as can be checked using a truth table:
A | B | (A | ∧ | ¬B) | ∨ | (¬A | ∧ | B) | A | ≠ | B
---|---|---|---|---|---|---|---|---|---|---|---
F | F | F | F | T | F | T | F | F | F | F | F
F | T | F | F | F | T | T | T | T | F | T | T
T | F | T | T | T | T | F | F | F | T | T | F
T | T | T | F | F | F | F | F | T | T | F | T
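The same equivalence can be confirmed programmatically. A minimal sketch, exhausting the four input combinations of the truth table above:

```python
from itertools import product

original   = lambda a, b: (a and not b) or (not a and b)   # (A ∧ ¬B) ∨ (¬A ∧ B)
simplified = lambda a, b: a != b                           # A ≠ B, i.e. A XOR B

assert all(original(a, b) == simplified(a, b)
           for a, b in product([False, True], repeat=2))
print("the two circuits compute the same function")
```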
In automata theory, combinational logic is a type of digital logic that is implemented by Boolean circuits, where the output is a pure function of the present input only. This is in contrast to sequential logic, in which the output depends not only on the present input but also on the history of the input. In other words, sequential logic has memory while combinational logic does not.
The Quine–McCluskey algorithm (QMC), also known as the method of prime implicants, is a method used for minimization of Boolean functions that was developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956. As a general principle this approach had already been demonstrated by the logician Hugh McColl in 1878, was proved by Archie Blake in 1937, and was rediscovered by Edward W. Samson and Burton E. Mills in 1954 and by Raymond J. Nelson in 1955. Also in 1955, Paul W. Abrahams and John G. Nordahl as well as Albert A. Mullin and Wayne G. Kellner proposed a decimal variant of the method.
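A compact sketch of the Quine–McCluskey idea is shown below. It skips the usual grouping of minterms by their number of ones and replaces Petrick's method with essential-prime selection plus a greedy cover, so it is illustrative rather than a faithful implementation; the function names and example minterms are made up for the demonstration.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') that differ in exactly
    one literal position; return None if they cannot be combined."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Repeatedly combine implicants; anything never merged is prime."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, next_terms = set(), set()
        for a, b in combinations(terms, 2):
            c = combine(a, b)
            if c is not None:
                next_terms.add(c)
                merged.update((a, b))
        primes |= terms - merged
        terms = next_terms
    return primes

def covers(imp, m, nbits):
    """True if implicant `imp` covers minterm m."""
    return all(c in ('-', b) for c, b in zip(imp, format(m, f'0{nbits}b')))

def minimize(minterms, nbits):
    """Pick essential prime implicants, then cover what is left greedily."""
    primes = prime_implicants(minterms, nbits)
    cover = {next(iter(ps)) for ps in
             ({p for p in primes if covers(p, m, nbits)} for m in minterms)
             if len(ps) == 1}                              # essential primes
    remaining = {m for m in minterms if not any(covers(p, m, nbits) for p in cover)}
    while remaining:
        best = max(primes, key=lambda p: sum(covers(p, m, nbits) for m in remaining))
        cover.add(best)
        remaining = {m for m in remaining if not covers(best, m, nbits)}
    return cover

# f(A, B, C) with minterms 1, 3, 5, 7 collapses to the single implicant '--1', i.e. C.
print(minimize([1, 3, 5, 7], 3))
```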
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
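The sketch below builds such a diagram by Shannon expansion, applying the elimination rule (a test whose two branches are identical is dropped); structural tuple equality stands in for the node-sharing rule of a fully reduced BDD. The representation and names are only illustrative.

```python
def build(f, variables, assignment=()):
    """Shannon-expand f on the remaining variables.  A node is either a terminal
    (True/False) or a tuple (variable, low branch, high branch)."""
    if not variables:
        return f(*assignment)                       # terminal value of f
    low  = build(f, variables[1:], assignment + (False,))
    high = build(f, variables[1:], assignment + (True,))
    return low if low == high else (variables[0], low, high)

# XOR needs to test both variables; AND skips the second test on its low branch.
print(build(lambda a, b: a != b,  ("x1", "x2")))
print(build(lambda a, b: a and b, ("x1", "x2")))
```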
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set. Alternative names are switching function, used especially in older computer science literature, and truth function, used in logic. Boolean functions are the subject of Boolean algebra and switching theory.
In Boolean algebra, any Boolean function can be expressed in the canonical disjunctive normal form (CDNF), minterm canonical form, or Sum of Products as a disjunction (OR) of minterms. The De Morgan dual is the canonical conjunctive normal form (CCNF), maxterm canonical form, or Product of Sums which is a conjunction (AND) of maxterms. These forms can be useful for the simplification of Boolean functions, which is of great importance in the optimization of Boolean formulas in general and digital circuits in particular.
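As a small worked sketch (the majority function and the names below are chosen only for illustration), the canonical DNF can be read directly off the truth table by taking one minterm per true row:

```python
from itertools import product

def canonical_dnf(f, names):
    """One minterm (conjunction of literals) per input row where f is true."""
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if f(*bits):
            literals = [n if b else f"¬{n}" for n, b in zip(names, bits)]
            terms.append("(" + " ∧ ".join(literals) + ")")
    return " ∨ ".join(terms) if terms else "0"

# Canonical DNF of the three-input majority function.
print(canonical_dnf(lambda a, b, c: a + b + c >= 2, ["a", "b", "c"]))
```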
In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals.
In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in the electronic design automation flow of circuit design; other steps include place and route and verification and validation.
In semiconductor design, standard-cell methodology is a method of designing application-specific integrated circuits (ASICs) with mostly digital-logic features. Standard-cell methodology is an example of design abstraction, whereby a low-level very-large-scale integration (VLSI) layout is encapsulated into an abstract logic representation.
An Euler diagram is a diagrammatic means of representing sets and their relationships. They are particularly useful for explaining complex hierarchies and overlapping definitions. They are similar to another set diagramming technique, Venn diagrams. Unlike Venn diagrams, which show all possible relations between different sets, the Euler diagram shows only relevant relationships.
An and-inverter graph (AIG) is a directed, acyclic graph that represents a structural implementation of the logical functionality of a circuit or network. An AIG consists of two-input nodes representing logical conjunction, terminal nodes labeled with variable names, and edges optionally containing markers indicating logical negation. This representation of a logic function is rarely structurally efficient for large circuits, but is an efficient representation for manipulation of Boolean functions. Typically, the abstract graph is represented as a data structure in software.
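A minimal data-structure sketch of this idea is given below: two-input AND nodes, named leaves, and a complement flag on each edge, with OR derived through De Morgan's law. The class and function names are invented for the example and do not follow any particular tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    """An edge: a reference to a node plus an optional complement marker."""
    node: "Node"
    negated: bool = False

    def __invert__(self):
        return Lit(self.node, not self.negated)

@dataclass(frozen=True)
class Node:
    """Either a primary input (name set) or a two-input AND (left/right set)."""
    name: str = ""
    left: Lit = None
    right: Lit = None

def var(name):  return Lit(Node(name=name))
def AND(a, b):  return Lit(Node(left=a, right=b))
def OR(a, b):   return ~AND(~a, ~b)          # De Morgan: a ∨ b = ¬(¬a ∧ ¬b)

def evaluate(lit, env):
    n = lit.node
    v = env[n.name] if n.name else evaluate(n.left, env) and evaluate(n.right, env)
    return v != lit.negated                  # a complement marker on the edge flips the value

a, b, c = var("a"), var("b"), var("c")
f = OR(AND(a, ~b), c)                        # (a ∧ ¬b) ∨ c built from AND nodes and inverted edges
print(evaluate(f, {"a": True, "b": False, "c": False}))   # True
```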
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length.
A Karnaugh map is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram. It is also useful for understanding logic circuits. Karnaugh maps are also known as Marquand–Veitch diagrams, Svoboda charts (albeit only rarely), and Karnaugh–Veitch maps.
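For illustration only, the snippet below prints a four-variable map with rows and columns in Gray-code order, so that adjacent cells differ in exactly one variable and rectangular groups of 1s correspond to product terms; the example function is made up.

```python
GRAY = ["00", "01", "11", "10"]           # adjacent codes differ in a single bit

def kmap(f):
    """Print a 4-variable Karnaugh map of f(a, b, c, d): rows are ab, columns cd."""
    print("ab\\cd  " + "   ".join(GRAY))
    for r in GRAY:
        cells = [str(int(f(*map(int, r + c)))) for c in GRAY]
        print("  " + r + "    " + "    ".join(cells))

# f = a·c + b·d; the two groups of 1s in the map correspond to these product terms.
kmap(lambda a, b, c, d: (a and c) or (b and d))
```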
The ESPRESSO logic minimizer is a computer program using heuristic and specific algorithms for efficiently reducing the complexity of digital logic gate circuits. ESPRESSO-I was originally developed at IBM by Robert K. Brayton et al. in 1982 and improved as ESPRESSO-II in 1984. Richard L. Rudell later published the variant ESPRESSO-MV in 1986 and ESPRESSO-EXACT in 1987. Espresso has inspired many derivatives.
Maurice Karnaugh was an American physicist, mathematician, computer scientist, and inventor known for the Karnaugh map used in Boolean algebra.
AND-OR-invert (AOI) logic and AOI gates are two-level compound logic functions constructed from the combination of one or more AND gates followed by a NOR gate. Construction of AOI cells is particularly efficient using CMOS technology, where the total number of transistor gates can be compared to the same construction using NAND logic or NOR logic. The complement of AOI logic is OR-AND-invert (OAI) logic, where the OR gates precede a NAND gate.
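Functionally, an AOI21 cell and its OAI dual behave as in the short sketch below (the names are illustrative):

```python
def aoi21(a, b, c):
    """AND-OR-invert: Y = ¬((a ∧ b) ∨ c), realised as a single complex CMOS gate."""
    return not ((a and b) or c)

def oai21(a, b, c):
    """OR-AND-invert dual: Y = ¬((a ∨ b) ∧ c)."""
    return not ((a or b) and c)

print(aoi21(True, True, False), oai21(True, True, False))   # False True
```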
State encoding assigns a unique pattern of ones and zeros to each defined state of a finite-state machine (FSM). Traditionally, design criteria for FSM synthesis were speed, area, or both. Following Moore's law, with technology advancement, the density and speed of integrated circuits have increased exponentially. With this, power dissipation per area has inevitably increased, which has forced designers of portable computing devices and high-speed processors to treat power dissipation as a critical parameter during design.
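The difference between two common encodings can be illustrated with a toy four-state machine (the state names are invented): a dense binary encoding needs two flip-flops, while a one-hot encoding spends one flip-flop per state but usually simplifies the next-state logic.

```python
states = ["IDLE", "FETCH", "DECODE", "EXECUTE"]

binary_encoding  = {s: format(i, "02b") for i, s in enumerate(states)}        # 2 flip-flops
one_hot_encoding = {s: format(1 << i, "04b") for i, s in enumerate(states)}   # 4 flip-flops

print(binary_encoding)    # {'IDLE': '00', 'FETCH': '01', 'DECODE': '10', 'EXECUTE': '11'}
print(one_hot_encoding)   # {'IDLE': '0001', 'FETCH': '0010', 'DECODE': '0100', 'EXECUTE': '1000'}
```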
Finite state machines (FSMs) are widely used to implement control logic in various applications such as microprocessors, digital transmission, digital filters and digital signal processing. Even for designs containing a good number of datapath elements, the controller occupies a sizeable portion. As such devices are mostly portable and hand-held, reducing power dissipation has emerged as a primary concern of today's VLSI designers. While the datapath elements can be shut down when they are not being used, controllers are always active. As a result, the controller consumes a considerable amount of system power, and power-efficient synthesis of FSMs has emerged as an important problem domain, attracting considerable research. The synthesis method must be able to reduce both the dynamic power and the leakage power consumed by the circuit.
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.