The algorithmic state machine (ASM) is a method for designing finite state machines (FSMs), originally developed by Thomas E. Osborne at the University of California, Berkeley (UCB) beginning in 1960,[1] introduced to and implemented at Hewlett-Packard in 1968, formalized and expanded since 1967, and written about by Christopher R. Clare since 1970.[2][3][4] It is used to represent diagrams of digital integrated circuits. The ASM diagram resembles a state diagram but is more structured and, thus, easier to understand. An ASM chart is a method of describing the sequential operations of a digital system.
The ASM method is composed of the following steps:
1. Create an algorithm, using pseudocode, to describe the desired operation of the device.
2. Convert the pseudocode into an ASM chart.
3. Design the datapath based on the ASM chart.
4. Create a detailed ASM chart based on the datapath.
5. Design the control logic based on the detailed ASM chart.
An ASM chart consists of an interconnection of four types of basic elements: state name, state box, decision box, and conditional output box. An ASM state, represented as a rectangle, corresponds to one state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box. A brief illustrative sketch of how these elements combine follows the list below.
State Name: The name of the state is indicated inside a circle, and the circle is placed in the top-left corner of the state box; alternatively, the name is written without the circle.
State Box: The output of the state is indicated inside the rectangular state box.
Decision Box: A diamond indicates that the stated condition or expression is to be tested and the exit path is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (finite state machine). An ASM condition check, indicated by a diamond with one input and two exit paths (for true and false), is used to conditionally transfer between two state boxes, to another decision box, or to a conditional output box.
Conditional Output Box: An oval denotes the output signals that are of Mealy type. These outputs depend not only on the state but also on the inputs to the FSM.
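How these elements fit together can be illustrated with a minimal simulation sketch. The sketch is not drawn from any published ASM design; the state names (IDLE, RUN), the input (start), and the outputs (ready, load) are hypothetical.

```python
# Minimal sketch of one ASM block for a hypothetical two-state machine
# (IDLE, RUN) with input 'start', Moore output 'ready', and Mealy output 'load'.

def asm_block_idle(inputs):
    """One ASM block: state box + decision box + conditional output box."""
    outputs = {"ready": 1}      # state box: Moore output asserted while in IDLE
    if inputs["start"]:         # decision box: test an input of the FSM
        outputs["load"] = 1     # conditional output box: Mealy output on this branch
        next_state = "RUN"      # exit path taken when the condition is true
    else:
        next_state = "IDLE"     # exit path taken when the condition is false
    return next_state, outputs

# In IDLE with 'start' asserted, the machine emits 'load' and moves to RUN.
print(asm_block_idle({"start": 1}))   # ('RUN', {'ready': 1, 'load': 1})
```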
Once the desired operation of a circuit has been described using RTL operations, the datapath components may be derived. Every unique variable that is assigned a value in the RTL program can be implemented as a register. Depending on the functional operation performed when assigning a value to a variable, the register for that variable may be implemented as a straightforward register, a shift register, a counter, or a register preceded by a combinational logic block. The combinational logic block associated with a register may implement an adder, subtractor, multiplexer, or some other type of combinational logic function.
Once the datapath is designed, the ASM chart is converted to a detailed ASM chart. The RTL notation is replaced by signals defined in the datapath.
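As a rough illustration of this mapping, the following sketch assumes a hypothetical RTL program with the assignments A ← A + B, A ← A >> 1, and C ← C + 1; the class and signal names are not taken from any specific design.

```python
# Sketch of datapath components derived from hypothetical RTL assignments:
# A <- A + B (register preceded by an adder), A <- A >> 1 (shift), C <- C + 1 (counter).

class RegisterA:
    """Variable 'A': a register fed by combinational logic (adder) or a shifter."""
    def __init__(self):
        self.q = 0
    def clock(self, load_add=0, shift=0, b=0):
        if load_add:
            self.q = self.q + b       # combinational adder in front of the register
        elif shift:
            self.q = self.q >> 1      # realized as a shift-right operation

class CounterC:
    """Variable 'C': the assignment C <- C + 1 maps naturally onto a counter."""
    def __init__(self):
        self.q = 0
    def clock(self, enable=0):
        if enable:
            self.q += 1

# The detailed ASM chart would then refer to control signals such as
# load_add, shift, and enable instead of the original RTL notation.
```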
In digital logic and computing, a counter is a device which stores the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter.
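A minimal software sketch of such a counter is shown below; the 4-bit width and the up/down control are illustrative assumptions.

```python
# Sketch of a 4-bit up/down counter; each call to clock() models one clock pulse.

class UpDownCounter:
    def __init__(self, width=4):
        self.count = 0
        self.modulus = 1 << width            # a 4-bit counter wraps at 16
    def clock(self, up=True):
        step = 1 if up else -1               # increment or decrement per pulse
        self.count = (self.count + step) % self.modulus

c = UpDownCounter()
for _ in range(5):
    c.clock(up=True)
print(c.count)    # 5, i.e. 0101 on the binary output lines
```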
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed.
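A deterministic FSM can be sketched as a transition table; the states S0 and S1, the binary input alphabet, and the detected pattern "10" below are hypothetical examples.

```python
# Minimal deterministic FSM sketch that detects the input sequence "10".

TRANSITIONS = {
    ("S0", "0"): "S0",
    ("S0", "1"): "S1",   # saw a '1'; now wait for a '0'
    ("S1", "1"): "S1",
    ("S1", "0"): "S0",   # '1' followed by '0': sequence detected
}

def run(symbols, state="S0"):
    for s in symbols:
        if (state, s) == ("S1", "0"):
            print("detected '10'")
        state = TRANSITIONS[(state, s)]   # take the transition
    return state

run("110100")   # prints "detected '10'" twice
```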
Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics, which works primarily with analog signals. Despite the name, digital electronics design includes important analog design considerations.
The IEEE 1164 standard is a technical standard published by the IEEE in 1993. It describes the definitions of logic values to be used in electronic design automation, for the VHDL hardware description language. It was sponsored by the Design Automation Standards Committee of the Institute of Electrical and Electronics Engineers (IEEE). The standardization effort was based on the donation of the Synopsys MVL-9 type declaration.
CORDIC, also known as Volder's algorithm or the digit-by-digit method (with circular, linear, hyperbolic, and generalized hyperbolic variants), is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and closely related methods, known as pseudo-multiplication and pseudo-division or factor combining, are commonly used when no hardware multiplier is available, as the only operations they require are addition, subtraction, bitshifts, and lookup tables. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier for cost or space reasons.
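The shift-and-add structure can be seen in a brief sketch of circular CORDIC in rotation mode; the iteration count and the use of floating-point arithmetic (rather than fixed-point hardware) are simplifying assumptions.

```python
# Sketch of circular CORDIC (rotation mode) computing sin and cos of an angle
# in radians, using only additions, subtractions, shifts (powers of two), and a
# small arctangent lookup table.
import math

def cordic_sin_cos(angle, iterations=32):
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]   # lookup table
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))                # accumulated CORDIC gain

    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0                           # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i     # shift-and-add step
        z -= d * atans[i]                                       # subtract the tabulated angle
    return y / gain, x / gain                                   # (sin, cos) after gain correction

print(cordic_sin_cos(math.pi / 6))   # approximately (0.5, 0.8660)
```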
In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals.
Formal equivalence checking is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in the electronic design automation flow for circuit design; the others are place and route, and verification and validation.
A virtual finite-state machine (VFSM) is a finite-state machine (FSM) defined in a virtual environment. The VFSM concept provides a software specification method to describe the behaviour of a control system using assigned names of input control properties and output actions.
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold. In statistics, dummy variables represent a similar technique for representing categorical data.
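A small sketch of one-hot (and the complementary one-cold) encoding for a hypothetical four-state machine:

```python
# One-hot encoding of a hypothetical four-state FSM: exactly one bit is high.
STATES = ["IDLE", "LOAD", "RUN", "DONE"]

def one_hot(state):
    return [1 if s == state else 0 for s in STATES]

print(one_hot("RUN"))                      # [0, 0, 1, 0]
print([1 - b for b in one_hot("RUN")])     # one-cold: [1, 1, 0, 1]
```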
An XOR gate is a digital logic gate that gives a true output when the number of true inputs is odd. An XOR gate implements an exclusive or from mathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise the output is false. A way to remember XOR is "must have one or the other but not both".
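A two-input XOR reduces to the inequality test, as in this brief sketch:

```python
# XOR as the inequality (odd-parity) function over two inputs.
def xor(a, b):
    return int(a != b)          # true exactly when the inputs differ

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # truth table rows: 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```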
Altera Hardware Description Language (AHDL) is a proprietary hardware description language (HDL) developed by Altera Corporation. AHDL is used for digital logic design entry for Altera's complex programmable logic devices (CPLDs) and field-programmable gate arrays (FPGAs). It is supported by Altera's MAX-PLUS and Quartus series of design software. AHDL has an Ada-like syntax, while its feature set is comparable to the synthesizable portions of the Verilog and VHDL hardware description languages. In contrast to HDLs such as Verilog and VHDL, AHDL is a design-entry language only; all of its language constructs are synthesizable. By default, Altera software expects AHDL source files to have a .tdf extension.
A data path is a collection of functional units, such as arithmetic logic units (ALUs) or multipliers, that perform data processing operations, together with registers and buses. Along with the control unit, it composes the central processing unit (CPU). A larger data path can be made by joining multiple data paths using multiplexers.
A Karnaugh map is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram. It is also useful for understanding logic circuits. Karnaugh maps are also known as Marquand–Veitch diagrams, Svoboda charts (albeit only rarely) and Karnaugh–Veitch maps.
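As a worked illustration (the function below is a hypothetical example, not taken from the sources cited here), grouping adjacent cells of a three-variable map for f(A, B, C) with minterms {1, 3, 5, 6, 7} yields the simplified expression f = C + A·B; the sketch verifies the equivalence exhaustively.

```python
# Verify a Karnaugh-map style simplification: f(A,B,C) = Σm(1,3,5,6,7) = C + A·B.
MINTERMS = {1, 3, 5, 6, 7}

for m in range(8):
    a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1   # minterm index = 4A + 2B + C
    assert int(m in MINTERMS) == (c | (a & b))    # original equals simplified form
print("f(A,B,C) = C + A·B holds for all eight input combinations")
```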
The ESPRESSO logic minimizer is a computer program using heuristic and specific algorithms for efficiently reducing the complexity of digital logic gate circuits. ESPRESSO-I was originally developed at IBM by Robert K. Brayton et al. in 1982 and improved as ESPRESSO-II in 1984. Richard L. Rudell later published the variant ESPRESSO-MV in 1986 and ESPRESSO-EXACT in 1987. Espresso has inspired many derivatives.
In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).
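A toy ALU can be sketched as a function selecting among a few operations; the opcode names and the 4-bit width are illustrative assumptions.

```python
# Minimal 4-bit ALU sketch with a zero flag.
def alu(op, a, b, width=4):
    mask = (1 << width) - 1                    # keep results within the word width
    ops = {
        "ADD": (a + b) & mask,
        "SUB": (a - b) & mask,                 # two's-complement wraparound
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
    }
    result = ops[op]
    return result, int(result == 0)            # (result, zero flag)

print(alu("SUB", 3, 5))   # (14, 0): -2 is represented as 1110 in 4 bits
```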
MLDesigner is an integrated modeling and simulation tool for the design and analysis of complex embedded and networked systems. MLDesigner speeds up modeling, simulation and analysis of discrete event, discrete time and continuous time systems concerning architecture, function and performance. The tool is based on ideas of the "Ptolemy Project" at the University of California, Berkeley. MLDesigner is developed by MLDesign Technologies Inc., Palo Alto, CA, USA, in collaboration with Mission Level Design GmbH, Ilmenau, Germany.
State encoding assigns a unique pattern of ones and zeros to each defined state of a finite-state machine (FSM). Traditionally, the design criteria for FSM synthesis were speed, area, or both. Following Moore's law, with technology advancement, the density and speed of integrated circuits have increased exponentially. With this, power dissipation per unit area has inevitably increased, which has forced designers of portable computing devices and high-speed processors to treat power dissipation as a critical design parameter.
Finite state machines (FSMs) are widely used to implement control logic in applications such as microprocessors, digital transmission, digital filters and digital signal processing. Even for designs containing a large number of datapath elements, the controller occupies a sizeable portion. As many devices are portable and hand-held, reducing power dissipation has emerged as a primary concern of today's VLSI designers. While datapath elements can be shut down when they are not being used, controllers are always active. As a result, the controller consumes a significant share of system power. Power-efficient synthesis of FSMs has therefore emerged as an important research area. The synthesis method must be able to reduce both the dynamic power and the leakage power consumed by the circuit.
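The influence of state encoding on switching activity can be illustrated with a crude sketch; the four-state cycle and the use of bit flips as a stand-in for dynamic power are simplifying assumptions.

```python
# Count state-register bit flips for one pass around a hypothetical cycle
# S0 -> S1 -> S2 -> S3 -> S0 under three common encodings.

ENCODINGS = {
    "binary":  ["00", "01", "10", "11"],
    "gray":    ["00", "01", "11", "10"],
    "one-hot": ["0001", "0010", "0100", "1000"],
}

def bit_flips(codes):
    total = 0
    for cur, nxt in zip(codes, codes[1:] + codes[:1]):    # wrap around the cycle
        total += sum(x != y for x, y in zip(cur, nxt))
    return total

for name, codes in ENCODINGS.items():
    print(name, bit_flips(codes))   # binary: 6, gray: 4, one-hot: 8
```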
The second annual IEEE Workshop on Microprocessors (now called the Asilomar Microcomputer Workshop, or AMW) was held Wednesday–Friday, April 28–30, 1976, near Monterey, California […] My Wednesday evening talk described tools that enabled a very different design methodology—Algorithmic State Machine design (ASM)—using Lyapunov state-variable mathematics, and derivative techniques pioneered at HP by Chris Clare and Dave Cochran for the spectacularly successful handheld scientific calculators (e.g., HP 35) […] My point: circuit design was no longer an element-by-element issue, but a question of "state flow" at lots of nodes—the sequential "words" of registers rather than the voltages of device pins. In effect, it argued that electronic voltages, whether analogic or switched, would "lose out" to software instructions, and "data states." Systems would be designed and analyzed for proper state sequencing rather than analogic signal distortion or digital switching times. […] I'd already seen the power of pre-publication books. Clare's insightful ASM methodology text, Designing Logic Systems Using State Machines, swept through the HP design community […] Stanford's electrical engineering department was not so sanguine, however, canceling Clare's course in 1974, saying that "it is a little bit too unconventional" […] Stanford preferred Quine–McCluskey minimization techniques. Fittingly, Mead's Caltech colleague Ivan Sutherland prepared a Scientific American article (1977) […] about the challenge microelectronics posed to computing theory and practice, noting that since most of a chip's surface was occupied by "wires" (conducting pathways) rather than "components" (transistors), decades of minimization theory in logic design had become irrelevant […](4 pages)
[…] In your April issue you published a letter by R. L. Dineley describing a simple method for treating product-of-sums logical expressions. […] An even simpler method is taught by D. A. Huffman. This method is based on recognizing that the Boolean expression will be zero when any of the factors in the product-of-sums form is zero. Plotting zeroes of factors on a Veitch diagram or Karnaugh map is as easy as locating ones for a sum-of-products expression. […] To illustrate, using Dineley's example (A+BC)(A+C): […] The zeroes resulting from A+BC will be located wherever both A and BC are zero. Therefore we locate on the map the expression A′·(BC)′ (which is equal to A′B′ + A′C′). Similarly the zeroes of A+C are located and plotted at A′C′. With all zeroes located, the rest of the map can be filled with ones. One can be a little more formal and work out algebraically the logical complement of the expression under consideration and then plot zeroes for that resulting expression. In a simple product-of-sums representation, however, the complementary terms can be written by inspection; or the zeroes can be plotted by inspection without writing the complete expression […] "Classical Reduction Involving Infrequently Used Variables" October 11, 1968. University of Santa Clara […] Mr. Osborne's work draws close similarity to that I presented in this article and thus, would certainly be of interest to those readers seeking further information. I understand he has done work to apply the technique of infrequent variables to the design of sequential networks constructed from Read Only Memory. Since he has not yet published anything on this area, if readers would like additional information, they can write Mr. Osborne at: […] Thomas E. Osborne […] Building 1U […] 1501 Page Mill Road […] Palo Alto, California […] Thank you for the opportunity to publish with you. […] G. W. Schultz […] Central Data Systems, Inc. […] Sunnyvale, Calif.(1 page) (NB. Osborne's method was later published by Clare. [B] )
[…] An important contribution to the adaptation of theory to practice was made by Schultz [20]; he draws upon the designer's basic understanding of the problem and requires him to identify the "infrequent variables." Loosely defined, these variables do not relate to all internal states, i.e., they are not needed to define every state. In essence, the infrequent variables are relevant to only a few (perhaps one or two) states or state transitions. Schultz suggests that the designer first translate the verbal problem to a state transition graph that is reduced. The internal states are encoded and then information regarding infrequent variables is added to the appropriate state transitions. A "first approximation" to flip-flop input equations is made, based only upon the frequent variables. Schultz demonstrates how these equations can subsequently be modified to incorporate transitions controlled by the infrequent variables. In Schultz's examples the infrequent variables are all input signals, but this idea also applies to internal state variable signals that may be considered "infrequent." In this case, for example, an infrequent internal state variable flip-flop might be set by a particular circumstance and reset sometime later. The output of the flip-flop may now be treated as an infrequent input variable. […](ix+1+179+3 pages)