Reconvergent fan-out

Reconvergent fan-out is a technique for making VLSI logic simulation and timing analysis less pessimistic by accounting for signal paths that branch apart and later reconverge.

Logic simulation is the use of simulation software to predict the behavior of digital circuits described in hardware description languages. Simulation can be performed at varying degrees of physical abstraction, such as the transistor level, gate level, register-transfer level (RTL), electronic system level (ESL), or behavioral level.

Static timing analysis tries to determine the best-case and worst-case time estimates for each signal as it passes through an electronic device. Whenever a signal passes through a node, a small amount of uncertainty must be added to the time required for the signal to transit that node. These uncertain delays add up, so after passing through many devices the worst-case timing for a signal can become unreasonably pessimistic.

Static timing analysis (STA) is a method of computing the expected timing of a digital circuit without requiring a full simulation of the circuit.

It is common for two signals to share an identical path, branch and follow different paths for a while, then reconverge at the same point to produce a result. When this happens, a fair amount of uncertainty can be removed from the total delay, because the two signals are known to have shared a common path for part of the journey. Even though each signal has an uncertain delay, their delays were identical along the shared segment, so the total uncertainty can be reduced, as the sketch below illustrates. This tightens the worst-case estimate of the signal delay, and usually allows a small but important speedup of the overall device.
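
The effect can be shown with a minimal Python sketch of the idea, often called common path pessimism removal (CPPR). The stage delays, the two-branch topology, and the function name are hypothetical; real timing tools derive them from a netlist.

```python
# Sketch of common path pessimism removal: stages are (min, max) delays
# in nanoseconds; all values here are illustrative.
common   = [(1.0, 1.2), (0.9, 1.1)]  # stages shared by both paths
branch_a = [(2.0, 2.5)]              # stages unique to path A
branch_b = [(1.8, 2.4)]              # stages unique to path B

def spread(stages):
    """Worst-case minus best-case delay across a list of stages."""
    return sum(mx for _, mx in stages) - sum(mn for mn, _ in stages)

# A naive analysis treats every stage of both paths as independently
# uncertain when comparing arrival times at the reconvergence point.
naive = spread(common) + spread(branch_a) + spread(branch_b)

# Recognizing the reconvergent fan-out, the shared stages contribute the
# same (unknown) delay to both paths, so their spread cancels out.
with_cppr = spread(branch_a) + spread(branch_b)

print(naive, with_cppr)  # 1.5 vs 1.1: 0.4 ns of pessimism removed
```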

The term is also starting to be used in a more generic sense: any time a signal splits in two and later reconverges, certain optimizations can be made. Reconvergent fan-out has been used to describe similar optimizations in graph theory and static code analysis.
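
In that generic sense, reconvergence can be found mechanically. The following Python sketch walks a small directed acyclic graph and reports the nodes where two fan-out branches meet again; the four-node graph and the function names are made up for illustration.

```python
from itertools import combinations

# Hypothetical netlist-like DAG: node -> list of successor nodes.
graph = {
    "a": ["b", "c"],  # "a" fans out into two branches...
    "b": ["d"],
    "c": ["d"],       # ...which reconverge at "d"
    "d": [],
}

def reachable(graph, start):
    """All nodes reachable from start, by iterative depth-first search."""
    seen, stack = set(), [start]
    while stack:
        for succ in graph[stack.pop()]:
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return seen

def reconvergence_points(graph, source):
    """Nodes reachable from source through two or more distinct branches."""
    branches = [{s} | reachable(graph, s) for s in graph[source]]
    points = set()
    for x, y in combinations(branches, 2):
        points |= x & y
    return points

print(reconvergence_points(graph, "a"))  # {'d'}
```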

Graph theory

In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices which are connected by edges. A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically; see Graph for more detailed definitions and for other variations in the types of graph that are commonly considered. Graphs are one of the prime objects of study in discrete mathematics.

Related Research Articles

In computer science, the best, worst, and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or another resource. The best case is the function which performs the minimum number of steps on input data of n elements. The worst case is the function which performs the maximum number of steps on input data of size n. The average case is the function which performs an average number of steps on input data of n elements.
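
A linear search makes the three cases concrete; this small Python example is illustrative only.

```python
def linear_search(items, target):
    """Return the index of target in items, scanning left to right."""
    for i, item in enumerate(items):
        if item == target:
            return i  # best case: target is the first element (1 step)
    return -1         # worst case: target is absent (n steps)

# Average case: if the target is equally likely to sit at any of the n
# positions, the expected number of comparisons is (n + 1) / 2.
```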

In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. The most common requirement is to minimize the time taken to execute a program; a less common one is to minimize the amount of memory occupied. The growth of portable computers has created a market for minimizing the power consumed by a program.

System on a chip

A system on a chip or system on chip is an integrated circuit that integrates all components of a computer or other electronic system. These components typically include a central processing unit (CPU), memory, input/output ports and secondary storage – all on a single substrate or microchip, the size of a coin. It may contain digital, analog, mixed-signal, and often radio frequency signal processing functions, depending on the application. As they are integrated on a single substrate, SoCs consume much less power and take up much less area than multi-chip designs with equivalent functionality. Because of this, SoCs are very common in the mobile computing and edge computing markets. Systems on chip are commonly used in embedded systems and the Internet of Things.

In electronics and especially synchronous digital circuits, a clock signal is a particular type of signal that oscillates between a high and a low state and is used like a metronome to coordinate actions of digital circuits.

In digital electronics, the fan-out of a logic gate output is the number of gate inputs it can drive.

ATPG (automatic test pattern generation) is an electronic design automation method used to find an input sequence that, when applied to a digital circuit, enables automatic test equipment to distinguish between the correct circuit behavior and the faulty circuit behavior caused by defects. The generated patterns are used to test semiconductor devices after manufacture, or to assist with determining the cause of failure. The effectiveness of ATPG is measured by the number of modeled defects, or fault models, detectable and by the number of generated patterns. These metrics generally indicate test quality and test application time. ATPG efficiency is another important consideration that is influenced by the fault model under consideration, the type of circuit under test, the level of abstraction used to represent the circuit under test, and the required test quality.
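
A toy Python version of the idea: exhaustively search for an input pattern that distinguishes a good circuit from one carrying a stuck-at-0 fault on an internal net. The three-input circuit here is hypothetical, and real ATPG tools use far more efficient algorithms such as the D-algorithm or PODEM rather than brute force.

```python
from itertools import product

def circuit(a, b, c, stuck_at_zero=False):
    """y = (a AND b) OR c, with an optional stuck-at-0 fault on the AND net."""
    net = 0 if stuck_at_zero else (a & b)
    return net | c

# A test pattern is any input that makes the faulty output observable.
for a, b, c in product([0, 1], repeat=3):
    if circuit(a, b, c) != circuit(a, b, c, stuck_at_zero=True):
        print(f"test pattern found: a={a}, b={b}, c={c}")  # a=1, b=1, c=0
        break
```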

Race condition

A race condition or race hazard is the behavior of an electronic, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when one or more of the possible behaviors is undesirable.
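
A classic minimal demonstration in Python: two threads increment a shared counter without synchronization, so the final value depends on how the threads interleave. The thread counts are arbitrary, and on some interpreter versions lost updates may be rare, but the read-modify-write sequence is unprotected either way.

```python
import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        counter += 1  # read, add, store: not atomic across threads

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but interleaved updates can be lost, giving less.
print(counter)
```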

Source-synchronous clocking refers to a technique used for timing symbols on a digital interface. Specifically, it refers to the technique of having the transmitting device send a clock signal along with the data signals. The timing of the unidirectional data signals is referenced to the clock sourced by the same device that generates those signals, and not to a global clock. Compared to other digital clocking topologies like system-synchronous clocks, where a global clock source is fed to all devices in the system, a source-synchronous clock topology can attain far higher speeds.

The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform.

Clock skew is a phenomenon in synchronous digital circuit systems in which the same sourced clock signal arrives at different components at different times; the instantaneous difference between the readings of any two clocks is called their skew.

Delay calculation is the term used in integrated circuit design for the calculation of the gate delay of a single logic gate and the wires attached to it. By contrast, static timing analysis computes the delays of entire paths, using delay calculation to determine the delay of each gate and wire.
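
The division of labor can be sketched in a few lines of Python: a per-stage delay_calculation() gives the delay of one gate plus its attached wire, and the path delay that STA would report is the sum over the stages. The delay model and all numbers here are placeholders.

```python
def delay_calculation(stage):
    """Placeholder per-stage model: intrinsic gate delay plus wire delay (ns)."""
    return stage["gate_delay"] + stage["wire_delay"]

# Hypothetical two-gate path.
path = [
    {"name": "inv1",  "gate_delay": 0.12, "wire_delay": 0.05},
    {"name": "nand2", "gate_delay": 0.20, "wire_delay": 0.08},
]

path_delay = sum(delay_calculation(stage) for stage in path)
print(f"path delay: {path_delay:.2f} ns")  # 0.45 ns
```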

Elmore delay is a simple approximation to the delay through an RC network in an electronic system. It is often used in applications such as logic synthesis, delay calculation, static timing analysis, placement and routing, since it is simple to compute and is reasonably accurate. Even where it is not accurate, it is usually faithful, in the sense that reducing the Elmore delay will almost always reduce the true delay, so it is still useful in optimization.
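
For an RC ladder the Elmore delay is easy to compute: each resistance is multiplied by the total capacitance downstream of it, and the products are summed. The Python sketch below assumes a simple ladder topology with illustrative component values.

```python
def elmore_delay(resistances, capacitances):
    """Elmore delay at the far end of an RC ladder.

    resistances[i] connects node i-1 to node i; capacitances[i] hangs
    off node i. Delay = sum over i of R_i * (C_i + C_{i+1} + ... + C_n).
    """
    delay = 0.0
    downstream = sum(capacitances)  # everything is downstream of R_1
    for r, c in zip(resistances, capacitances):
        delay += r * downstream
        downstream -= c  # node i's capacitance is upstream of R_{i+1}
    return delay

# Two segments of 100 ohms each, 100 fF at each node.
print(elmore_delay([100.0, 100.0], [1e-13, 1e-13]))  # 3e-11 s, i.e. 30 ps
```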

Retiming is the technique of moving the structural location of latches or registers in a digital circuit to improve its performance, area, and/or power characteristics in such a way that preserves its functional behavior at its outputs. Retiming was first described by Charles E. Leiserson and James B. Saxe in 1983.

Conventional static timing analysis (STA) has been a stock analysis algorithm for the design of digital circuits over the last 30 years. However, in recent years the increased variation in semiconductor devices and interconnect has introduced a number of issues that cannot be handled by traditional (deterministic) STA. This has led to considerable research into statistical static timing analysis, which replaces the normal deterministic timing of gates and interconnects with probability distributions, and gives a distribution of possible circuit outcomes rather than a single outcome.
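
The flavor of the statistical approach can be conveyed with a toy Monte Carlo sketch in Python: each stage delay is drawn from a normal distribution and the resulting distribution of path delays is summarized. Real statistical STA propagates distributions analytically and models correlation between stages; the stage parameters here are assumptions.

```python
import random

random.seed(0)

# (mean, standard deviation) of each stage's delay, in nanoseconds.
stages = [(1.0, 0.05), (0.8, 0.04), (1.2, 0.06)]

samples = sorted(
    sum(random.gauss(mu, sigma) for mu, sigma in stages)
    for _ in range(10_000)
)

print(f"mean path delay : {sum(samples) / len(samples):.3f} ns")
print(f"99.7% quantile  : {samples[int(0.997 * len(samples))]:.3f} ns")
```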

Traffic optimization refers to the methods by which time spent stopped in road traffic is reduced.

In the automated design of integrated circuits, signoff checks is the collective name given to a series of verification steps that the design must pass before it can be taped out. This implies an iterative process involving incremental fixes across the board using one or more check types, and then retesting the design. There are two types of signoff: front-end signoff and back-end signoff. After back-end signoff the chip goes to fabrication. After listing all the features in the specification, the verification engineer writes coverage for those features to identify bugs and sends the RTL design back to the designer. Bugs, or defects, can include issues such as missing features and errors in design. When coverage reaches the target maximum, the verification team signs off. By using a methodology such as UVM, OVM, or VMM, the verification team develops a reusable environment. Nowadays, UVM is more popular than the others.

Audio Video Bridging

Audio Video Bridging (AVB) is a common name for the set of technical standards developed by the Institute of Electrical and Electronics Engineers (IEEE) Audio Video Bridging Task Group of the IEEE 802.1 standards committee. This task group was renamed to Time-Sensitive Networking Task Group in November 2012 to reflect the expanded scope of work.

Power gating is a technique used in integrated circuit design to reduce power consumption, by shutting off the current to blocks of the circuit that are not in use. In addition to reducing stand-by or leakage power, power gating has the benefit of enabling Iddq testing.

High-performance computing applications running on massively parallel supercomputers consist of concurrent programs designed using multi-threaded, multi-process models. The applications may consist of various constructs with varying degrees of parallelism. Although high-performance concurrent programs use design patterns, models, and principles similar to those of sequential programs, unlike sequential programs they typically demonstrate non-deterministic behavior. The probability of bugs increases with the number of interactions between the various parallel constructs. Race conditions, data races, deadlocks, missed signals, and livelock are common error types.