Design for testing

Design for testing or design for testability (DFT) consists of IC design techniques that add testability features to a hardware product design. The added features make it easier to develop and apply manufacturing tests to the designed hardware. The purpose of manufacturing tests is to validate that the product hardware contains no manufacturing defects that could adversely affect the product's correct functioning.

Tests are applied at several steps in the hardware manufacturing flow and, for certain products, may also be used for hardware maintenance in the customer's environment. The tests are generally driven by test programs that execute using automatic test equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to detecting the presence of defects (i.e., the test fails), tests may be able to log diagnostic information about the nature of the encountered failures. This diagnostic information can be used to locate the source of the failure.

In other words, the responses of a known-good circuit to a set of test vectors (patterns) are compared with the responses of the device under test (DUT) to the same patterns. If the responses match, the circuit is good; otherwise, the circuit was not manufactured as intended.
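
At its simplest, this comparison can be expressed in a few lines of code. The following Python sketch (with invented pattern and response values) checks the captured DUT responses against the expected good-circuit responses, pattern by pattern:

    # Minimal sketch: compare captured DUT responses against the expected
    # ("good machine") responses. All values here are illustrative.
    golden_responses = ["1010", "0111", "1100"]   # expected output per pattern
    dut_responses    = ["1010", "0101", "1100"]   # outputs captured from the DUT

    failing = [i for i, (g, d) in enumerate(zip(golden_responses, dut_responses))
               if g != d]
    if failing:
        print("DUT fails on pattern(s):", failing)   # manufactured incorrectly
    else:
        print("DUT passes: all responses match the good circuit")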

DFT plays an important role in the development of test programs and as an interface for test application and diagnostics. Automatic test pattern generation, or ATPG, is much easier if appropriate DFT rules and suggestions have been implemented.

History

DFT techniques have been used at least since the early days of electric/electronic data processing equipment. Early examples from the 1940s/50s are the switches and instruments that allowed an engineer to "scan" (i.e., selectively probe) the voltage/current at some internal nodes in an analog computer [analog scan]. DFT often is associated with design modifications that provide improved access to internal circuit elements such that the local internal state can be controlled (controllability) and/or observed (observability) more easily. The design modifications can be strictly physical in nature (e.g., adding a physical probe point to a net) and/or add active circuit elements to facilitate controllability/observability (e.g., inserting a multiplexer into a net). While controllability and observability improvements for internal circuit elements definitely are important for test, they are not the only type of DFT. Other guidelines, for example, deal with the electromechanical characteristics of the interface between the product under test and the test equipment. Examples are guidelines for the size, shape, and spacing of probe points, or the suggestion to add a high-impedance state to drivers attached to probed nets such that the risk of damage from back-driving is mitigated.

Over the years the industry has developed and used a large variety of more or less detailed and more or less formal guidelines for desired and/or mandatory DFT circuit modifications. The common understanding of DFT in the context of Electronic Design Automation (EDA) for modern microelectronics is shaped to a large extent by the capabilities of commercial DFT software tools as well as by the expertise and experience of a professional community of DFT engineers researching, developing, and using such tools. Much of the related body of DFT knowledge focuses on digital circuits while DFT for analog/mixed-signal circuits takes somewhat of a backseat.

Objectives of DFT for microelectronics products

DFT affects and depends on the methods used for test development, test application, and diagnostics.

Most tool-supported DFT practiced in the industry today, at least for digital circuits, is predicated on a structural test paradigm. Structural test makes no direct attempt to determine if the overall functionality of the circuit is correct. Instead, it tries to make sure that the circuit has been assembled correctly from some low-level building blocks as specified in a structural netlist. For example, are all specified logic gates present, operating correctly, and connected correctly? The stipulation is that if the netlist is correct, and structural testing has confirmed the correct assembly of the circuit elements, then the circuit should be functioning correctly.

Note that this is very different from functional testing, which attempts to validate that the circuit under test functions according to its functional specification. Functional testing is closely related to the functional verification problem of determining whether the circuit specified by the netlist meets the functional specification, assuming it is built correctly.

One benefit of the structural paradigm is that test generation can focus on testing a limited number of relatively simple circuit elements rather than having to deal with an exponentially exploding multiplicity of functional states and state transitions. While the task of testing a single logic gate at a time sounds simple, there is an obstacle to overcome. For today's highly complex designs, most gates are deeply embedded whereas the test equipment is only connected to the primary inputs/outputs (I/Os) and/or some physical test points. The embedded gates, hence, must be manipulated through intervening layers of logic. If the intervening logic contains state elements, then the issue of an exponentially exploding state space and state transition sequencing creates an intractable problem for test generation. To simplify test generation, DFT addresses the accessibility problem by removing the need for complicated state transition sequences when trying to control and/or observe what's happening at some internal circuit element. Depending on the DFT choices made during circuit design/implementation, the generation of structural tests for complex logic circuits can be more or less automated.[1] One key objective of DFT methodologies, hence, is to allow designers to make trade-offs between the amount and type of DFT and the cost/benefit (time, effort, quality) of the test generation task.
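
As a minimal illustration of the structural paradigm, the Python sketch below tests a single 2-input AND gate against one stuck-at fault, assuming the DFT logic already makes the gate's inputs controllable and its output observable; a real ATPG tool would select a minimal pattern set rather than exhausting all inputs:

    # Sketch: structural test of one gate under the single stuck-at fault model.
    def and_gate(a, b, stuck_a=None):
        """2-input AND; 'stuck_a' optionally forces input 'a' to a constant."""
        if stuck_a is not None:
            a = stuck_a
        return a & b

    # Exhaustive patterns for a 2-input gate; ATPG would pick a minimal subset.
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        if and_gate(a, b) != and_gate(a, b, stuck_a=0):   # input 'a' stuck-at-0
            print(f"pattern a={a}, b={b} detects the stuck-at-0 fault")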

Another benefit is the ability to diagnose a circuit if a problem emerges later in its life: DFT adds features and provisions to the design so that the device can be tested for faults while in use.

Looking forward

One challenge for the industry is keeping up with the rapid advances in chip technology (I/O count/size/placement/spacing, I/O speed, internal circuit count/speed/power, thermal control, etc.) without being forced to continually upgrade the test equipment. Modern DFT techniques, hence, have to offer options that allow next-generation chips and assemblies to be tested on existing test equipment and/or reduce the requirements and cost for new test equipment. As a result, DFT techniques are continually being updated, for example through the incorporation of test compression, to ensure that tester application times stay within the bounds dictated by the cost target for the products under test.

Diagnostics

Especially for advanced semiconductor technologies, it is expected that some of the chips on each manufactured wafer contain defects that render them non-functional. The primary objective of testing is to find and separate those non-functional chips from the fully functional ones, meaning that one or more responses captured by the tester from a non-functional chip under test differ from the expected responses. The percentage of chips that fail test, hence, should be closely related to the expected functional yield for that chip type. In reality, however, it is not uncommon that all chips of a new chip type arriving at the test floor for the first time fail (a so-called zero-yield situation). In that case, the chips have to go through a debug process that tries to identify the reason for the zero-yield situation. In other cases, the test fall-out (percentage of test fails) may be higher than expected/acceptable or fluctuate suddenly. Again, the chips have to be subjected to an analysis process to identify the reason for the excessive test fall-out.

In both cases, vital information about the nature of the underlying problem may be hidden in the way the chips fail during test. To facilitate better analysis, additional fail information beyond a simple pass/fail is collected into a fail log. The fail log typically contains information about when (e.g., tester cycle), where (e.g., at what tester channel), and how (e.g., logic value) the test failed. Diagnostics attempt to derive from the fail log at which logical/physical location inside the chip the problem most likely started. By running a large number of failures through the diagnostics process, called volume diagnostics, systematic failures can be identified.
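
A hedged sketch of the mapping step is shown below; the fail-log format and the cycle-to-cell map are hypothetical, since real ATE logs and diagnostic databases are tool-specific:

    # Hypothetical fail-log entries: (tester cycle, scan-out channel, captured value)
    fail_log = [(1042, "SO1", 0), (1043, "SO1", 0)]

    # Hypothetical map from (shift cycle, channel) to a scan cell instance path
    cell_map = {
        (1042, "SO1"): "core/alu/result_reg[3]",
        (1043, "SO1"): "core/alu/result_reg[4]",
    }

    for cycle, channel, value in fail_log:
        cell = cell_map.get((cycle, channel), "<unmapped>")
        print(f"cycle {cycle} on {channel}: captured {value} at {cell}")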

In some cases (e.g., printed circuit boards, multi-chip modules (MCMs), embedded or stand-alone memories) it may be possible to repair a failing circuit under test. For that purpose, diagnostics must quickly locate the failing unit and create a work order for repairing or replacing it.

DFT approaches can be more or less diagnostics-friendly. The related objectives of DFT are to facilitate/simplify fail data collection and diagnostics to an extent that can enable intelligent failure analysis (FA) sample selection, as well as improve the cost, accuracy, speed, and throughput of diagnostics and FA.

Scan design

The most common method for delivering test data from chip inputs to internal circuits under test (CUTs, for short), and observing their outputs, is called scan-design. In scan-design, registers (flip-flops or latches) in the design are connected in one or more scan chains, which are used to gain access to internal nodes of the chip. Test patterns are shifted in via the scan chain(s), functional clock signals are pulsed to test the circuit during the "capture cycle(s)", and the results are then shifted out to chip output pins and compared against the expected "good machine" results.
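
The shift/capture/shift sequence can be illustrated behaviorally. In the Python sketch below the chip's combinational logic is replaced by a stand-in (bitwise inversion), so the expected values are assumptions of the sketch rather than those of any real design:

    def scan_test(pattern, expected):
        chain = [0] * len(pattern)

        # Shift phase (scan-enable asserted): the pattern enters one bit per clock.
        for bit in pattern:
            chain = [bit] + chain[:-1]

        # Capture phase (scan-enable deasserted): one functional clock pulse loads
        # the logic's response into the same flip-flops. Inversion is a stand-in.
        chain = [1 - b for b in chain]

        # Shift-out phase: captured bits leave serially from the far end of the chain.
        response = chain[::-1]
        return "pass" if response == expected else "fail"

    # An all-zero pattern through the inverting stand-in logic captures all ones.
    print(scan_test([0, 0, 0, 0], [1, 1, 1, 1]))   # pass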

Straightforward application of scan techniques can result in large vector sets with correspondingly long tester times and memory requirements. Test compression techniques address this problem by decompressing the scan input on-chip and compressing the test output. Large gains are possible since any particular test vector usually only needs to set and/or examine a small fraction of the scan chain bits.
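
The sketch below illustrates why the gains are large: a test cube specifies only a few "care" bits, so only those need tester storage, while on-chip hardware fills the don't-cares (real decompressors use LFSR/XOR networks rather than the software stand-in shown here; all values are illustrative):

    import random

    cube = [None] * 64                       # None marks a don't-care position
    cube[3], cube[17], cube[42] = 1, 0, 1    # only three specified (care) bits

    care_bits = [(i, v) for i, v in enumerate(cube) if v is not None]
    print(f"tester stores {len(care_bits)} bits instead of {len(cube)}")

    # Stand-in for the on-chip decompressor: fill don't-cares pseudo-randomly.
    rng = random.Random(0)
    expanded = [v if v is not None else rng.randint(0, 1) for v in cube]
    print("first 8 expanded bits:", expanded[:8])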

The output of a scan design may be provided in forms such as Serial Vector Format (SVF), to be executed by test equipment.
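
For example, a short SVF fragment (with invented instruction and data values) shifts an instruction register, then shifts a data pattern while comparing the masked scan-out against an expected value:

    ! Illustrative SVF fragment; opcodes and data values are invented.
    TRST OFF;
    ENDIR IDLE;
    ENDDR IDLE;
    SIR 8 TDI (2A);
    SDR 16 TDI (A55A) TDO (3CC3) MASK (FFFF);
    RUNTEST 100 TCK;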

Debug using DFT features

In addition to being useful for manufacturing "go/no go" testing, scan chains can also be used to "debug" chip designs. In this context, the chip is exercised in normal "functional mode" (for example, a computer or mobile-phone chip might execute assembly language instructions). At any time, the chip clock can be stopped, and the chip re-configured into "test mode". At this point the full internal state can be dumped out, or set to any desired values, by use of the scan chains. Another use of scan to aid debug consists of scanning an initial state into all memory elements and then going back to functional mode to perform system debug. The advantage is to bring the system to a known state without going through many clock cycles. This use of scan chains, along with the associated clock control circuits, forms a related sub-discipline of logic design called "design for debug" or "design for debuggability".[2]
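
A hedged sketch of the dump/restore idea follows; the chain contents and access functions are placeholders, since the actual access mechanism is chip-specific:

    # While the functional clock is stopped, the scan chain doubles as a window
    # into the chip's internal state. All data here is placeholder.
    def debug_snapshot(chain):
        """Dump a copy of the current scan-chain contents (full internal state)."""
        return list(chain)

    def debug_restore(chain, state):
        """Overwrite the scan-chain contents with a desired state before resuming."""
        chain[:] = state

    chain = [0, 1, 1, 0]                 # stand-in for the internal flip-flops
    saved = debug_snapshot(chain)        # inspect state at the breakpoint
    debug_restore(chain, [1, 0, 0, 1])   # set a known state, re-enter functional mode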

See also

Automatic test pattern generation
Boundary scan
Debug port
Design closure
Electronic design automation
Embedded instrumentation
In-circuit emulation
In-circuit testing
JTAG
Memory tester
Scan chain
Test compression
Test engineer

References

  1. Ben-Gal, I., Herer, Y., and Raz, T. (2003). "Self-correcting inspection procedure under inspection errors" (PDF). IIE Transactions on Quality and Reliability, 34(6), pp. 529-540. Archived from the original (PDF) on 2013-10-13. Retrieved 2014-01-10.
  2. Wilson, Ron. "Design for debugging: the unspoken imperative in chip design". EDN, June 21, 2007.