Arbiter (electronics)

Arbiters are electronic devices that allocate access to shared resources.

Bus arbiter

There are several ways to perform computer bus arbitration, with the most common varieties being: [1] [2]

  1. dynamic centralized parallel, where one central arbiter is used for all masters, as discussed in this article (a behavioural sketch follows this list);
  2. centralized serial (or "daisy chain"), where, upon accessing the bus, the active master passes the opportunity on to the next one. In essence, each connected master contains its own arbiter;
  3. distributed arbitration by self-selection (distributed bus arbitration), where access is self-granted based on a decision made locally using information from the other masters;
  4. distributed arbitration by collision detection, where each master tries to access the bus on its own, but detects conflicts and retries the failed operations.
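As a concrete illustration of the first scheme, a centralized parallel arbiter can be modelled behaviourally as a priority encoder over the request lines. The C sketch below is only a toy model under that assumption; the function name and the fixed-priority policy are illustrative, not taken from any particular bus standard.

    /* Behavioural sketch (illustrative names): a dynamic centralized parallel
       arbiter modelled as a fixed-priority encoder.  Each bit of `requests` is
       one master's request line; the grant goes to the lowest-numbered
       requesting master.  Returns -1 when no master is requesting. */
    #include <stdio.h>

    static int fixed_priority_grant(unsigned requests)
    {
        for (int master = 0; master < 32; master++)
            if (requests & (1u << master))
                return master;      /* first (highest-priority) requester wins */
        return -1;                  /* bus stays idle this cycle */
    }

    int main(void)
    {
        /* masters 1 and 3 request simultaneously; master 1 is granted */
        printf("grant = %d\n", fixed_priority_grant(0x0A));
        return 0;
    }

A practical parallel arbiter usually adds some fairness mechanism (for example, rotating priority) so that a low-priority master cannot be starved.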

A bus arbiter is a device used in a multi-master bus system to decide which bus master is allowed to control the bus for each bus cycle. The most common kind of bus arbiter is the memory arbiter on a system bus.

A memory arbiter is a device used in a shared memory system to decide, for each memory cycle, which CPU will be allowed to access that shared memory. [3] [4] [5]

Some atomic instructions depend on the arbiter to prevent other CPUs from reading memory "halfway through" atomic read-modify-write instructions.
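For instance, an atomic increment must behave as one indivisible read-modify-write; at the hardware level the arbiter (together with bus locking or exclusive-access signalling) keeps a second CPU from observing the location between the read and the write. A minimal C11 sketch of issuing such an operation:

    /* Minimal C11 sketch: an atomic read-modify-write.  The compiler emits a
       locked/exclusive instruction; the memory arbiter and the bus protocol
       keep other CPUs from reading `counter` halfway through the update. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int counter = 0;

    int main(void)
    {
        atomic_fetch_add(&counter, 1);   /* read, add 1, write back as one unit */
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }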

A memory arbiter is typically integrated into the memory controller/DMA controller.

Some systems, such as conventional PCI, have a single centralized bus arbitration device that one can point to as "the" bus arbiter, usually integrated into the chipset. [6] Other systems use decentralized bus arbitration, where all the devices cooperate to decide who goes next. [7] [8]

When every CPU connected to the memory arbiter has synchronized memory access cycles, the memory arbiter can be designed as a synchronous arbiter. Otherwise the memory arbiter must be designed as an asynchronous arbiter.

Asynchronous arbiters

An important form of arbiter is used in asynchronous circuits to select the order of access to a shared resource among asynchronous requests. Its function is to prevent two operations from occurring at once when they should not. For example, in a computer that has multiple CPUs or other devices accessing computer memory, and has more than one clock, the possibility exists that requests from two unsynchronized sources could come in at nearly the same time. "Nearly" can be very close in time, in the sub-femtosecond range. The memory arbiter must then decide which request to service first. Unfortunately, it is not possible to build an arbiter that is guaranteed to make this decision within a fixed, bounded time [Anderson 1991]; the reason is the metastable behaviour described in the next section.

Asynchronous arbiters and metastability

Arbiters break ties. Like a flip-flop circuit, an arbiter has two stable states corresponding to the two choices. If two requests arrive at an arbiter within a few picoseconds (today, femtoseconds) of each other, the circuit may become metastable before reaching one of its stable states to break the tie. Classical arbiters are specially designed not to oscillate wildly when metastable and to decay from a metastable state as rapidly as possible, typically by using extra power. The probability of not having reached a stable state decreases exponentially with the time elapsed after the inputs have been provided.
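This exponential behaviour is usually summarized by a first-order model; the symbols below are the conventional ones from the synchronizer literature and are assumptions here, not quantities defined in this article:

    % First-order metastability model (conventional symbols, assumed here):
    %   \tau : resolution time constant of the arbiter's cross-coupled gates
    %   T_W  : width of the metastability window
    %   f_c  : clock (sampling) frequency,  f_d : request rate
    P(\text{still metastable after } t) \approx e^{-t/\tau},
    \qquad
    \mathrm{MTBF}(t) \approx \frac{e^{\,t/\tau}}{T_W \, f_c \, f_d}

Allowing even a few extra time constants of settling therefore makes failures astronomically rare, which is why the multistage synchronizers described below work in practice.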

A reliable solution to this problem was found in the mid-1970s. Although an arbiter that makes a decision in a fixed time is not possible, one that sometimes takes a little longer in the hard case (close calls) can be made to work. It is necessary to use a multistage synchronization circuit that detects when the arbiter has not yet settled into a stable state. The arbiter then delays processing until a stable state has been reached. In theory, the arbiter can take an arbitrarily long time to settle (see Buridan's principle), but in practice it seldom takes more than a few gate delays. The classic papers are [Kinniment and Woods 1976], which describes how to build a "three-state flip-flop" to solve this problem, and [Ginosar 2003], a caution to engineers about common mistakes in arbiter design.
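The behaviour described above can be illustrated with a toy software model (it is not a circuit description, and the time constants are arbitrary): when the two requests are far apart the decision is immediate, and when they are a close call the grant is withheld for however many additional gate delays the internal state needs to settle.

    /* Toy model of an asynchronous arbiter with a metastability filter.
       Request times are in arbitrary units; the 0.01 "close call" window and
       the coin-flip settling are illustrative, not measured values. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Returns the granted request (0 or 1); *delay is the settling time in
       gate-delay units, reported so the caller can see the occasional wait. */
    static int arbitrate(double t_req0, double t_req1, int *delay)
    {
        double gap = t_req0 - t_req1;
        *delay = 1;                          /* easy case: one gate delay */
        if (gap > -0.01 && gap < 0.01) {     /* close call: may go metastable */
            while (rand() % 2)               /* each extra gate delay, settle   */
                (*delay)++;                  /* with probability 1/2 (toy model)*/
            return rand() % 2;               /* tie broken arbitrarily */
        }
        return (gap < 0) ? 0 : 1;            /* clear winner: earlier request */
    }

    int main(void)
    {
        int delay;
        int winner = arbitrate(5.000, 5.001, &delay);
        printf("grant to request %d after %d gate delay(s)\n", winner, delay);
        return 0;
    }

The geometric "keep waiting" loop mirrors the exponentially decaying probability of remaining metastable noted above.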

This result is of considerable practical importance, as multiprocessor computers would not work reliably without it. The first multiprocessor computers date from the late 1960s, predating the development of reliable arbiters. Some early multiprocessors with independent clocks for each processor suffered from arbiter race conditions, and thus unreliability. Today, this is no longer a problem.

Synchronous arbiters

Arbiters are also used in synchronous contexts to allocate access to a shared resource. A wavefront arbiter is an example of a synchronous arbiter, found in one type of large network switch.
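A much simpler synchronous arbiter than a wavefront arbiter is a round-robin arbiter that makes one grant decision per clock cycle. The sketch below models one in C; the names and the four-requester width are illustrative assumptions, not part of any cited design.

    /* Behavioural model of a synchronous round-robin arbiter: once per clock
       cycle, grant the shared resource to the next requester after the one
       granted most recently, so no requester is starved. */
    #include <stdio.h>

    #define N_REQ 4

    /* `last` is the most recently granted requester (N_REQ - 1 if none yet);
       returns the new grant, or -1 if nobody is requesting this cycle. */
    static int round_robin_grant(unsigned requests, int last)
    {
        for (int i = 1; i <= N_REQ; i++) {
            int candidate = (last + i) % N_REQ;   /* start after the last grant */
            if (requests & (1u << candidate))
                return candidate;
        }
        return -1;
    }

    int main(void)
    {
        unsigned requests = 0x0B;                 /* requesters 0, 1 and 3 active */
        int grant = N_REQ - 1;                    /* no grant issued yet */
        for (int cycle = 0; cycle < 4; cycle++) { /* one decision per clock cycle */
            grant = round_robin_grant(requests, grant);
            printf("cycle %d: grant to requester %d\n", cycle, grant);
        }
        return 0;
    }

In a hardware implementation the decision logic would be purely combinational, with the `last` pointer held in a clocked register so that the grant updates on each clock edge.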

Related Research Articles

<span class="mw-page-title-main">FIFO (computing and electronics)</span> Scheduling algorithm, the first piece of data inserted into a queue is processed first

In computing and in systems theory, first in, first out, acronymized as FIFO, is a method for organizing the manipulation of a data structure where the oldest (first) entry, or "head" of the queue, is processed first.

<span class="mw-page-title-main">Interrupt</span> Signal to a computer processor emitted by hardware or software

In digital computers, an interrupt is a request for the processor to interrupt currently executing code, so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler to deal with the event. This interruption is often temporary, allowing the software to resume normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.

<span class="mw-page-title-main">Symmetric multiprocessing</span> The equal sharing of all resources by multiple identical processors

Symmetric multiprocessing or shared-memory multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.

Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).

<span class="mw-page-title-main">Static random-access memory</span> Type of computer memory

Static random-access memory is a type of random-access memory (RAM) that uses latching circuitry (flip-flop) to store each bit. SRAM is volatile memory; data is lost when power is removed.

In automata theory, sequential logic is a type of logic circuit whose output depends on the present value of its input signals and on the sequence of past inputs, the input history. This is in contrast to combinational logic, whose output is a function of only the present input. That is, sequential logic has state (memory) while combinational logic does not.

Futurebus is a computer bus standard designed to replace all local bus connections in a computer, including the CPU, plug-in cards, and even some LAN links between machines. The project started in 1979 and was completed in 1987, but then went through a redesign until 1994. It has seen little real-world use, although custom implementations are still designed.

<span class="mw-page-title-main">Clock signal</span> Timing of electronic circuits

In electronics and especially synchronous digital circuits, a clock signal is an electronic logic signal which oscillates between a high and a low state at a constant frequency and is used like a metronome to synchronize actions of digital circuits. In a synchronous logic circuit, the most common type of digital circuit, the clock signal is applied to all storage devices, flip-flops and latches, and causes them all to change state simultaneously, preventing race conditions.

<span class="mw-page-title-main">Front-side bus</span> Type of computer communication interface

The front-side bus (FSB) is a computer communication interface (bus) that was often used in Intel-chip-based computers during the 1990s and 2000s. The EV6 bus served the same function for competing AMD CPUs. Both typically carry data between the central processing unit (CPU) and a memory controller hub, known as the northbridge.

<span class="mw-page-title-main">System bus</span> Single computer bus that connects the major components of a computer system

A system bus is a single computer bus that connects the major components of a computer system, combining the functions of a data bus to carry information, an address bus to determine where it should be sent or read from, and a control bus to determine its operation. The technique was developed to reduce costs and improve modularity, and although popular in the 1970s and 1980s, more modern computers use a variety of separate buses adapted to more specific needs.

<span class="mw-page-title-main">Q-Bus</span> Computer bus

The Q-bus, also known as the LSI-11 Bus, is one of several bus technologies used with PDP and MicroVAX computer systems previously manufactured by the Digital Equipment Corporation of Maynard, Massachusetts.

In digital electronics, a synchronous circuit is a digital circuit in which the changes in the state of memory elements are synchronized by a clock signal. In a sequential digital logic circuit, data is stored in memory devices called flip-flops or latches. The output of a flip-flop is constant until a pulse is applied to its "clock" input, upon which the input of the flip-flop is latched into its output. In a synchronous logic circuit, an electronic oscillator called the clock generates a string (sequence) of pulses, the "clock signal". This clock signal is applied to every storage element, so in an ideal synchronous circuit, every change in the logical levels of its storage components is simultaneous. Ideally, the input to each storage element has reached its final value before the next clock occurs, so the behaviour of the whole circuit can be predicted exactly. Practically, some delay is required for each logical operation, resulting in a maximum speed at which each synchronous system can run.

An asynchronous circuit is a sequential digital logic circuit that does not use a global clock circuit or signal generator to synchronize its components. Instead, the components are driven by a handshaking circuit which indicates completion of a set of instructions. Handshaking works by simple data transfer protocols. Many synchronous circuits were developed in the early 1950s as part of bigger asynchronous systems. Asynchronous circuits and the theory surrounding them are part of several steps in integrated circuit design, a field of digital electronics engineering.

<span class="mw-page-title-main">Metastability (electronics)</span> Ability of a digital electronic system to remain in unstable equilibrium forever

In electronics, metastability is the ability of a digital electronic system to persist for an unbounded time in an unstable equilibrium or metastable state. In digital logic circuits, a digital signal is required to be within certain voltage or current limits to represent a '0' or '1' logic level for correct circuit operation; if the signal is within a forbidden intermediate range it may cause faulty behavior in logic gates the signal is applied to. In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation. As a result, the circuit can act in unpredictable ways, and may lead to a system failure, sometimes referred to as a "glitch". Metastability is an instance of the Buridan's ass paradox.

In computer architecture, clock gating is a popular power management technique used in many synchronous circuits for reducing dynamic power dissipation, by removing the clock signal when the circuit, or a subpart of it, is not in use or ignores the clock signal. Clock gating saves power by pruning the clock tree, at the cost of adding more logic to a circuit. Pruning the clock disables portions of the circuitry so that the flip-flops in them do not switch state, as switching the state consumes power. When not being switched, the switching power consumption goes to zero, and only leakage currents are incurred.

<span class="mw-page-title-main">Network on a chip</span> Electronic communication subsystem on an integrated circuit

A network on a chip or network-on-chip is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.

In digital electronic design a clock domain crossing (CDC), or simply clock crossing, is the traversal of a signal in a synchronous digital circuit from one clock domain into another. If a signal does not assert long enough and is not registered, it may appear asynchronous on the incoming clock boundary.

<span class="mw-page-title-main">VAX 8000</span> Family of superminicomputers by Digital Equipment Corporation

The VAX 8000 is a discontinued family of superminicomputers developed and manufactured by Digital Equipment Corporation (DEC) using processors implementing the VAX instruction set architecture (ISA).

MQX is a real-time operating system (RTOS) developed by Precise Software Technologies, Inc., and currently sold by Synopsys, Embedded Access, Inc., and NXP Semiconductors.

<span class="mw-page-title-main">Flip-flop (electronics)</span> Electronic circuit with two stable states

In electronics, flip-flops and latches are circuits that have two stable states that can store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state. It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.

References

  1. Noergaard 2012, p. 297.
  2. Gottlieb 1999.
  3. Michael Fingeroff. High-Level Synthesis Blue Book. 2010. p. 270: "The bus or memory arbiter processes the request from the different processes and decides who gets access to the bus/memory."
  4. Arten Esa, Bryan Myers. "Design of an Arbiter for DDR3 Memory". 2013.
  5. Kearney, D.A.; Veldman, G. "A concurrent multi-bank memory arbiter for dynamic IP cores using idle skip round robin". 2003. doi:10.1109/FPT.2003.1275789.
  6. docs.oracle.com. https://docs.oracle.com/cd/E19620-01/805-4447/auto2/index.html. Retrieved 2024-07-26.
  7. Tim Downey. "Bus Arbitration"
  8. Shun Yan Cheung. "Bus Arbitration" Archived 2022-05-28 at the Wayback Machine

Sources