Peripheral Component Interconnect

PCI (PCI Local Bus)

Three 5-volt 32-bit PCI expansion slots on a motherboard (PC bracket on left side)

Year created: June 22, 1992 [1]
Created by: Intel
Supersedes: ISA, EISA, MCA, VLB
Superseded by: AGP for graphics (1997), PCI Express (2004)
Width in bits: 32 or 64
Speed (half-duplex): [2]
  133 MB/s (32-bit at 33 MHz, the standard configuration)
  266 MB/s (32-bit at 66 MHz)
  266 MB/s (64-bit at 33 MHz)
  533 MB/s (64-bit at 66 MHz)
Style: Parallel
Hotplugging interface: Optional
Website: www.pcisig.com/home

Peripheral Component Interconnect (PCI) [3] is a local computer bus for attaching hardware devices in a computer and is part of the PCI Local Bus standard. The PCI bus supports the functions found on a processor bus but in a standardized format that is independent of any given processor's native bus. Devices connected to the PCI bus appear to a bus master to be connected directly to its own bus and are assigned addresses in the processor's address space. [4] It is a parallel bus, synchronous to a single bus clock. Attached devices can take either the form of an integrated circuit fitted onto the motherboard (called a planar device in the PCI specification) or an expansion card that fits into a slot. The PCI Local Bus was first implemented in IBM PC compatibles, where it displaced the combination of several slow Industry Standard Architecture (ISA) slots and one fast VESA Local Bus (VLB) slot as the bus configuration. It has subsequently been adopted for other computer types. Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as Universal Serial Bus (USB) or serial, TV tuner cards and hard disk drive host adapters. PCI video cards replaced ISA and VLB cards until rising bandwidth needs outgrew the abilities of PCI. The preferred interface for video cards then became Accelerated Graphics Port (AGP), a superset of PCI, before giving way to PCI Express. [5]

The first version of PCI found in retail desktop computers was a 32-bit bus using a 33 MHz bus clock and 5 V signaling, although the PCI 1.0 standard provided for a 64-bit variant as well. [6] These have one locating notch in the card. Version 2.0 of the PCI standard introduced 3.3 V slots, physically distinguished by a flipped physical connector to prevent accidental insertion of 5 V cards. Universal cards, which can operate on either voltage, have two notches. Version 2.1 of the PCI standard introduced optional 66 MHz operation. A server-oriented variant of PCI, PCI Extended (PCI-X), operated at frequencies up to 133 MHz for PCI-X 1.0 and up to 533 MHz for PCI-X 2.0. An internal connector for laptop cards, called Mini PCI, was introduced in version 2.2 of the PCI specification. The PCI bus was also adopted for an external laptop connector standard, the CardBus. [7] The first PCI specification was developed by Intel, but subsequent development of the standard became the responsibility of the PCI Special Interest Group (PCI-SIG). [8]

PCI and PCI-X are sometimes referred to as Parallel PCI or Conventional PCI [9] to distinguish them technologically from their more recent successor, PCI Express, which adopted a serial, lane-based architecture. [10] [11] PCI's heyday in the desktop computer market was approximately 1995 to 2005. [10] PCI and PCI-X have become obsolete for most purposes and have largely disappeared from modern motherboards since 2013; however, they are still common on some desktops as of 2020, both for backward compatibility and because they are relatively cheap to produce. Another common modern application of parallel PCI is in industrial PCs, where many specialized expansion cards never transitioned to PCI Express, just as with some ISA cards. Many kinds of devices formerly available on PCI expansion cards are now commonly integrated onto motherboards or available in USB and PCI Express versions.

History

A typical 32-bit, 5 V-only PCI card, in this case a SCSI adapter from Adaptec
A motherboard with two 32-bit PCI slots and two sizes of PCI Express slots

Work on PCI began at the Intel Architecture Labs (IAL, also Architecture Development Lab) around 1990. A team of primarily IAL engineers defined the architecture and developed a proof-of-concept chipset and platform (Saturn), partnering with teams in the company's desktop PC systems and core logic product organizations.

PCI was immediately put to use in servers, replacing Micro Channel architecture (MCA) and Extended Industry Standard Architecture (EISA) as the server expansion bus of choice. In mainstream PCs, PCI was slower to replace VLB, and did not gain significant market penetration until late 1994 in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for Intel 80486 (486) computers. [12] EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers (replacing NuBus) in mid-1995, and the consumer Performa product line (replacing LC Processor Direct Slot (PDS)) in mid-1996.

Outside the server market, the 64-bit version of plain PCI remained rare in practice, [13] although it was used, for example, by all (post-iMac) G3 and G4 Power Macintosh computers. [14]

Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard and 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors. Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards. These revisions were used on server hardware, but consumer PC hardware remained nearly all 32-bit, 33 MHz and 5 V.

The PCI-SIG introduced the serial PCI Express in about 2004. Since then, motherboard manufacturers have included progressively fewer PCI slots in favor of the new standard. As of late 2013, many new motherboards do not provide PCI slots at all. [ citation needed ]

PCI history [15]
Spec    | Year | Change summary [16]
PCI 1.0 | 1992 | Original issue
PCI 2.0 | 1993 | Incorporated connector and add-in card specification
PCI 2.1 | 1995 | Incorporated clarifications and added 66 MHz chapter
PCI 2.2 | 1998 | Incorporated ECNs and improved readability
PCI 2.3 | 2002 | Incorporated ECNs and errata; deleted 5-volt-only keyed add-in cards
PCI 3.0 | 2004 | Removed support for the 5.0-volt keyed system board connector

Auto configuration

PCI provides separate memory and memory-mapped I/O port address spaces for the x86 processor family, 64 and 32 bits, respectively. Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration Space, which uses a fixed addressing scheme, allows software to determine the amount of memory and I/O address space needed by each device. Each device can request up to six areas of memory space or input/output (I/O) port space via its configuration space registers.
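
On x86 PCs, configuration space is most commonly reached through "configuration mechanism #1": two 32-bit I/O ports, CONFIG_ADDRESS at 0xCF8 and CONFIG_DATA at 0xCFC. The following is a minimal sketch of that convention in C; the outl and inl port-I/O helpers are assumed to be supplied by the surrounding environment (an operating system kernel or a freestanding build) and are not themselves part of PCI.

    #include <stdint.h>

    /* Assumed port-I/O helpers, not defined here; a kernel or freestanding
     * environment normally provides equivalents. */
    extern void     outl(uint16_t port, uint32_t value);
    extern uint32_t inl(uint16_t port);

    #define PCI_CONFIG_ADDRESS 0xCF8   /* CONFIG_ADDRESS register */
    #define PCI_CONFIG_DATA    0xCFC   /* CONFIG_DATA register */

    /* Build the CONFIG_ADDRESS value: enable bit, bus (0-255),
     * device (0-31), function (0-7), dword-aligned register offset. */
    static uint32_t pci_cfg_address(uint8_t bus, uint8_t dev, uint8_t fn,
                                    uint8_t offset)
    {
        return (1u << 31) | ((uint32_t)bus << 16) |
               ((uint32_t)(dev & 0x1F) << 11) | ((uint32_t)(fn & 0x07) << 8) |
               (offset & 0xFC);
    }

    /* Read one 32-bit configuration register of the given function. */
    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn,
                                   uint8_t offset)
    {
        outl(PCI_CONFIG_ADDRESS, pci_cfg_address(bus, dev, fn, offset));
        return inl(PCI_CONFIG_DATA);
    }

    /* Write one 32-bit configuration register of the given function. */
    static void pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn,
                                uint8_t offset, uint32_t value)
    {
        outl(PCI_CONFIG_ADDRESS, pci_cfg_address(bus, dev, fn, offset));
        outl(PCI_CONFIG_DATA, value);
    }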

In a typical system, the firmware (or operating system) queries all PCI buses at startup time (via PCI Configuration Space) to find out what devices are present and what system resources (memory space, I/O space, interrupt lines, etc.) each needs. It then allocates the resources and tells each device what its allocation is.
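
As a rough illustration of the sizing step for one memory Base Address Register (BAR): configuration software writes all-ones to the BAR, reads it back, and the bits the device forced to zero reveal how much address space it is requesting. The sketch reuses the hypothetical pci_cfg_read32/pci_cfg_write32 helpers above and ignores 64-bit and I/O-space BARs for brevity.

    #define PCI_BAR0_OFFSET 0x10   /* BARs 0-5 sit at 0x10, 0x14, ... 0x24
                                      in a type 0 configuration header */

    /* Size in bytes of the region requested by one 32-bit memory BAR,
     * or 0 if the BAR is not implemented. */
    static uint32_t pci_bar_size(uint8_t bus, uint8_t dev, uint8_t fn, int bar)
    {
        uint8_t  off  = PCI_BAR0_OFFSET + 4 * bar;
        uint32_t orig = pci_cfg_read32(bus, dev, fn, off);

        pci_cfg_write32(bus, dev, fn, off, 0xFFFFFFFFu);
        uint32_t probe = pci_cfg_read32(bus, dev, fn, off);
        pci_cfg_write32(bus, dev, fn, off, orig);    /* restore the BAR */

        probe &= 0xFFFFFFF0u;        /* strip the memory BAR flag bits */
        if (probe == 0)
            return 0;                /* unimplemented BAR reads back as zero */
        return ~probe + 1;           /* lowest settable bit gives the size */
    }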

The PCI configuration space also contains a small amount of device type information, which helps an operating system choose device drivers for it, or at least to have a dialogue with a user about the system configuration.

Devices may have an on-board read-only memory (ROM) containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an Option ROM. These are typically needed for devices used during system startup, before device drivers are loaded by the operating system.

In addition, there are PCI Latency Timers that are a mechanism for PCI Bus-Mastering devices to share the PCI bus fairly. "Fair" in this case means that devices will not use such a large portion of the available PCI bus bandwidth that other devices are not able to get needed work done. Note, this does not apply to PCI Express.

How this works is that each PCI device that can operate in bus-master mode is required to implement a timer, called the Latency Timer, that limits the time that device can hold the PCI bus. The timer starts when the device gains bus ownership, and counts down at the rate of the PCI clock. When the counter reaches zero, the device is required to release the bus. If no other devices are waiting for bus ownership, it may simply grab the bus again and transfer more data. [17]
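
For illustration, in the standard configuration header the latency timer is the byte at offset 0x0D, between the cache line size register (0x0C) and the header type (0x0E). A sketch of reading and programming it, again using the hypothetical helpers above:

    #define PCI_CACHELINE_DWORD 0x0C   /* dword holding cache line size,
                                          latency timer, header type, BIST */

    /* Current latency timer value, in PCI clock cycles. */
    static uint8_t pci_get_latency_timer(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return (pci_cfg_read32(bus, dev, fn, PCI_CACHELINE_DWORD) >> 8) & 0xFF;
    }

    /* Program a new latency timer value; devices may hard-wire the low bits. */
    static void pci_set_latency_timer(uint8_t bus, uint8_t dev, uint8_t fn,
                                      uint8_t clocks)
    {
        uint32_t d = pci_cfg_read32(bus, dev, fn, PCI_CACHELINE_DWORD);
        d = (d & ~0x0000FF00u) | ((uint32_t)clocks << 8);
        pci_cfg_write32(bus, dev, fn, PCI_CACHELINE_DWORD, d);
    }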

Interrupts

Devices are required to follow a protocol so that the interrupt-request (IRQ) lines can be shared. The PCI bus includes four interrupt lines, INTA# through INTD#, all of which are available to each device. Up to eight PCI interrupt lines (INTA# through INTH#) may be shared among devices in APIC-enabled x86 systems. Interrupt lines are not wired in parallel as are the other PCI bus lines. The positions of the interrupt lines rotate between slots, so what appears to one device as the INTA# line is INTB# to the next and INTC# to the one after that. Single-function devices usually use their INTA# for interrupt signaling, so the device load is spread fairly evenly across the four available interrupt lines. This alleviates a common problem with sharing interrupts.

The mapping of PCI interrupt lines onto system interrupt lines, through the PCI host bridge, is implementation-dependent. Platform-specific firmware or operating system code is meant to know this, and set the "interrupt line" field in each device's configuration space indicating which IRQ it is connected to.
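
For illustration, the "interrupt line" and "interrupt pin" fields are the bytes at offsets 0x3C and 0x3D of the standard configuration header; a driver can read them with the hypothetical configuration helpers sketched earlier:

    #define PCI_INTERRUPT_DWORD 0x3C   /* interrupt line, interrupt pin,
                                          min_gnt and max_lat bytes */

    /* IRQ number recorded by the firmware for this function; 0xFF is the
     * conventional "unknown / not connected" value on x86. */
    static uint8_t pci_interrupt_line(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return pci_cfg_read32(bus, dev, fn, PCI_INTERRUPT_DWORD) & 0xFF;
    }

    /* INTx# pin the function uses: 1 = INTA#, 2 = INTB#, 3 = INTC#,
     * 4 = INTD#, 0 = the function has no interrupt pin. */
    static uint8_t pci_interrupt_pin(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return (pci_cfg_read32(bus, dev, fn, PCI_INTERRUPT_DWORD) >> 8) & 0xFF;
    }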

PCI interrupt lines are level-triggered. This was chosen over edge-triggering to gain an advantage when servicing a shared interrupt line, and for robustness: edge-triggered interrupts are easy to miss.

Later revisions of the PCI specification add support for message-signaled interrupts. In this system, a device signals its need for service by performing a memory write, rather than by asserting a dedicated line. This alleviates the problem of scarcity of interrupt lines. Even if interrupt vectors are still shared, it does not suffer the sharing problems of level-triggered interrupts. It also resolves the routing problem, because the memory write is not unpredictably modified between device and host. Finally, because the message signaling is in-band, it resolves some synchronization problems that can occur with posted writes and out-of-band interrupt lines.
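
As a rough sketch of the idea, the basic (32-bit address, no per-vector masking) MSI capability in configuration space holds the address the device should write to and the data value it should write when it needs service; the field names below are illustrative:

    #include <stdint.h>

    /* Basic MSI capability (capability ID 0x05) as laid out in configuration
     * space. Real code locates it by walking the capability list and honours
     * the flags in msg_control. */
    struct pci_msi_cap32 {
        uint8_t  cap_id;       /* 0x05 = MSI */
        uint8_t  next_cap;     /* offset of the next capability, 0 = end */
        uint16_t msg_control;  /* enable bit, vector count, 64-bit flag */
        uint32_t msg_address;  /* address the device writes to ...      */
        uint16_t msg_data;     /* ... and the value it writes there     */
    };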

PCI Express does not have physical interrupt lines at all. It uses message-signaled interrupts exclusively.

Conventional hardware specifications

Diagram showing the different key positions for 32-bit and 64-bit PCI cards

These specifications represent the most common version of PCI used in normal PCs:

The PCI specification also provides options for 3.3 V signaling, 64-bit bus width, and 66 MHz clocking, but these are not commonly encountered outside of PCI-X support on server motherboards.

The PCI bus arbiter performs bus arbitration among multiple masters on the PCI bus. Any number of bus masters can reside on the PCI bus and request it; one pair of request and grant signals is dedicated to each bus master.

Card voltage and keying

A PCI-X Gigabit Ethernet expansion card with both 5 V and 3.3 V support notches, side B toward the camera

Typical PCI cards have either one or two key notches, depending on their signaling voltage. Cards requiring 3.3 volts have a notch 56.21 mm from the card backplate; those requiring 5 volts have a notch 104.41 mm from the backplate. This allows cards to be fitted only into slots with a voltage they support. "Universal cards" accepting either voltage have both key notches.

Connector pinout

The PCI connector is defined as having 62 contacts on each side of the edge connector, but two or four of them are replaced by key notches, so a card has 60 or 58 contacts on each side. Side A refers to the 'solder side' and side B refers to the 'component side': if the card is held with the connector pointing down, a view of side A will have the backplate on the right, whereas a view of side B will have the backplate on the left. The pinout of the B and A sides is as follows, looking down into the motherboard connector (pins A1 and B1 are closest to the backplate). [16] [18] [19]

32-bit PCI connector pinout
Pin | Side B | Side A | Comments
1 | −12 V | TRST# | JTAG port pins (optional)
2 | TCK | +12 V |
3 | Ground | TMS |
4 | TDO | TDI |
5 | +5 V | +5 V |
6 | +5 V | INTA# | Interrupt pins (open-drain)
7 | INTB# | INTC# |
8 | INTD# | +5 V |
9 | PRSNT1# | Reserved | Pulled low to indicate 7.5 or 25 W power required
10 | Reserved | IOPWR | +5 V or +3.3 V
11 | PRSNT2# | Reserved | Pulled low to indicate 7.5 or 15 W power required
12 | Ground | Ground | Key notch for 3.3 V-capable cards
13 | Ground | Ground |
14 | Reserved | 3.3 V aux | Standby power (optional)
15 | Ground | RST# | Bus reset
16 | CLK | IOPWR | 33/66 MHz clock
17 | Ground | GNT# | Bus grant from motherboard to card
18 | REQ# | Ground | Bus request from card to motherboard
19 | IOPWR | PME# | Power management event (optional); 3.3 V, open drain, active low [20]
20 | AD[31] | AD[30] | Address/data bus (upper half)
21 | AD[29] | +3.3 V |
22 | Ground | AD[28] |
23 | AD[27] | AD[26] |
24 | AD[25] | Ground |
25 | +3.3 V | AD[24] |
26 | C/BE[3]# | IDSEL |
27 | AD[23] | +3.3 V |
28 | Ground | AD[22] |
29 | AD[21] | AD[20] |
30 | AD[19] | Ground |
31 | +3.3 V | AD[18] |
32 | AD[17] | AD[16] |
33 | C/BE[2]# | +3.3 V |
34 | Ground | FRAME# | Bus transfer in progress
35 | IRDY# | Ground | Initiator ready
36 | +3.3 V | TRDY# | Target ready
37 | DEVSEL# | Ground | Target selected
38 | PCIXCAP (Ground) | STOP# | PCI-X capable; target requests halt
39 | LOCK# | +3.3 V | Locked transaction
40 | PERR# | SMBCLK (SDONE) | Parity error; SMBus clock or Snoop done (obsolete)
41 | +3.3 V | SMBDAT (SBO#) | SMBus data or Snoop backoff (obsolete)
42 | SERR# | Ground | System error
43 | +3.3 V | PAR | Even parity over AD[31:00] and C/BE[3:0]#
44 | C/BE[1]# | AD[15] | Address/data bus (higher half)
45 | AD[14] | +3.3 V |
46 | Ground | AD[13] |
47 | AD[12] | AD[11] |
48 | AD[10] | Ground |
49 | M66EN (Ground) | AD[09] |
50 | Ground | Ground | Key notch for 5 V-capable cards
51 | Ground | Ground |
52 | AD[08] | C/BE[0]# | Address/data bus (lower half)
53 | AD[07] | +3.3 V |
54 | +3.3 V | AD[06] |
55 | AD[05] | AD[04] |
56 | AD[03] | Ground |
57 | Ground | AD[02] |
58 | AD[01] | AD[00] |
59 | IOPWR | IOPWR |
60 | ACK64# | REQ64# | For 64-bit extension; no connect for 32-bit devices.
61 | +5 V | +5 V |
62 | +5 V | +5 V |

64-bit PCI extends this by an additional 32 contacts on each side which provide AD[63:32], C/BE[7:4]#, the PAR64 parity signal, and a number of power and ground pins.

Legend
Ground pin: Zero volt reference
Power pin: Supplies power to the PCI card
Output pin: Driven by the PCI card, received by the motherboard
Initiator output: Driven by the master/initiator, received by the target
I/O signal: May be driven by initiator or target, depending on operation
Target output: Driven by the target, received by the initiator/master
Input: Driven by the motherboard, received by the PCI card
Open drain: May be pulled low and/or sensed by multiple cards
Reserved: Not presently used, do not connect

Most lines are connected to each slot in parallel. The exceptions are:

Notes:

Mixing of 32-bit and 64-bit PCI cards in different width slots

A semi-inserted PCI-X card in a 32-bit PCI slot, illustrating the need for the rightmost notch and the extra room on the motherboard to remain backward compatible
A 64-bit SCSI card working in a 32-bit PCI slot

Most 32-bit PCI cards will function properly in 64-bit PCI-X slots, but the bus clock rate will be limited to the clock frequency of the slowest card, an inherent limitation of PCI's shared bus topology. For example, when a PCI 2.3, 66-MHz peripheral is installed into a PCI-X bus capable of 133 MHz, the entire bus backplane will be limited to 66 MHz. To get around this limitation, many motherboards have two or more PCI/PCI-X buses, with one bus intended for use with high-speed PCI-X peripherals, and the other bus intended for general-purpose peripherals.

Many 64-bit PCI-X cards are designed to work in 32-bit mode if inserted in shorter 32-bit connectors, with some loss of performance. [22] [23] An example of this is the Adaptec 29160 64-bit SCSI interface card. [24] However, some 64-bit PCI-X cards do not work in standard 32-bit PCI slots. [25] [ unreliable source? ]

Installing a 64-bit PCI-X card in a 32-bit slot will leave the 64-bit portion of the card edge connector not connected and overhanging. This requires that there be no motherboard components positioned so as to mechanically obstruct the overhanging portion of the card edge connector.

Physical dimensions

PCI bracket heights:

PCI Card lengths (Standard Bracket & 3.3 V): [28]

PCI Card lengths (Low Profile Bracket & 3.3 V): [29]

Mini PCI

A Mini PCI slot
A Mini PCI Wi-Fi card, Type IIIB
A PCI-to-MiniPCI converter, Type III
MiniPCI and MiniPCI Express cards in comparison

Mini PCI was added to PCI version 2.2 for use in laptops and some routers;[ citation needed ] it uses a 32-bit, 33 MHz bus with powered connections (3.3 V only; 5 V is limited to 100 mA) and support for bus mastering and DMA. The standard size for Mini PCI cards is approximately a quarter of their full-sized counterparts. There is no access to the card from outside the case, unlike desktop PCI cards with brackets carrying connectors. This limits the kinds of functions a Mini PCI card can perform.

Many Mini PCI devices were developed, such as Wi-Fi, Fast Ethernet, Bluetooth, modems (often Winmodems), sound cards, cryptographic accelerators, SCSI, IDE-ATA and SATA controllers, and combination cards. Mini PCI cards can be used with regular PCI-equipped hardware, using Mini PCI-to-PCI converters. Mini PCI has been superseded by the much narrower PCI Express Mini Card.

Technical details of Mini PCI

Mini PCI cards have a 2 W maximum power consumption, which limits the functionality that can be implemented in this form factor. They also are required to support the CLKRUN# PCI signal used to start and stop the PCI clock for power management purposes.

There are three card form factors: Type I, Type II, and Type III. Types I and II use a 100-pin stacking connector, while Type III uses a 124-pin edge connector; that is, the connector for Types I and II differs from that for Type III, where the connector is on the edge of the card, as with a SO-DIMM. The additional 24 pins provide the extra signals required to route I/O back through the system connector (audio, AC-Link, LAN, phone-line interface). Type II cards have mounted RJ11 and RJ45 connectors, so they must be located at the edge of the computer or docking station so that the ports can be mounted for external access.

Type | Card on outer edge of host system | Connector | Size (mm × mm × mm) | Comments
IA | No | 100-pin stacking | 7.5 × 70 × 45 | Large Z dimension (7.5 mm)
IB | No | 100-pin stacking | 5.5 × 70 × 45 | Smaller Z dimension (5.5 mm)
IIA | Yes | 100-pin stacking | 17.44 × 70 × 45 | Large Z dimension (17.44 mm)
IIB | Yes | 100-pin stacking | 5.5 × 78 × 45 | Smaller Z dimension (5.5 mm)
IIIA | No | 124-pin card edge | 2.4 × 59.6 × 50.95 | Larger Y dimension (50.95 mm)
IIIB | No | 124-pin card edge | 2.4 × 59.6 × 44.6 | Smaller Y dimension (44.6 mm)

Mini PCI is distinct from 144-pin Micro PCI. [30]

PCI bus transactions

PCI bus traffic consists of a series of PCI bus transactions. Each transaction consists of an address phase followed by one or more data phases. The direction of the data phases may be from initiator to target (write transaction) or vice versa (read transaction), but all of the data phases must be in the same direction. Either party may pause or halt the data phases at any point. (One common example is a low-performance PCI device that does not support burst transactions, and always halts a transaction after the first data phase.)

Any PCI device may initiate a transaction. First, it must request permission from a PCI bus arbiter on the motherboard. The arbiter grants permission to one of the requesting devices. The initiator begins the address phase by broadcasting a 32-bit address plus a 4-bit command code, then waits for a target to respond. All other devices examine this address and one of them responds a few cycles later.

64-bit addressing is done using a two-stage address phase. The initiator broadcasts the low 32 address bits, accompanied by a special "dual address cycle" command code. Devices that do not support 64-bit addressing can simply not respond to that command code. The next cycle, the initiator transmits the high 32 address bits, plus the real command code. The transaction operates identically from that point on. To ensure compatibility with 32-bit PCI devices, it is forbidden to use a dual address cycle if not necessary, i.e. if the high-order address bits are all zero.

While the PCI bus transfers 32 bits per data phase, the initiator transmits 4 active-low byte enable signals indicating which 8-bit bytes are to be considered significant. In particular, a write must affect only the enabled bytes in the target PCI device. They are of little importance for memory reads, but I/O reads might have side effects. The PCI standard explicitly allows a data phase with no bytes enabled, which must behave as a no-op.
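
As a small worked example, the active-low byte enables for a transfer of len bytes starting at a given byte offset within the 32-bit word can be computed as follows; len = 0 yields all enables deasserted, the no-op data phase mentioned above.

    #include <stdint.h>

    /* C/BE[3:0]# value for a transfer of `len` bytes (0-4) starting at byte
     * `offset` (0-3) of the current word. Active low: a 0 bit enables the
     * byte. Example: 2 bytes at offset 2 -> enables 0b1100 -> C/BE# 0b0011. */
    static uint8_t pci_byte_enables(unsigned offset, unsigned len)
    {
        uint8_t enabled = (uint8_t)((((1u << len) - 1u) << offset) & 0xFu);
        return (uint8_t)(~enabled & 0xFu);
    }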

PCI address spaces

PCI has three address spaces: memory, I/O address, and configuration.

Memory addresses are 32 bits (optionally 64 bits) in size, support caching, and can be accessed with burst transactions.

I/O addresses are for compatibility with the Intel x86 architecture's I/O port address space. Although the PCI bus specification allows burst transactions in any address space, most devices only support it for memory addresses and not I/O.

Finally, PCI configuration space provides access to 256 bytes of special configuration registers per PCI device. Each PCI slot gets its own configuration space address range. The registers are used to configure each device's memory and I/O address ranges, i.e. the ranges it should respond to in transactions from initiators. When a computer is first turned on, all PCI devices respond only to their configuration space accesses. The computer's BIOS scans for devices and assigns memory and I/O address ranges to them.

If an address is not claimed by any device, the transaction initiator's address phase will time out, causing the initiator to abort the operation. In the case of reads, it is customary to supply all-ones (0xFFFFFFFF) as the read data value. PCI devices therefore generally attempt to avoid using the all-ones value in important status registers, so that such an error can be easily detected by software.
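
Enumeration code typically relies on exactly this behaviour: it reads the vendor/device ID dword of every possible device and treats an all-ones answer as an empty slot. A sketch, reusing the hypothetical pci_cfg_read32 helper from the auto-configuration section:

    #include <stdint.h>
    #include <stdio.h>

    /* Scan one bus: a master abort on the configuration read of an empty
     * slot returns 0xFFFFFFFF, and no valid device has vendor ID 0xFFFF.
     * (Multi-function devices are ignored here for brevity.) */
    static void pci_scan_bus(uint8_t bus)
    {
        for (uint8_t dev = 0; dev < 32; dev++) {
            uint32_t id = pci_cfg_read32(bus, dev, 0, 0x00);
            if ((id & 0xFFFFu) == 0xFFFFu)
                continue;                       /* nothing in this slot */
            printf("bus %u dev %u: vendor %04x device %04x\n",
                   (unsigned)bus, (unsigned)dev,
                   (unsigned)(id & 0xFFFFu), (unsigned)(id >> 16));
        }
    }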

PCI command codes

There are 16 possible 4-bit command codes, and 12 of them are assigned. With the exception of the unique dual address cycle, the least significant bit of the command code indicates whether the following data phases are a read (data sent from target to initiator) or a write (data sent from initiator to target). PCI targets must examine the command code as well as the address and not respond to address phases that specify an unsupported command code.

The commands that refer to cache lines depend on the PCI configuration space cache line size register being set up properly; they may not be used until that has been done.

0000: Interrupt Acknowledge
This is a special form of read cycle implicitly addressed to the interrupt controller, which returns an interrupt vector. The 32-bit address field is ignored. One possible implementation is to generate an interrupt acknowledge cycle on an ISA bus using a PCI/ISA bus bridge. This command is for IBM PC compatibility; if there is no Intel 8259 style interrupt controller on the PCI bus, this cycle need never be used.
0001: Special Cycle
This cycle is a special broadcast write of system events that PCI cards may be interested in. The address field of a special cycle is ignored, but it is followed by a data phase containing a payload message. The currently defined messages announce that the processor is stopping for some reason (e.g. to save power). No device ever responds to this cycle; it is always terminated with a master abort after leaving the data on the bus for at least 4 cycles.
0010: I/O Read
This performs a read from I/O space. All 32 bits of the read address are provided, so that a device may (for compatibility reasons) implement fewer than 4 bytes' worth of I/O registers. If the byte enables request data not within the address range supported by the PCI device (e.g. a 4-byte read from a device which only supports 2 bytes of I/O address space), it must be terminated with a target abort. Multiple data cycles are permitted, using linear (simple incrementing) burst ordering.
The PCI standard discourages the use of I/O space in new devices, preferring that as much as possible be done through main memory mapping.
0011: I/O Write
This performs a write to I/O space.
010x: Reserved
A PCI device must not respond to an address cycle with these command codes.
0110: Memory Read
This performs a read cycle from memory space. Because the smallest memory space a PCI device is permitted to implement is 16 bytes, [18] [16] :§6.5.2.1 the two least significant bits of the address are not needed during the address phase; equivalent information will arrive during the data phases in the form of byte select signals. They instead specify the order in which burst data must be returned. [18] [16] :§3.2.2.2 If a device does not support the requested order, it must provide the first word and then disconnect.
If a memory space is marked as "prefetchable", then the target device must ignore the byte-select signals on a memory read and always return 32 valid bits.
0111: Memory Write
This operates similarly to a memory read. The byte select signals are more important in a write, as unselected bytes must not be written to memory.
Generally, PCI writes are faster than PCI reads, because a device may buffer the incoming write data and release the bus faster. For a read, it must delay the data phase until the data has been fetched.
100x: Reserved
A PCI device must not respond to an address cycle with these command codes.
1010: Configuration Read
This is similar to an I/O read, but reads from PCI configuration space. A device must respond only if the low 11 bits of the address specify a function and register that it implements, and if the special IDSEL signal is asserted. It must ignore the high 21 bits. Burst reads (using linear incrementing) are permitted in PCI configuration space.
Unlike I/O space, standard PCI configuration registers are defined so that reads never disturb the state of the device. It is possible for a device to have configuration space registers beyond the standard 64 bytes which have read side effects, but this is rare. [31]
Configuration space accesses often have a few cycles of delay to allow the IDSEL lines to stabilize, which makes them slower than other forms of access. Also, a configuration space access requires a multi-step operation rather than a single machine instruction. Thus, it is best to avoid them during routine operation of a PCI device.
1011: Configuration Write
This operates analogously to a configuration read.
1100: Memory Read Multiple
This command is identical to a generic memory read, but includes the hint that a long read burst will continue beyond the end of the current cache line, and the target should internally prefetch a large amount of data. A target is always permitted to consider this a synonym for a generic memory read.
1101: Dual Address Cycle
When accessing a memory address that requires more than 32 bits to represent, the address phase begins with this command and the low 32 bits of the address, followed by a second cycle with the actual command and the high 32 bits of the address. PCI targets that do not support 64-bit addressing may simply treat this as another reserved command code and not respond to it. This command code may only be used with a non-zero high-order address word; it is forbidden to use this cycle if not necessary.
1110: Memory Read Line
This command is identical to a generic memory read, but includes the hint that the read will continue to the end of the cache line. A target is always permitted to consider this a synonym for a generic memory read.
1111: Memory Write and Invalidate
This command is identical to a generic memory write, but comes with the guarantee that one or more whole cache lines will be written, with all byte selects enabled. This is an optimization for write-back caches snooping the bus. Normally, a write-back cache holding dirty data must interrupt the write operation long enough to write its own dirty data first. If the write is performed using this command, the data to be written back is guaranteed to be irrelevant, and may simply be invalidated in the write-back cache.
This optimization only affects the snooping cache, and makes no difference to the target, which may treat this as a synonym for the memory write command.
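
The assigned command codes above, collected into a single enumeration for reference (a transcription of the list, not part of any particular API):

    /* 4-bit PCI bus commands, driven on C/BE[3:0]# during the address phase.
     * Codes 0100, 0101, 1000 and 1001 are reserved. */
    enum pci_bus_command {
        PCI_CMD_INTERRUPT_ACK        = 0x0,
        PCI_CMD_SPECIAL_CYCLE        = 0x1,
        PCI_CMD_IO_READ              = 0x2,
        PCI_CMD_IO_WRITE             = 0x3,
        PCI_CMD_MEMORY_READ          = 0x6,
        PCI_CMD_MEMORY_WRITE         = 0x7,
        PCI_CMD_CONFIG_READ          = 0xA,
        PCI_CMD_CONFIG_WRITE         = 0xB,
        PCI_CMD_MEMORY_READ_MULTIPLE = 0xC,
        PCI_CMD_DUAL_ADDRESS_CYCLE   = 0xD,
        PCI_CMD_MEMORY_READ_LINE     = 0xE,
        PCI_CMD_MEMORY_WRITE_INVAL   = 0xF,
    };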

PCI bus latency

Soon after promulgation of the PCI specification, it was discovered that lengthy transactions by some devices, due to slow acknowledgments, long data bursts, or some combination, could cause buffer underrun or overrun in other devices. Recommendations on the timing of individual phases in Revision 2.0 were made mandatory in revision 2.1: [32] :3

Additionally, as of revision 2.1, all initiators capable of bursting more than two data phases must implement a programmable latency timer. The timer starts counting clock cycles when a transaction starts (initiator asserts FRAME#). If the timer has expired and the arbiter has removed GNT#, then the initiator must terminate the transaction at the next legal opportunity. This is usually the next data phase, but Memory Write and Invalidate transactions must continue to the end of the cache line.

Delayed transactions

Devices unable to meet those timing restrictions must use a combination of posted writes (for memory writes) and delayed transactions (for other writes and all reads). In a delayed transaction, the target records the transaction (including the write data) internally and aborts (asserts STOP# rather than TRDY#) the first data phase. The initiator must retry exactly the same transaction later. In the interim, the target internally performs the transaction, and waits for the retried transaction. When the retried transaction is seen, the buffered result is delivered.

A device may be the target of other transactions while completing one delayed transaction; it must remember the transaction type, address, byte selects and (if a write) data value, and only complete the correct transaction.
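
Illustratively, the state a simple target keeps for its single outstanding delayed transaction is just the fields named above; the structure and names here are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* One recorded delayed transaction: enough to recognize the retried
     * request and to hold the buffered result until it is delivered. */
    struct delayed_txn {
        bool     valid;          /* a delayed transaction is outstanding */
        uint8_t  command;        /* bus command from the address phase   */
        uint32_t address;
        uint8_t  byte_enables;   /* C/BE[3:0]# of the first data phase   */
        uint32_t write_data;     /* only meaningful for writes           */
        uint32_t result;         /* buffered read data, once available   */
        bool     result_ready;
    };

    /* A retried request matches only if every recorded field is the same
     * (including, for a write, the data value). */
    static bool delayed_txn_matches(const struct delayed_txn *t,
                                    uint8_t command, uint32_t address,
                                    uint8_t byte_enables, uint32_t write_data)
    {
        return t->valid && t->command == command && t->address == address &&
               t->byte_enables == byte_enables &&
               ((command & 1u) ? t->write_data == write_data : true);
    }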

If the target has a limit on the number of delayed transactions that it can record internally (simple targets may impose a limit of 1), it will force those transactions to retry without recording them. They will be dealt with when the current delayed transaction is completed. If two initiators attempt the same transaction, a delayed transaction begun by one may have its result delivered to the other; this is harmless.

A target abandons a delayed transaction when a retry succeeds in delivering the buffered result, the bus is reset, or when 2^15 = 32,768 clock cycles (approximately 1 ms) elapse without seeing a retry. The latter should never happen in normal operation, but it prevents a deadlock of the whole bus if one initiator is reset or malfunctions.

PCI bus bridges

The PCI standard permits multiple independent PCI buses to be connected by bus bridges that will forward operations on one bus to another when required. Although conventional PCI tends not to use many bus bridges, PCI Express systems use many PCI-to-PCI bridges, usually called PCI Express Root Ports; each PCI Express slot appears to be a separate bus, connected by a bridge to the others. The PCI host bridge (usually the northbridge in x86 platforms) interconnects the CPU, main memory and the PCI bus. [33]

Posted writes

Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original transaction must wait until the forwarded transaction completes before a result is ready. One notable exception occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer great opportunity for performance gains, the rules governing what is permissible are somewhat intricate. [34]

Combining, merging, and collapsing

The PCI standard permits bus bridges to convert multiple bus transactions into one larger transaction under certain situations. This can improve the efficiency of the PCI bus.

Combining

Write transactions to consecutive addresses may be combined into a longer burst write, as long as the order of the accesses in the burst is the same as the order of the original writes. It is permissible to insert extra data phases with all byte enables turned off if the writes are almost consecutive.

Merging

Multiple writes to disjoint portions of the same word may be merged into a single write with multiple byte enables asserted. In this case, writes that were presented to the bus bridge in a particular order are merged so they occur at the same time when forwarded.
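
A sketch of the bookkeeping involved: two writes to the same word can be merged by combining their (active-high) byte masks and overlaying the data, but only if no byte is written by both, which would be the forbidden collapsing described next.

    #include <stdbool.h>
    #include <stdint.h>

    /* A pending posted write to one 32-bit word, with an active-high mask
     * of which bytes are valid (the inverse of the C/BE# encoding). */
    struct posted_write {
        uint32_t address;      /* word-aligned address */
        uint32_t data;
        uint8_t  byte_mask;    /* bit n set = byte n of `data` is valid */
    };

    /* Merge `b` into `a` if both target the same word and touch disjoint
     * bytes; returns false (no merge) otherwise. */
    static bool pci_merge_writes(struct posted_write *a,
                                 const struct posted_write *b)
    {
        if (a->address != b->address || (a->byte_mask & b->byte_mask))
            return false;
        for (int i = 0; i < 4; i++) {
            if (b->byte_mask & (1u << i)) {
                uint32_t byte = (b->data >> (8 * i)) & 0xFFu;
                a->data = (a->data & ~(0xFFu << (8 * i))) | (byte << (8 * i));
            }
        }
        a->byte_mask |= b->byte_mask;
        return true;
    }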

Collapsing

Multiple writes to the same byte or bytes may not be combined, for example, by performing only the second write and skipping the first write that was overwritten. This is because the PCI specification permits writes to have side effects.

PCI bus signals

PCI bus transactions are controlled by five main control signals, two driven by the initiator of a transaction (FRAME# and IRDY#), and three driven by the target (DEVSEL#, TRDY#, and STOP#). There are two additional arbitration signals (REQ# and GNT#) that are used to obtain permission to initiate a transaction. [6] All are active-low, meaning that the active or asserted state is a low voltage. Pull-up resistors on the motherboard ensure they will remain high (inactive or deasserted) if not driven by any device, but the PCI bus does not depend on the resistors to change the signal level; all devices drive the signals high for one cycle before ceasing to drive the signals.

Signal timing

All PCI bus signals are sampled on the rising edge of the clock. Signals nominally change on the falling edge of the clock, giving each PCI device approximately one half a clock cycle to decide how to respond to the signals it observed on the rising edge, and one half a clock cycle to transmit its response to the other device.

The PCI bus requires that every time the device driving a PCI bus signal changes, one turnaround cycle must elapse between the time the one device stops driving the signal and the other device starts. Without this, there might be a period when both devices were driving the signal, which would interfere with bus operation.

The combination of this turnaround cycle and the requirement to drive a control line high for one cycle before ceasing to drive it means that each of the main control lines must be high for a minimum of two cycles when changing owners. The PCI bus protocol is designed so this is rarely a limitation; only in a few special cases (notably fast back-to-back transactions) is it necessary to insert additional delay to meet this requirement.

Arbitration

Any device on a PCI bus that is capable of acting as a bus master may initiate a transaction with any other device. To ensure that only one transaction is initiated at a time, each master must first wait for a bus grant signal, GNT#, from an arbiter located on the motherboard. Each device has a separate request line REQ# that requests the bus, but the arbiter may "park" the bus grant signal at any device if there are no current requests.

The arbiter may remove GNT# at any time. A device that loses GNT# may complete its current transaction, but may not start one (by asserting FRAME#) unless it observes GNT# asserted the cycle before it begins.

The arbiter may also provide GNT# at any time, including during another master's transaction. During a transaction, either FRAME# or IRDY# or both are asserted; when both are deasserted, the bus is idle. A device may initiate a transaction at any time that GNT# is asserted and the bus is idle.

Address phase

A PCI bus transaction begins with an address phase. The initiator (usually a chipset), seeing that it has GNT# and the bus is idle, drives the target address onto the AD[31:0] lines, the associated command (e.g. memory read, or I/O write) on the C/BE[3:0]# lines, and pulls FRAME# low.

Each other device examines the address and command and decides whether to respond as the target by asserting DEVSEL#. A device must respond by asserting DEVSEL# within 3 cycles. Devices that promise to respond within 1 or 2 cycles are said to have "fast DEVSEL" or "medium DEVSEL", respectively. (Actually, the time to respond is 2.5 cycles, since PCI devices must transmit all signals half a cycle early so that they can be received three cycles later.)

A device must latch the address on the first cycle; the initiator is required to remove the address and command from the bus on the following cycle, even before receiving a DEVSEL# response. The additional time is available only for interpreting the address and command after it is captured.

On the fifth cycle of the address phase (or earlier if all other devices have medium DEVSEL or faster), a catch-all "subtractive decoding" is allowed for some address ranges. This is commonly used by an ISA bus bridge for addresses within its range (24 bits for memory and 16 bits for I/O).

On the sixth cycle, if there has been no response, the initiator may abort the transaction by deasserting FRAME#. This is known as master abort termination and it is customary for PCI bus bridges to return all-ones data (0xFFFFFFFF) in this case. PCI devices, therefore, are generally designed to avoid using the all-ones value in important status registers, so that such an error can be easily detected by software.

Address phase timing

[Address phase timing diagram]

Notes:

  • GNT# is irrelevant after the cycle has started.
  • The address is only valid for one cycle.
  • C/BE# carries the command, followed by the byte enables for the first data phase.

On the rising edge of clock 0, the initiator observes FRAME# and IRDY# both high, and GNT# low, so it drives the address, command, and asserts FRAME# in time for the rising edge of clock 1. Targets latch the address and begin decoding it. They may respond with DEVSEL# in time for clock 2 (fast DEVSEL), 3 (medium) or 4 (slow). Subtractive decode devices, seeing no other response by clock 4, may respond on clock 5. If the master does not see a response by clock 5, it will terminate the transaction and remove FRAME# on clock 6.

TRDY# and STOP# are deasserted (high) during the address phase. The initiator may assert IRDY# as soon as it is ready to transfer data, which could theoretically be as soon as clock 2.

Dual-cycle address

To allow 64-bit addressing, a master will present the address over two consecutive cycles. First, it sends the low-order address bits with a special "dual-cycle address" command on the C/BE[3:0]#. On the following cycle, it sends the high-order address bits and the actual command. Dual-address cycles are forbidden if the high-order address bits are zero, so devices that do not support 64-bit addressing can simply not respond to dual-cycle commands.

[Timing diagram: dual address cycle over clock cycles 0-6. The initiator, having been granted the bus (GNT#), asserts FRAME# and drives the low then the high address bits on AD[31:0], while C/BE[3:0]# carries the DAC command and then the actual command; DEVSEL# may be asserted with fast, medium or slow timing.]

Configuration access

Addresses for PCI configuration space access use special decoding. For these, the low-order address lines specify the offset of the desired PCI configuration register, and the high-order address lines are ignored. Instead, an additional address signal, the IDSEL input, must be high before a device may assert DEVSEL#. Each slot connects a different high-order address line to the IDSEL pin and is selected using one-hot encoding on the upper address lines.

Data phases

After the address phase (specifically, beginning with the cycle that DEVSEL# goes low) comes a burst of one or more data phases. In all cases, the initiator drives active-low byte select signals on the C/BE[3:0]# lines, but the data on the AD[31:0] may be driven by the initiator (in case of writes) or target (in case of reads).

During data phases, the C/BE[3:0]# lines are interpreted as active-low byte enables. In case of a write, the asserted signals indicate which of the four bytes on the AD bus are to be written to the addressed location. In the case of a read, they indicate which bytes the initiator is interested in. For reads, it is always legal to ignore the byte-enable signals and simply return all 32 bits; cacheable memory resources are required to always return 32 valid bits. The byte enables are mainly useful for I/O space accesses where reads have side effects.

A data phase with all four C/BE# lines deasserted is explicitly permitted by the PCI standard, and must have no effect on the target other than to advance the address in the burst access in progress.

The data phase continues until both parties are ready to complete the transfer and continue to the next data phase. The initiator asserts IRDY# (initiator ready) when it no longer needs to wait, while the target asserts TRDY# (target ready). Whichever side is providing the data must drive it on the AD bus before asserting its ready signal.

Once one of the participants asserts its ready signal, it may not become un-ready or otherwise alter its control signals until the end of the data phase. The data recipient must latch the AD bus each cycle until it sees both IRDY# and TRDY# asserted, which marks the end of the current data phase and indicates that the just-latched data is the word to be transferred.

To maintain full burst speed, the data sender then has half a clock cycle after seeing both IRDY# and TRDY# asserted to drive the next word onto the AD bus.

[Timing diagram: data phases over clock cycles 0-9, showing AD[31:0] driven by the initiator for a write or by the target for a read, C/BE[3:0]# (which must always be valid), IRDY#, TRDY#, DEVSEL# and FRAME#; data is transferred on each cycle in which IRDY# and TRDY# are both asserted.]

This continues the address cycle illustrated above, assuming a single address cycle with medium DEVSEL, so the target responds in time for clock 3. However, at that time, neither side is ready to transfer data. For clock 4, the initiator is ready, but the target is not. On clock 5, both are ready, and a data transfer takes place. For clock 6, the target is ready to transfer, but the initiator is not. On clock 7, the initiator becomes ready, and data is transferred. For clocks 8 and 9, both sides remain ready to transfer data, and data is transferred at the maximum possible rate (32 bits per clock cycle).

In case of a read, clock 2 is reserved for turning around the AD bus, so the target is not permitted to drive data on the bus even if it is capable of fast DEVSEL.

Fast DEVSEL# on reads

A target that supports fast DEVSEL could in theory begin responding to a read on the cycle after the address is presented. This cycle is, however, reserved for AD bus turnaround. Thus, a target may not drive the AD bus (and thus may not assert TRDY#) on the second cycle of a transaction. Most targets will not be this fast and will not need any special logic to enforce this condition.

Ending transactions

Either side may request that a burst end after the current data phase. Simple PCI devices that do not support multi-word bursts will always request this immediately. Even devices that do support bursts will have some limit on the maximum length they can support, such as the end of their addressable memory.

Initiator burst termination

The initiator can mark any data phase as the final one in a transaction by deasserting FRAME# at the same time as it asserts IRDY#. The cycle after the target asserts TRDY#, the final data transfer is complete, both sides deassert their respective RDY# signals, and the bus is idle again. The master may not deassert FRAME# before asserting IRDY#, nor may it deassert FRAME# while waiting, with IRDY# asserted, for the target to assert TRDY#.

The only minor exception is a master abort termination, when no target responds with DEVSEL#. Obviously, it is pointless to wait for TRDY# in such a case. However, even in this case, the master must assert IRDY# for at least one cycle after deasserting FRAME#. (Commonly, a master will assert IRDY# before receiving DEVSEL#, so it must simply hold IRDY# asserted for one cycle longer.) This is to ensure that bus turnaround timing rules are obeyed on the FRAME# line.

Target burst termination

The target requests the initiator end a burst by asserting STOP#. The initiator will then end the transaction by deasserting FRAME# at the next legal opportunity; if it wishes to transfer more data, it will continue in a separate transaction. There are several ways for the target to do this:

Disconnect with data
If the target asserts STOP# and TRDY# at the same time, this indicates that the target wishes this to be the last data phase. For example, a target that does not support burst transfers will always do this to force single-word PCI transactions. This is the most efficient way for a target to end a burst.
Disconnect without data
If the target asserts STOP# without asserting TRDY#, this indicates that the target wishes to stop without transferring data. STOP# is considered equivalent to TRDY# for the purpose of ending a data phase, but no data is transferred.
Retry
A Disconnect without data before transferring any data is a retry, and unlike other PCI transactions, PCI initiators are required to pause slightly before continuing the operation. See the PCI specification for details.
Target abort
Normally, a target holds DEVSEL# asserted through the last data phase. However, if a target deasserts DEVSEL# before disconnecting without data (asserting STOP#), this indicates a target abort, which is a fatal error condition. The initiator may not retry, and typically treats it as a bus error. A target may not deassert DEVSEL# while waiting with TRDY# or STOP# low; it must do this at the beginning of a data phase.

It will always take at least one cycle for the initiator to notice a target-initiated disconnection request and respond by deasserting FRAME#. There are two sub-cases, which take the same amount of time, but one requires an additional data phase:

Disconnect-A
If the initiator observes STOP# before asserting its own IRDY#, then it can end the burst by deasserting FRAME# at the same time as it asserts IRDY#, ending the burst after the current data phase.
Disconnect-B
If the initiator has already asserted IRDY# (without deasserting FRAME#) by the time it observes the target's STOP#, it is committed to an additional data phase. The target must wait through an additional data phase without data, holding STOP# asserted without TRDY#, before the transaction can end.

If the initiator ends the burst at the same time as the target requests disconnection, there is no additional bus cycle.

Burst addressing

For memory space accesses, the words in a burst may be accessed in several orders. The unnecessary low-order address bits AD[1:0] are used to convey the initiator's requested order. A target which does not support a particular order must terminate the burst after the first word. Some of these orders depend on the cache line size, which is configurable on all PCI devices.

PCI burst ordering
A[1] | A[0] | Burst order (with 16-byte cache line)
0 | 0 | Linear incrementing (0x0C, 0x10, 0x14, 0x18, 0x1C, ...)
0 | 1 | Cacheline toggle (0x0C, 0x08, 0x04, 0x00, 0x1C, 0x18, ...)
1 | 0 | Cacheline wrap (0x0C, 0x00, 0x04, 0x08, 0x1C, 0x10, ...)
1 | 1 | Reserved (disconnect after first transfer)

If the starting offset within the cache line is zero, all of these modes reduce to the same order.

Cache line toggle and cache line wrap modes are two forms of critical-word-first cache line fetching. Toggle mode XORs the supplied address with an incrementing counter. This is the native order for Intel 486 and Pentium processors. It has the advantage that it is not necessary to know the cache line size to implement it.

PCI version 2.1 obsoleted toggle mode and added the cache line wrap mode, [32] :2 where fetching proceeds linearly, wrapping around at the end of each cache line. When one cache line is completely fetched, fetching jumps to the starting offset in the next cache line.
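
The three defined orders amount to a few lines of address arithmetic. The sketch below generates the word addresses of a burst for a 16-byte cache line and reproduces the example rows in the table above when started at 0x0C.

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_LINE 16u   /* bytes; configurable on real devices */

    /* Word address of data phase `i` of a burst starting at the
     * 4-byte-aligned byte address `start`, for each burst order. */
    static uint32_t burst_linear(uint32_t start, unsigned i)
    {
        return start + 4u * i;
    }

    static uint32_t burst_toggle(uint32_t start, unsigned i)
    {
        /* Cache line toggle: XOR the address with an incrementing counter;
         * note that the cache line size is not needed. */
        return start ^ (4u * i);
    }

    static uint32_t burst_wrap(uint32_t start, unsigned i)
    {
        /* Cache line wrap: proceed linearly, wrap at the end of the line,
         * then continue at the same starting offset in the next line. */
        uint32_t line     = start & ~(CACHE_LINE - 1u);
        uint32_t offset   = start & (CACHE_LINE - 1u);
        uint32_t per_line = CACHE_LINE / 4u;
        return line + (i / per_line) * CACHE_LINE +
               (offset + 4u * (i % per_line)) % CACHE_LINE;
    }

    int main(void)
    {
        for (unsigned i = 0; i < 6; i++)   /* starting at 0x0C, as in the table */
            printf("0x%02X 0x%02X 0x%02X\n",
                   (unsigned)burst_linear(0x0C, i),
                   (unsigned)burst_toggle(0x0C, i),
                   (unsigned)burst_wrap(0x0C, i));
        return 0;
    }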

Most PCI devices only support a limited range of typical cache line sizes; if the cache line size is programmed to an unexpected value, they force single-word access.

PCI also supports burst access to I/O and configuration space, but only linear mode is supported. (This is rarely used, and may be buggy in some devices; they may not support it, but not properly force single-word access either.)

Transaction examples

This is the highest-possible speed four-word write burst, terminated by the master:

[Timing diagram: the fastest possible four-word write burst over clock cycles 0-7, showing AD[31:0], C/BE[3:0]#, IRDY#, TRDY#, DEVSEL# and FRAME#.]

On clock edge 1, the initiator starts a transaction by driving an address and command and asserting FRAME#. The other signals are idle, pulled high by the motherboard's pull-up resistors; that might be their turnaround cycle. On cycle 2, the target asserts both DEVSEL# and TRDY#. As the initiator is also ready, a data transfer occurs. This repeats for three more cycles, but before the last one (clock edge 5), the master deasserts FRAME#, indicating that this is the end. On clock edge 6, the AD bus and FRAME# are undriven (turnaround cycle) and the other control lines are driven high for 1 cycle. On clock edge 7, another initiator can start a different transaction. This is also the turnaround cycle for the other control lines.

The equivalent read burst takes one more cycle, because the target must wait 1 cycle for the AD bus to turn around before it may assert TRDY#:

[Timing diagram: the equivalent four-word read burst over clock cycles 0-8; the target waits one cycle for the AD bus to turn around before asserting TRDY#.]

A high-speed burst terminated by the target will have an extra cycle at the end:

[Timing diagram: a high-speed burst terminated by the target over clock cycles 0-8; the target asserts STOP# and the burst ends with an extra data phase in which no data is transferred.]

On clock edge 6, the target indicates that it wants to stop (with data), but the initiator is already holding IRDY# low, so there is a fifth data phase (clock edge 7), during which no data is transferred.

Parity

The PCI bus detects parity errors, but does not attempt to correct them by retrying operations; it is purely a failure indication. Due to this, there is no need to detect the parity error before it has happened, and the PCI bus actually detects it a few cycles later. During a data phase, whichever device is driving the AD[31:0] lines computes even parity over them and the C/BE[3:0]# lines, and sends that out the PAR line one cycle later. All access rules and turnaround cycles for the AD bus apply to the PAR line, just one cycle later. The device listening on the AD bus checks the received parity and asserts the PERR# (parity error) line one cycle after that. This generally generates a processor interrupt, and the processor can search the PCI bus for the device which detected the error.

The PERR# line is only used during data phases, once a target has been selected. If a parity error is detected during an address phase (or the data phase of a Special Cycle), the devices which observe it assert the SERR# (System error) line.

Even when some bytes are masked by the C/BE# lines and not in use, they must still have some defined value, and this value must be used to compute the parity.
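
A sketch of the computation: PAR is driven so that AD[31:0], C/BE[3:0]# and PAR together contain an even number of ones, i.e. PAR is the XOR of all 36 bits, including any bytes the byte enables mask off.

    #include <stdbool.h>
    #include <stdint.h>

    /* Value to drive on PAR one cycle after the given AD and C/BE# values,
     * making the 37 bits together even-parity. The receiving device
     * recomputes this and asserts PERR# one cycle later on a mismatch. */
    static bool pci_par(uint32_t ad, uint8_t cbe)
    {
        uint64_t bits = ((uint64_t)(cbe & 0xFu) << 32) | ad;
        bool par = false;
        while (bits) {
            par ^= (bool)(bits & 1u);
            bits >>= 1;
        }
        return par;
    }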

Fast back-to-back transactions

Due to the need for a turnaround cycle between different devices driving PCI bus signals, in general it is necessary to have an idle cycle between PCI bus transactions. However, in some circumstances it is permitted to skip this idle cycle, going directly from the final cycle of one transfer (IRDY# asserted, FRAME# deasserted) to the first cycle of the next (FRAME# asserted, IRDY# deasserted).

An initiator may only perform back-to-back transactions when:

Additional timing constraints may come from the need to turn around the target control lines, particularly DEVSEL#. The target deasserts DEVSEL#, driving it high, in the cycle following the final data phase, which in the case of back-to-back transactions is the first cycle of the address phase. The second cycle of the address phase is then reserved for DEVSEL# turnaround, so if the target is different from the prior one, it must not assert DEVSEL# until the third cycle (medium DEVSEL speed).

One case where this problem cannot arise is if the initiator knows somehow (presumably because the addresses share sufficient high-order bits) that the second transfer is addressed to the same target as the prior one. In that case, it may perform back-to-back transactions. All PCI targets must support this.

It is also possible for the target to keep track of the requirements. If it never does fast DEVSEL, they are met trivially. If it does, it must wait until medium DEVSEL time unless the current transaction was preceded by an idle cycle (it is not back-to-back), the previous transaction was to the same target, or the current transaction began with a double address cycle.

Targets that have this ability indicate it by the Fast Back-to-Back Capable bit in their PCI status register, and if all targets on a bus have it set, all initiators may use back-to-back transfers freely.

A subtractive decoding bus bridge must know to expect this extra delay in the event of back-to-back cycles in order to advertise back-to-back support.
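
In configuration space, the capability is reported by the Fast Back-to-Back Capable bit (bit 7) of a device's status register, and an initiator generates fast back-to-back cycles to different targets only when the Fast Back-to-Back Enable bit (bit 9) of its command register is set. The following C sketch shows how system software might apply the rule that every target on the bus must report the capability; the configuration-space accessors are hypothetical placeholders for whatever the platform provides:

    #include <stdint.h>

    #define PCI_STATUS             0x06    /* status register offset        */
    #define PCI_STATUS_FAST_BACK   0x0080  /* bit 7: fast back-to-back capable */
    #define PCI_COMMAND            0x04    /* command register offset       */
    #define PCI_COMMAND_FAST_BACK  0x0200  /* bit 9: fast back-to-back enable  */

    /* Hypothetical platform-provided configuration-space accessors. */
    extern uint16_t pci_config_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
    extern void     pci_config_write16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint16_t val);

    /* Enable fast back-to-back on an initiator only if every target on
       the bus reports the capability, mirroring the rule described above. */
    void enable_fast_b2b_if_safe(uint8_t bus, uint8_t initiator_dev, uint8_t fn,
                                 const uint8_t *targets, int n_targets)
    {
        for (int i = 0; i < n_targets; i++) {
            uint16_t status = pci_config_read16(bus, targets[i], 0, PCI_STATUS);
            if (!(status & PCI_STATUS_FAST_BACK))
                return;  /* at least one target cannot handle it */
        }
        uint16_t cmd = pci_config_read16(bus, initiator_dev, fn, PCI_COMMAND);
        pci_config_write16(bus, initiator_dev, fn, PCI_COMMAND, cmd | PCI_COMMAND_FAST_BACK);
    }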

64-bit PCI

Starting from revision 2.1, the PCI specification includes optional 64-bit support. This is provided via an extended connector which provides the 64-bit bus extensions AD[63:32], C/BE[7:4]#, and PAR64, and a number of additional power and ground pins. The 64-bit PCI connector can be distinguished from a 32-bit connector by the additional 64-bit segment.

Memory transactions between 64-bit devices may use all 64 bits to double the data transfer rate. Non-memory transactions (including configuration and I/O space accesses) may not use the 64-bit extension. During a 64-bit burst, burst addressing works just as in a 32-bit transfer, but the address is incremented twice per data phase. The starting address must be 64-bit aligned; i.e. AD2 must be 0. The data corresponding to the intervening addresses (with AD2 = 1) is carried on the upper half of the AD bus.
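
The alignment and lane rules just described can be stated in a few lines. A C sketch, assuming byte addresses and 32-bit words (the function names are invented for this example):

    #include <stdbool.h>
    #include <stdint.h>

    /* A 64-bit burst must start on a 64-bit boundary, i.e. AD2 == 0. */
    bool valid_64bit_burst_start(uint32_t addr)
    {
        return (addr & 0x4u) == 0;      /* AD2 clear */
    }

    /* Within the burst, each data phase carries two 32-bit words:
       the word with AD2 == 0 on AD[31:0] and the word with AD2 == 1
       on AD[63:32]. Returns 0 for the low half, 1 for the high half. */
    uint32_t word_lane(uint32_t addr)
    {
        return (addr >> 2) & 1u;
    }

    /* The address advances by 8 bytes (two 32-bit words) per data phase. */
    uint32_t phase_of_word(uint32_t start, uint32_t addr)
    {
        return (addr - start) / 8u;
    }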

To initiate a 64-bit transaction, the initiator drives the starting address on the AD bus and asserts REQ64# at the same time as FRAME#. If the selected target can support a 64-bit transfer for this transaction, it replies by asserting ACK64# at the same time as DEVSEL#. A target may decide on a per-transaction basis whether to allow a 64-bit transfer.

If REQ64# is asserted during the address phase, the initiator also drives the high 32 bits of the address and a copy of the bus command on the high half of the bus. If the address requires 64 bits, a dual address cycle is still required, but the high half of the bus carries the upper half of the address and the final command code during both address phase cycles; this allows a 64-bit target to see the entire address and begin responding earlier.

If the initiator sees DEVSEL# asserted without ACK64#, it performs 32-bit data phases. The data which would have been transferred on the upper half of the bus during the first data phase is instead transferred during the second data phase. Typically, the initiator drives all 64 bits of data before seeing DEVSEL#. If ACK64# is missing, it may cease driving the upper half of the data bus.

The REQ64# and ACK64# lines are held asserted for the entire transaction save the last data phase and deasserted at the same time as FRAME# and DEVSEL#, respectively.

The PAR64 line operates just like the PAR line, but provides even parity over AD[63:32] and C/BE[7:4]#. It is only valid for address phases if REQ64# is asserted. PAR64 is only valid for data phases if both REQ64# and ACK64# are asserted.
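
PAR64 can therefore be computed with the same even-parity rule used for PAR, applied to the upper lines; an illustrative sketch reusing the helper shown in the Parity section (names invented for the example):

    #include <stdint.h>

    /* Even parity over 36 lines; see the sketch in the Parity section. */
    uint8_t pci_even_parity(uint32_t ad, uint8_t cbe);

    /* PAR64 covers AD[63:32] and C/BE[7:4]#, passed here in the low
       four bits of cbe_hi. */
    uint8_t pci_even_parity64(uint32_t ad_hi, uint8_t cbe_hi)
    {
        return pci_even_parity(ad_hi, cbe_hi & 0xFu);
    }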

Cache snooping (obsolete)

PCI originally included optional support for write-back cache coherence. This required support by cacheable memory targets, which would listen to two pins from the cache on the bus, SDONE (snoop done) and SBO# (snoop backoff). [35]

Because this was rarely implemented in practice, it was deleted from revision 2.2 of the PCI specification, [16] [36] and the pins re-used for SMBus access in revision 2.3. [18]

The cache would watch all memory accesses, without asserting DEVSEL#. If it noticed an access that might be cached, it would drive SDONE low (snoop not done). A coherence-supporting target would avoid completing a data phase (asserting TRDY#) until it observed SDONE high.

In the case of a write to data that was clean in the cache, the cache would only have to invalidate its copy and would assert SDONE as soon as this was established. However, if the cache contained dirty data, the cache would have to write it back before the access could proceed, so it would assert SBO# when raising SDONE. This would signal the active target to assert STOP# rather than TRDY#, causing the initiator to disconnect and retry the operation later. In the meantime, the cache would arbitrate for the bus and write its data back to memory.

Targets supporting cache coherency are also required to terminate bursts before they cross cache lines.
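
The cache's response to a snooped access reduces to a small decision. An illustrative C sketch of the now-deleted protocol (the types and names are invented for this example):

    #include <stdbool.h>

    enum line_state { LINE_ABSENT, LINE_CLEAN, LINE_DIRTY };

    struct snoop_result {
        bool sdone;   /* SDONE: snoop finished                 */
        bool sbo;     /* SBO#: hit on a dirty line - back off  */
    };

    /* A dirty hit raises SBO# together with SDONE, which makes the
       target issue STOP# (retry) while the cache writes the line back.
       Clean hits merely invalidate (on writes) and complete the snoop. */
    struct snoop_result snoop(enum line_state line)
    {
        struct snoop_result r = { .sdone = true, .sbo = false };
        if (line == LINE_DIRTY)
            r.sbo = true;   /* write-back needed before the access may proceed */
        return r;
    }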

Development tools

A PCI POST card that displays power-on self-test (POST) numbers during BIOS startup

When developing or troubleshooting the PCI bus, examination of hardware signals can be very important. Logic analyzers and bus analyzers are tools that collect, analyze, and decode the signals, presenting them in forms that are useful to the user.

See also

Related Research Articles

Accelerated Graphics Port – Expansion bus standard

Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Beginning in 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express.

Bus (computing) – Data transfer channel connecting parts of a computer

In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components and software, including communication protocols.

Industry Standard Architecture – Internal expansion bus in early PC compatibles

Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles.

i486 – Successor to the Intel 386

The Intel 486, officially named i486 and also known as 80486, is a microprocessor introduced in 1989. It is a higher-performance follow-up to the Intel 386. It represents the fourth generation of binary compatible CPUs following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386.

Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).

Synchronous dynamic random-access memory – Type of computer memory

Synchronous dynamic random-access memory is any DRAM where the operation of its external pin interface is coordinated by an externally supplied clock signal.

I²C – Serial communication bus

I²C (Inter-Integrated Circuit; pronounced “eye-squared-see” or “eye-two-see”), alternatively written I2C or IIC, is a synchronous, multi-controller/multi-target (historically termed multi-master/multi-slave), single-ended, serial communication bus invented in 1982 by Philips Semiconductors. It is widely used for attaching lower-speed peripheral integrated circuits (ICs) to processors and microcontrollers in short-distance, intra-board communication.

HyperTransport (HT), formerly known as Lightning Data Transport, is a technology for interconnection of computer processors. It is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001. The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology.

PCI Express – Computer expansion bus standard

PCI Express, officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, capture cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi, and Ethernet hardware connections. PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism, and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization.

The System Management Bus (SMBus) is a simple, single-ended, two-wire bus for lightweight communication. It is most commonly found in computer motherboard chipsets, for example for communication with the power source for on/off instructions. The exact functionality and hardware interfaces vary with vendors.

PCI-X – Computer bus and expansion card standard

PCI-X, short for Peripheral Component Interconnect eXtended, is a computer bus and expansion card standard that enhances the 32-bit PCI local bus for higher bandwidth demanded mostly by servers and workstations. It uses a modified protocol to support higher clock speeds, but is otherwise similar in electrical implementation. PCI-X 2.0 added speeds up to 533 MHz, with a reduction in electrical signal levels.

PCI configuration space is the underlying mechanism by which Conventional PCI, PCI-X and PCI Express perform automatic configuration of the cards inserted into the bus.

XDR DRAM is a high-performance dynamic random-access memory interface. It is based on and succeeds RDRAM. Competing technologies include DDR2 and GDDR4.

Low Pin Count – Low-bandwidth computer motherboard bus

The Low Pin Count (LPC) bus is a computer bus used on IBM-compatible personal computers to connect low-bandwidth devices to the CPU, such as the BIOS ROM, "legacy" I/O devices, and Trusted Platform Module (TPM). "Legacy" I/O devices usually include serial and parallel ports, PS/2 keyboard, PS/2 mouse, and floppy disk controller.

GIO is a computer bus standard developed by SGI and used in a variety of their products in the 1990s as their primary expansion system. GIO was similar in concept to competing standards such as NuBus or (later) PCI, but saw little use outside SGI and severely limited the devices available on their platform as a result. Most devices using GIO were SGI's own graphics cards, although a number of cards supporting high-speed data access such as Fibre Channel and FDDI were available from third parties. Later SGI machines use the XIO bus, which is laid out as a computer network as opposed to a bus.

Released as the expansion bus of the Commodore Amiga 3000 in 1990, the Zorro III computer bus was used to attach peripheral devices to an Amiga motherboard. Designed by Commodore International lead engineer Dave Haynie, the 32-bit Zorro III replaced the 16-bit Zorro II bus used in the Amiga 2000. As with the Zorro II bus, Zorro III allowed for true Plug and Play autodetection wherein devices were dynamically allocated the resources they needed on boot.

Alpha 21164 – Microprocessor

The Alpha 21164, also known by its code name, EV5, is a microprocessor developed and fabricated by Digital Equipment Corporation that implemented the Alpha instruction set architecture (ISA). It was introduced in January 1995, succeeding the Alpha 21064A as Digital's flagship microprocessor. It was succeeded by the Alpha 21264 in 1998.

Alpha 21264 – RISC microprocessor

The Alpha 21264 is a RISC microprocessor developed by Digital Equipment Corporation launched on 19 October 1998. The 21264 implemented the Alpha instruction set architecture (ISA).

I3C (bus) – Serial bus specification

I3C, also known as SenseWire, is a specification to enable communication between computer chips by defining the electrical connection between the chips and signaling patterns to be used. Short for "Improved Inter Integrated Circuit", the standard defines the electrical connection between the chips to be a two wire, shared (multidrop), serial data bus, one wire (SCL) being used as a clock to define the sampling times, the other wire (SDA) being used as a data line whose voltage can be sampled. The standard defines a signalling protocol in which multiple chips can control communication and thereby act as the bus controller.

The Advanced eXtensible Interface (AXI) is an on-chip communication bus protocol and is part of the Advanced Microcontroller Bus Architecture (AMBA) specification. AXI was introduced in 2003 with the AMBA3 specification. In 2010, a new revision of AMBA, AMBA4, defined the AXI4, AXI4-Lite and AXI4-Stream protocols. AXI is royalty-free and its specification is freely available from ARM.

References

  1. PCI Local Bus Specification Revision 2.2. Hillsboro, Oregon: PCI Special Interest Group. December 18, 1998. page ii.
  2. "PCIe (Peripheral Component Interconnect Express) | On the Motherboard | Pearson IT Certification". www.pearsonitcertification.com. Retrieved 2020-09-25.
  3. "PCI". Web-o-pedia. September 1996..
  4. Hamacher, V. Carl; Vranesic, Zvonko G.; Zaky, Safwat G. (2002). Computer Organization (5th ed.). McGraw-Hill. ISBN   9780071122184.
  5. "PCI Edition AMD HD 4350 Graphic Card from HIS" . Retrieved 2009-07-27.
  6. Abbott, Doug (2004). PCI bus demystified. Demystifying technology series (2nd ed.). Amsterdam; Boston: Newnes. pp. xi, 7–10. ISBN 978-0-7506-7739-4.
  7. Imdad-Haque, Faisal (1996). Inside PC Card: CardBus and PCMCIA Design: CardBus and PCMCIA Design. Newnes. p. 39. ISBN   978-0-08-053473-2.
  8. Sumathi, S.; Surekha, P. (2007). LabVIEW based Advanced Instrumentation Systems. Springer. p. 305. ISBN   978-3-540-48501-8.
  9. PCI Bus Variation
  10. Williams, John (2008). Digital VLSI Design with Verilog: A Textbook from Silicon Valley Technical Institute. Springer. p. 67. ISBN 978-1-4020-8446-1.
  11. Bachmutsky, Alexander (2011). System Design for Telecommunication Gateways. John Wiley & Sons. p. 81. ISBN   978-1-119-95642-6.
  12. VLB was designed for 486-based systems, yet even the more generic PCI was to gain prominence on that platform.
  13. Meyers, Michael (2012). CompTIA A+ Certification All-in-One Exam Guide (8th ed.). McGraw Hill Professional. p. 339. ISBN   978-0-07-179512-8.
  14. Identify a variety of PCI slots, LaCie
  15. "PCI Family History" (PDF).
  16. PCI Local Bus Specification, revision 3.0
  17. "PCI Latency Timer Howto". Reric.NET by Eric Seppanen. 2004-11-14. Retrieved 2008-07-17.
  18. PCI Local Bus Specification Revision 2.3. Portland, Oregon: PCI Special Interest Group. March 29, 2002.
  19. "PCI Connector Pinout".
  20. PCI Power Management Interface Specification v1.2
  21. "archive.org/zuavra.net - Using Wake-On-LAN WOL/PME to power up your computer remotely". Archived from the original on 8 March 2007.
  22. ZNYX Networks (June 16, 2009). "ZX370 Series". Archived from the original on May 2, 2011. Retrieved July 13, 2012. The ZX370 Series is a true 64-bit adapter, widening the network pipeline to achieve higher throughput while offering backward compatibility with standard 32-bit PCI slots.
  23. ZNYX Networks. "ZX370 Series Multi-Channel PCI Fast Ethernet Adapter" (PDF). Archived from the original (PDF) on July 20, 2013. Retrieved July 13, 2012. Backward compatible with 32 bit, 33 MHz PCI slots
  24. Adaptec (January 2000). "Adaptec SCSI Card 29160 Ultra160 SCSI Controller User's Reference" (PDF). p. 1. Retrieved July 13, 2012. Although the Adaptec SCSI Card 29160 is a 64-bit PCI card, it also works in a 32-bit PCI slot. When installed in a 32-bit PCI slot, the card automatically runs in the slower 32-bit mode.
  25. LaCie. "LaCie support: Identify a variety of PCI slots". Archived from the original on April 4, 2012. Retrieved July 13, 2012.
  26. PCI Local Bus Specification Revision 3.0. Hillsboro, Oregon: PCI Special Interest Group. February 3, 2004. Figure 5-8.
  27. PCI Local Bus Specification Revision 3.0. Hillsboro, Oregon: PCI Special Interest Group. February 3, 2004. Figure 5-9.
  28. PCI Local Bus Specification Revision 3.0. Hillsboro, Oregon: PCI Special Interest Group. February 3, 2004. Figure 5-6.
  29. PCI Local Bus Specification Revision 3.0. Hillsboro, Oregon: PCI Special Interest Group. February 3, 2004. Figure 5-7.
  30. Micro PCI, Micro AGP (FAQ), iBASE, archived from the original on 2001-12-11, retrieved 2010-11-20.
  31. Roudier, Gérard (2001-11-28). "Re: sym53c875: reading /proc causes SCSI parity error". linux-kernel (Mailing list).
  32. PCI Local Bus Specification: Revision 2.1 vs. Revision 2.0 (PDF) (Application Note). Intel Corporation. March 1997. AP-753. Archived from the original (PDF) on 2015-04-30.
  33. "Bus Specifics - Writing Device Drivers for Oracle® Solaris 11.3". docs.oracle.com. Retrieved 2020-12-18.
  34. PCI-to-PCI Bridge Architecture Specification, revision 1.1
  35. PCI Local Bus Specification, revision 2.1
  36. PCI Local Bus Specification Revision 2.2. Hillsboro, Oregon: PCI Special Interest Group. December 18, 1998.

Further reading
