PCI Express

Year created: 2003
Created by: Intel, Dell, HP, IBM
Supersedes: AGP, PCI, PCI-X
Width in bits: 1–32
No. of devices: One device at each endpoint of each connection; PCI Express switches can create multiple endpoints out of one endpoint, allowing a single endpoint to be shared by multiple devices
Speed: For single-lane (×1) and 16-lane (×16) links, in each direction:
  • v. 1.x (2.5 GT/s):
    • 250 MB/s (×1)
    • 4 GB/s (×16)
  • v. 2.x (5 GT/s):
    • 500 MB/s (×1)
    • 8 GB/s (×16)
  • v. 3.x (8 GT/s):
    • 985 MB/s (×1)
    • 15.75 GB/s (×16)
  • v. 4.x (16 GT/s):
    • 1.969 GB/s (×1)
    • 31.51 GB/s (×16)
  • v. 5.x (32 GT/s):
    • 3.938 GB/s (×1)
    • 63.01 GB/s (×16)
Style: Serial
Hotplugging interface: Yes, if ExpressCard, Mobile PCI Express Module, XQD card or Thunderbolt
External interface: Yes, with PCI Express OCuLink and External Cabling, such as Thunderbolt
Website: pcisig.com
Various slots on a computer motherboard, from top to bottom:
  • PCI Express ×4
  • PCI Express ×16
  • PCI Express ×1
  • PCI Express ×16
  • Conventional PCI (32-bit, 5 V)

PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, [1] is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, hard drives, SSDs, Wi-Fi and Ethernet hardware connections. [2] PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER [3] ), and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization.

Defined by its number of lanes, [4] the PCI Express electrical interface is also used in a variety of other standards, most notably the laptop expansion card interface ExpressCard and computer storage interfaces SATA Express and M.2.

Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group of more than 900 companies that also maintain the conventional PCI specifications.

Architecture

An example of the PCI Express topology; white "junction boxes" represent PCI Express device downstream ports, while the gray ones represent upstream ports.
A PCI Express ×1 card containing a PCI Express switch (covered by a small heat sink), which creates multiple endpoints out of one endpoint and allows it to be shared by multiple devices

Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus. [6] One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.

Bus (computing) communication system that transfers data between components inside a computer

In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components and software, including communication protocols.

Network topology arrangement of the various elements of a computer network; topological structure of a network and may be depicted physically or logically

Network topology is the arrangement of the elements of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses, and computer networks.

Root complex

In a PCI Express (PCIe) system, a root complex device connects the processor and memory subsystem to the PCI Express switch fabric composed of one or more switch devices.

In terms of bus protocol, PCI Express communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard, though new PCI Express features are inaccessible.
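
As a small illustration of this software-level continuity, the sketch below (Python, assuming a Linux host with sysfs mounted; the paths are standard Linux locations, not part of the PCIe specification itself) enumerates devices through the same generic PCI configuration identifiers whether they sit on a PCI or a PCI Express bus:

    # Minimal sketch: enumerate PCI/PCIe devices via Linux sysfs.
    # PCIe devices expose the same configuration-space identifiers as
    # legacy PCI, so generic PCI enumeration code works unchanged.
    from pathlib import Path

    def list_pci_devices() -> None:
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            vendor = (dev / "vendor").read_text().strip()  # e.g. 0x8086
            device = (dev / "device").read_text().strip()
            print(f"{dev.name}: vendor={vendor} device={device}")

    if __name__ == "__main__":
        list_pci_devices()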

Backward compatibility a property of a system, product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, especially in telecommunications and computing

Backward compatibility is a property of a system, product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, especially in telecommunications and computing. Backward compatibility is sometimes also called downward compatibility.

The PCI Express link between two devices can vary in size from one to 32 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization, and can be restricted by either endpoint. For example, a single-lane PCI Express (×1) card can be inserted into a multi-lane slot (×4, ×8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can dynamically down-configure itself to use fewer lanes, providing a failure tolerance in case bad or unreliable lanes are present. The PCI Express standard defines link widths of ×1, ×4, ×8, ×12, ×16 and ×32. [5] :4,5 This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel). Slots and connectors are only defined for a subset of these widths, with link widths in between using the next larger physical slot size.
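
A toy model of how that negotiation settles on a width is sketched below; this is illustrative only, not the link-training protocol from the specification, and the function names are assumptions:

    # Toy model of link-width negotiation: both ends settle on the widest
    # mutually supported link. Real link training also handles lane
    # reversal, polarity inversion, and down-configuring bad lanes.
    SUPPORTED_WIDTHS = (32, 16, 12, 8, 4, 1)   # link widths defined by the standard

    def negotiate_width(card_max: int, slot_max: int) -> int:
        for width in SUPPORTED_WIDTHS:         # try widest first
            if width <= card_max and width <= slot_max:
                return width
        raise ValueError("no common link width")

    print(negotiate_width(card_max=1, slot_max=16))   # x1 card in x16 slot -> 1
    print(negotiate_width(card_max=16, slot_max=4))   # x16 card in "x16 @ x4" slot -> 4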

As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCI Express 1.0 device using four lanes (×4) have roughly the same peak single-direction transfer rate of 1064 MB/s (for PCI-X, 64 bits × 133 MHz gives 1064 MB/s; a PCIe 1.0 ×4 link delivers 4 × 250 MB/s = 1000 MB/s). The PCI Express bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with the PCI Express peripheral is bidirectional.

Interconnect

A PCI Express link between two devices consists of one or more lanes, which are dual simplex channels using two differential signaling pairs.

PCI Express devices communicate via a logical connection called an interconnect [7] or link. A link is a point-to-point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests (configuration, I/O or memory read/write) and interrupts (INTx, MSI or MSI-X). At the physical level, a link is composed of one or more lanes. [7] Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (×1) link, while a graphics adapter typically uses a much wider and therefore faster 16-lane (×16) link.

Lane

A lane is composed of two differential signaling pairs, with one pair for receiving data and the other for transmitting. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream, transporting data packets in eight-bit "byte" format simultaneously in both directions between endpoints of a link. [8] Physical PCI Express links may contain from one to 32 lanes, more precisely 1, 2, 4, 8, 12, 16 or 32 lanes. [5] :4,5 [7] Lane counts are written with an "×" prefix (for example, "×8" represents an eight-lane card or slot), with ×16 being the largest size in common use. [9] Lane sizes are also referred to via the terms "width" or "by", e.g., an eight-lane slot could be referred to as a "by 8" or as "8 lanes wide."

For mechanical card sizes, see below.

Serial bus

The bonded serial bus architecture was chosen over the traditional parallel bus because of inherent limitations of the latter, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling through conductors of different lengths, on potentially different printed circuit board (PCB) layers, and at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface have different travel duration and arrive at their destinations at different times. When the interface clock period is shorter than the largest time difference between signal arrivals, recovery of the transmitted word is no longer possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz.
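
A back-of-the-envelope calculation shows why (a sketch; the safety margin here is an assumption for illustration, not a figure from any bus specification):

    # If each clock period must span at least `margin` times the worst-case
    # skew between parallel signals, a few nanoseconds of skew caps the
    # usable clock in the hundreds-of-megahertz range.
    def max_parallel_clock_hz(skew_s: float, margin: float = 2.0) -> float:
        return 1.0 / (margin * skew_s)

    print(f"{max_parallel_clock_hz(2e-9) / 1e6:.0f} MHz")  # 2 ns skew -> 250 MHz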

A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information is embedded within the serial signal itself. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCI Express is one example of the general trend toward replacing parallel buses with serial interconnects; other examples include Serial ATA (SATA), USB, Serial Attached SCSI (SAS), FireWire (IEEE 1394), and RapidIO. In digital video, examples in common use are DVI, HDMI and DisplayPort.

Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices.

Form factors

PCI Express (standard)

Intel P3608 NVMe flash SSD, PCI-E add-in card

A PCI Express card fits into a slot of its physical size or larger (with ×16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a ×16 card may not fit into a ×4 or ×8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection.

The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a ×16 slot that runs at ×4, which accepts any ×1, ×2, ×4, ×8 or ×16 card, but provides only four lanes. Its specification may read as "×16 (×4 mode)", while "×size @ ×speed" notation ("×16 @ ×4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate. Standard mechanical sizes are ×1, ×4, ×8, and ×16. Cards with a differing number of lanes must use the next larger mechanical size (i.e., a ×2 card uses the ×4 size, and a ×12 card uses the ×16 size).

The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe the physical dimensions of the card. [10] [11]

PCI type                     Dimensions (height × length, mm)   Dimensions (height × length, in)
Full-length PCI card         107 × 312                          4.21 × 12.28
Half-length PCI card         106.68 × 175.26                    4.2 × 6.9
Low-profile/slim PCI card    64.41 × 119.91–167.64              2.54 × 4.72–6.59

Pinout

The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A side, and the component side is the B side. [12] PRSNT1# and PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that the card is wake capable. [13]

PCI Express connector pinout (×1, ×4, ×8 and ×16 variants)

Pin   Side B         Side A     Description
1     +12 V          PRSNT1#    Must connect to farthest PRSNT2# pin
2     +12 V          +12 V      Main power pins
3     +12 V          +12 V
4     Ground         Ground
5     SMCLK          TCK        SMBus and JTAG port pins
6     SMDAT          TDI
7     Ground         TDO
8     +3.3 V         TMS
9     TRST#          +3.3 V
10    +3.3 V aux     +3.3 V     Standby power
11    WAKE#          PERST#     Link reactivation; fundamental reset
      (key notch)
12    CLKREQ# [14]   Ground     Clock request signal
13    Ground         REFCLK+    Reference clock differential pair
14    HSOp(0)        REFCLK−    Lane 0 transmit data, + and −
15    HSOn(0)        Ground
16    Ground         HSIp(0)    Lane 0 receive data, + and −
17    PRSNT2#        HSIn(0)
18    Ground         Ground     PCI Express ×1 cards end at pin 18
19    HSOp(1)        Reserved   Lane 1 transmit data, + and −
20    HSOn(1)        Ground
21    Ground         HSIp(1)    Lane 1 receive data, + and −
22    Ground         HSIn(1)
23    HSOp(2)        Ground     Lane 2 transmit data, + and −
24    HSOn(2)        Ground
25    Ground         HSIp(2)    Lane 2 receive data, + and −
26    Ground         HSIn(2)
27    HSOp(3)        Ground     Lane 3 transmit data, + and −
28    HSOn(3)        Ground
29    Ground         HSIp(3)    Lane 3 receive data, + and −
30    PWRBRK# [15]   HSIn(3)
31    PRSNT2#        Ground
32    Ground         Reserved   PCI Express ×4 cards end at pin 32
33    HSOp(4)        Reserved   Lane 4 transmit data, + and −
34    HSOn(4)        Ground
35    Ground         HSIp(4)    Lane 4 receive data, + and −
36    Ground         HSIn(4)
37    HSOp(5)        Ground     Lane 5 transmit data, + and −
38    HSOn(5)        Ground
39    Ground         HSIp(5)    Lane 5 receive data, + and −
40    Ground         HSIn(5)
41    HSOp(6)        Ground     Lane 6 transmit data, + and −
42    HSOn(6)        Ground
43    Ground         HSIp(6)    Lane 6 receive data, + and −
44    Ground         HSIn(6)
45    HSOp(7)        Ground     Lane 7 transmit data, + and −
46    HSOn(7)        Ground
47    Ground         HSIp(7)    Lane 7 receive data, + and −
48    PRSNT2#        HSIn(7)
49    Ground         Ground     PCI Express ×8 cards end at pin 49
50    HSOp(8)        Reserved   Lane 8 transmit data, + and −
51    HSOn(8)        Ground
52    Ground         HSIp(8)    Lane 8 receive data, + and −
53    Ground         HSIn(8)
54    HSOp(9)        Ground     Lane 9 transmit data, + and −
55    HSOn(9)        Ground
56    Ground         HSIp(9)    Lane 9 receive data, + and −
57    Ground         HSIn(9)
58    HSOp(10)       Ground     Lane 10 transmit data, + and −
59    HSOn(10)       Ground
60    Ground         HSIp(10)   Lane 10 receive data, + and −
61    Ground         HSIn(10)
62    HSOp(11)       Ground     Lane 11 transmit data, + and −
63    HSOn(11)       Ground
64    Ground         HSIp(11)   Lane 11 receive data, + and −
65    Ground         HSIn(11)
66    HSOp(12)       Ground     Lane 12 transmit data, + and −
67    HSOn(12)       Ground
68    Ground         HSIp(12)   Lane 12 receive data, + and −
69    Ground         HSIn(12)
70    HSOp(13)       Ground     Lane 13 transmit data, + and −
71    HSOn(13)       Ground
72    Ground         HSIp(13)   Lane 13 receive data, + and −
73    Ground         HSIn(13)
74    HSOp(14)       Ground     Lane 14 transmit data, + and −
75    HSOn(14)       Ground
76    Ground         HSIp(14)   Lane 14 receive data, + and −
77    Ground         HSIn(14)
78    HSOp(15)       Ground     Lane 15 transmit data, + and −
79    HSOn(15)       Ground
80    Ground         HSIp(15)   Lane 15 receive data, + and −
81    PRSNT2#        HSIn(15)
82    Reserved       Ground

Legend:
  • Ground pin: zero-volt reference
  • Power pin: supplies power to the PCIe card
  • Card-to-host pin: signal from the card to the motherboard
  • Host-to-card pin: signal from the motherboard to the card
  • Open drain: may be pulled low or sensed by multiple cards
  • Sense pin: tied together on the card
  • Reserved: not presently used, do not connect

Power

8-pin (left) and 6-pin (right) power connectors used on PCI Express cards

All PCI Express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the type of card: [16]:35–36 [17]

  • ×1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined.
  • ×4 and wider cards are limited to 2.1 A at +12 V (25 W) and 25 W combined.
  • A full-sized ×1 card may draw up to the 25 W limit after initialization and software configuration as a "high power device".
  • A full-sized ×16 graphics card [13] may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a "high power device".

Optional connectors add 75 W (6-pin) or 150 W (8-pin) of +12 V power for up to 300 W total (2×75 W + 1×150 W).

  • The Sense0 pin is connected to ground by the cable or power supply, or left floating on the board if no cable is connected.
  • The Sense1 pin is connected to ground by the cable or power supply, or left floating on the board if no cable is connected.

There are cards that use two 8-pin connectors, but as of 2018 this configuration had not been standardized, so such cards must not carry the official PCI Express logo. It allows 375 W total (1×75 W + 2×150 W) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard. The 8-pin PCI Express connector could be confused with the EPS12V connector, which is mainly used for powering SMP and multi-core systems.

6-pin power connector (75 W) [18]

Pin  Description
1    +12 V
2    Not connected (usually +12 V as well)
3    +12 V
4    Ground
5    Sense
6    Ground

8-pin power connector (150 W) [19] [20] [21]

Pin  Description
1    +12 V
2    +12 V
3    +12 V
4    Sense1 (8-pin connected [lower-alpha 1])
5    Ground
6    Sense0 (6-pin or 8-pin connected)
7    Ground
8    Ground
  1. When a 6-pin connector is plugged into an 8-pin receptacle the card is notified by a missing Sense1 that it may only use up to 75 W.
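
The budget rules above compose additively, as the following sketch shows (a simple helper assuming the 75 W slot limit for a full-sized ×16 "high power" card; the function name is illustrative):

    # Total power budget of an add-in card: up to 75 W from the slot for a
    # full-sized x16 "high power device", plus 75 W per 6-pin and 150 W per
    # 8-pin auxiliary connector.
    def card_power_budget_w(slot_w: int = 75, six_pin: int = 0, eight_pin: int = 0) -> int:
        return slot_w + 75 * six_pin + 150 * eight_pin

    print(card_power_budget_w(six_pin=1, eight_pin=1))  # 75 + 75 + 150 = 300 W
    print(card_power_budget_w(eight_pin=2))             # 75 + 2*150 = 375 W (not yet standardized as of 2018)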

PCI Express Mini Card

A WLAN PCI Express Mini Card and its connector
MiniPCI and MiniPCI Express cards in comparison

PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 use PCI Express for expansion cards; however, as of 2015, many vendors are moving toward using the newer M.2 form factor for this purpose.

Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that allow them to be used in full-size slots. [22]

Physical dimensions

Dimensions of PCI Express Mini Cards are 30 × 50.95 mm (width × length) for a Full Mini Card. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. Boards have a thickness of 1.0 mm, excluding the components. A "Half Mini Card" (sometimes abbreviated as HMC) is also specified, with approximately half the physical length, at 26.8 mm.

Electrical interface

PCI Express Mini Card edge connectors provide multiple connections and buses:

  • PCI Express ×1 (with SMBus)
  • USB 2.0
  • Wires to diagnostic LEDs for wireless network (i.e., Wi-Fi) status on the computer's chassis
  • SIM card for GSM and WCDMA applications (UIM signals in the specification)
  • Future extension for another PCIe lane
  • 1.5 V and 3.3 V power

Mini-SATA (mSATA) variant

Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA. [23]

Some notebooks (notably the Asus Eee PC, the Apple MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe ×1 bus intact. [24] This makes the "miniPCIe" flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations.

Also, the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers that allow for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot. No working product has yet been developed.

Intel has numerous desktop boards with the PCIe ×1 Mini-Card slot that typically do not support mSATA SSDs. A list of desktop boards that natively support mSATA in the PCIe ×1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site. [25]

PCI Express M.2 (Mini PCIe v2)

M.2, the new version of the Mini PCI Express form factor, replaces the mSATA standard. Computer bus interfaces provided through the M.2 connector are PCI Express 3.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or device to select which interfaces are to be supported, depending on the desired level of host support and device type.

PCI Express External Cabling

PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007. [26] [27]

Standard cables and connectors have been defined for ×1, ×4, ×8, and ×16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the norm will evolve to reach 500 MB/s, as in PCI Express 2.0. An example of the uses of Cabled PCI Express is a metal enclosure, containing a number of PCIe slots and PCIe-to-ePCIe adapter circuitry. This device would not be possible had it not been for the ePCIe spec.

OCuLink (standing for "optical-copper link", since Cu is the chemical symbol for copper) is an extension for the "cable version of PCI Express", acting as a competitor to version 3 of the Thunderbolt interface. Version 1.0 of OCuLink, released in October 2015, supports up to PCIe 3.0 ×4 lanes (8 GT/s, 3.9 GB/s) over copper cabling; a fiber optic version may appear in the future. [28] [29]

In its latest version, OCuLink will support up to 16 GT/s (8 GB/s total for ×4 lanes), [30] while the maximum bandwidth of a Thunderbolt 3 connector is 5 GB/s.

Derivative forms

Several other types of expansion card are derived from PCIe; these include:

History and revisions

While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners.

Since then, PCIe has undergone several major and minor revisions, improving performance and adding features.

PCI Express link performance [33] [34]

Version        Introduced                Line code   Transfer rate [lower-roman 1]   Throughput [lower-roman 1] (×1 / ×2 / ×4 / ×8 / ×16)
1.0            2003                      8b/10b      2.5 GT/s                        250 MB/s / 0.50 GB/s / 1.0 GB/s / 2.0 GB/s / 4.0 GB/s
2.0            2007                      8b/10b      5.0 GT/s                        500 MB/s / 1.0 GB/s / 2.0 GB/s / 4.0 GB/s / 8.0 GB/s
3.0            2010                      128b/130b   8.0 GT/s                        984.6 MB/s / 1.97 GB/s / 3.94 GB/s / 7.88 GB/s / 15.8 GB/s
4.0            2017                      128b/130b   16.0 GT/s                       1969 MB/s / 3.94 GB/s / 7.88 GB/s / 15.75 GB/s / 31.5 GB/s
5.0 [35] [36]  expected in Q1 2019 [37]  128b/130b   32.0 GT/s [lower-roman 2]       3938 MB/s / 7.88 GB/s / 15.75 GB/s / 31.51 GB/s / 63.0 GB/s
  1. In each direction (each lane is a dual simplex channel).
  2. Initially, 25.0 GT/s was also considered for technical feasibility.

PCI Express 1.0a

In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 gigatransfers per second (GT/s). Transfer rate is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits, which do not provide additional throughput; [38] PCIe 1.x uses an 8b/10b encoding scheme, resulting in a 20% (= 2/10) overhead on the raw channel bandwidth. [39]
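
The relation between transfer rate and usable throughput can be reproduced directly (a small helper whose output matches the link performance table above):

    # Usable per-lane bandwidth = raw transfer rate x line-code payload
    # fraction (8/10 for 8b/10b, 128/130 for 128b/130b), converted to bytes.
    def lane_throughput_mb_s(gt_s: float, payload_bits: int, total_bits: int) -> float:
        bits_per_second = gt_s * 1e9 * payload_bits / total_bits
        return bits_per_second / 8 / 1e6          # bits -> bytes -> MB/s

    print(lane_throughput_mb_s(2.5, 8, 10))       # PCIe 1.x:  250.0 MB/s
    print(lane_throughput_mb_s(8.0, 128, 130))    # PCIe 3.0: ~984.6 MB/s
    print(lane_throughput_mb_s(16.0, 128, 130))   # PCIe 4.0: ~1969.2 MB/s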

PCI Express 1.1

In 2005, PCI-SIG [40] introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.

PCI Express 2.0

A PCI Express 2.0 expansion card that provides USB 3.0 connectivity.

PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. [41] The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and the per-lane throughput rises from 250 MB/s to 500 MB/s. Consequently, a 32-lane PCIe connector (×32) can support an aggregate throughput of up to 16 GB/s.

PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, a graphics card or motherboard designed for v2.0 works with the other component running v1.1 or v1.0a.

The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture. [42]

Intel's first PCIe 2.0 capable chipset was the X38, and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of October 21, 2007. [43] AMD started supporting PCIe 2.0 with its AMD 700 chipset series and Nvidia started with the MCP72. [44] All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a. [45]

Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per-lane, an effective 4 Gbit/s max transfer rate from its 5 GT/s raw data rate.

PCI Express 2.1

PCI Express 2.1 (with its specification dated March 4, 2009) supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0; the speed, however, is the same as PCI Express 2.0. The increase in power drawn from the slot breaks backward compatibility between PCI Express 2.1 cards and some older motherboards with 1.0/1.0a slots, but most motherboards with PCI Express 1.1 connectors received BIOS updates from their manufacturers to restore backward compatibility with PCIe 2.1 cards.

PCI Express 3.0

PCI Express 3.0 Base specification revision 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s), and that it would be backward compatible with existing PCI Express implementations. At that time, it was also announced that the final specification for PCI Express 3.0 would be delayed until Q2 2010. [46] New features for the PCI Express 3.0 specification include a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements for currently supported topologies. [47]

Following a six-month technical analysis of the feasibility of scaling the PCI Express interconnect bandwidth, PCI-SIG's analysis found that 8 gigatransfers per second can be achieved in mainstream silicon process technology, and can be deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) with the PCI Express protocol stack.

PCI Express 3.0 upgrades the encoding scheme to 128b/130b from the previous 8b/10b encoding, reducing the bandwidth overhead from 20% of PCI Express 2.0 to approximately 1.54% (= 2/130). A desirable balance of 0 and 1 bits in the data stream is achieved by XORing a known binary polynomial as a "scrambler" to the data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by applying the XOR a second time. Both the scrambling and descrambling steps are carried out in hardware. PCI Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, nearly doubling the lane bandwidth relative to PCI Express 2.0. [34]
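
The self-inverting nature of the XOR scrambler can be seen in a toy implementation (a sketch only; the 16-bit LFSR polynomial below is illustrative and is not the 23-bit scrambling polynomial defined by the PCIe specification):

    # Additive scrambler: XOR the data with an LFSR keystream. Because the
    # keystream depends only on the seed, applying the same XOR a second
    # time recovers the original data.
    def scramble(data: bytes, seed: int = 0xFFFF) -> bytes:
        state = seed
        out = bytearray()
        for b in data:
            key = 0
            for _ in range(8):
                # Feedback taps chosen for illustration only.
                fb = ((state >> 15) ^ (state >> 13) ^ (state >> 12) ^ (state >> 10)) & 1
                state = ((state << 1) | fb) & 0xFFFF
                key = (key << 1) | fb
            out.append(b ^ key)
        return bytes(out)

    payload = b"\x00\x00\x00\x00"      # a long run of identical bits...
    wire = scramble(payload)           # ...leaves the wire with a balanced pattern
    assert scramble(wire) == payload   # descrambling is the same operation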

On November 18, 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express. [48]

PCI Express 3.1

In September 2013, PCI Express 3.1 specification was announced to be released in late 2013 or early 2014, consolidating various improvements to the published PCI Express 3.0 specification in three areas: power management, performance and functionality. [28] [49] It was released in November 2014. [50]

PCI Express 4.0

On November 29, 2011, PCI-SIG preliminarily announced PCI Express 4.0, [51] providing a 16 GT/s bit rate that doubles the bandwidth provided by PCI Express 3.0, while maintaining backward and forward compatibility in both software support and mechanical interface. [52] The PCI Express 4.0 specifications also bring OCuLink-2, an alternative to the Thunderbolt connector. OCuLink version 2 will have up to 16 GT/s (8 GB/s total for ×4 lanes), [30] while the maximum bandwidth of a Thunderbolt 3 connector is 5 GB/s. Additionally, active and idle power optimizations are to be investigated.

In August 2016, Synopsys presented a test machine running PCIe 4.0 at the Intel Developer Forum. Their IP has been licensed to several firms planning to present their chips and products at the end of 2016. [36]

PCI Express 4.0 was officially announced on June 8, 2017, by PCI-SIG. [53] The specification includes improvements in flexibility, scalability, and power consumption.

NETINT Technologies introduced the first NVMe SSD based on PCIe 4.0 on July 17, 2018, ahead of Flash Memory Summit 2018. [54]

Broadcom announced on 12 September 2018 the first 200 Gbit Ethernet Controller with PCIe 4.0. [55]

AMD announced on 9 January 2019 that its upcoming X570 chipset will support PCIe 4.0. [56] Motherboard manufacturers will be able to update UEFIs on 300 and 400 series motherboards to enable partial PCIe 4.0 support, accessible when a Ryzen 3000 series CPU is installed. This would enable the first PCIe ×16 slot to provide PCIe 4.0 connectivity, while the other CPU-driven slots would remain PCIe 3.0. [57]

PCI Express 5.0

In June 2017, PCI-SIG preliminarily announced the PCI Express 5.0 specification. [53] Bandwidth is expected to increase to 32 GT/s, yielding 63 GB/s in each direction in a 16 lane configuration. It is expected to be standardized in 2019.

PLDA announced the availability of their XpressRICH5 PCIe 5.0 Controller IP based on draft 0.7 of the PCIe 5.0 specification on the same day. [58] [59]

On 10 December 2018, the PCI-SIG released version 0.9 of the PCIe 5.0 specification to its members. [37]

On 17 January 2019, the PCI-SIG announced that version 0.9 of the PCIe 5.0 specification had been ratified, with version 1.0 targeted for release in the first quarter of 2019. [60]

Extensions and future directions

Some vendors offer PCIe over fiber products, [61] [62] [63] but these generally find use only in specific cases where transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that may require additional software to support it; current implementations focus on distance rather than raw bandwidth and typically do not implement a full ×16 link.

Thunderbolt was co-developed by Intel and Apple as a general-purpose high speed interface combining a ×4 PCIe link with DisplayPort and was originally intended to be an all-fiber interface, but due to early difficulties in creating a consumer-friendly fiber interconnect, nearly all implementations are copper systems. A notable exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt adoption through 2011, though several other vendors [64] have announced new products and systems featuring Thunderbolt.

Mobile PCIe specification (abbreviated to M-PCIe) allows PCI Express architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building on top of already existing widespread adoption of M-PHY and its low-power design, Mobile PCIe allows PCI Express to be used in tablets and smartphones. [65]

Draft process

There are 5 primary releases/checkpoints in a PCI-SIG specification: [66]

Historically, the earliest adopters of a new PCIe specification generally begin designing with the Draft 0.5 as they can confidently build up their application logic around the new bandwidth definition and often even start developing for any new protocol features. At the Draft 0.5 stage, however, there is still a strong likelihood of changes in the actual PCIe protocol layer implementation, so designers responsible for developing these blocks internally may be more hesitant to begin work than those using interface IP from external sources.

Hardware protocol summary

The PCIe link is built around dedicated unidirectional couples of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus.

PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The data link layer is subdivided to include a media access control (MAC) sublayer. The physical layer is subdivided into logical and electrical sublayers. The physical logical-sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

Physical layer

Connector pins and lengths

Lanes   Pins (total)     Pins (variable)   Length (total)   Length (variable)
×1      2×18 = 36 [67]   2×7 = 14          25 mm            7.65 mm
×4      2×32 = 64        2×21 = 42         39 mm            21.65 mm
×8      2×49 = 98        2×38 = 76         56 mm            38.65 mm
×16     2×82 = 164       2×71 = 142        89 mm            71.65 mm
An open-end PCI Express ×1 connector, allowing longer cards capable of using more lanes to be plugged while operating at ×1 speeds

The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE), [68] defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.

At the electrical level, each lane consists of two unidirectional differential pairs operating at 2.5, 5, 8 or 16 Gbit/s, depending on the negotiated capabilities. Transmit and receive are separate differential pairs, for a total of four data wires per lane.

A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (×1) link. Devices may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes. This allows for very good compatibility in two ways:

  • a PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (with ×16 as the largest size in common use);
  • a slot of a larger physical size can be wired electrically with fewer lanes, as in the "×16 (×4 mode)" slots described above.

In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.

Even though the two would be signal-compatible, it is not usually possible to place a physically larger PCIe card (e.g., a ×16 sized card) into a smaller slot, though if the PCIe slot is altered or a riser is used, most motherboards will allow this. The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.8 mm. [69] [70]

Data transmission

PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines.

Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. While the lanes are not tightly synchronized, there is a limit to the lane-to-lane skew of 20/8/6 ns for 2.5/5/8 GT/s, so the hardware buffers can re-align the striped data. [71] Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
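
A minimal sketch of that round-robin byte distribution follows (illustrative only; real links also insert framing symbols and perform per-lane skew compensation):

    # Stripe successive bytes across lanes and re-assemble them.
    def stripe(data: bytes, num_lanes: int) -> list[bytes]:
        lanes = [bytearray() for _ in range(num_lanes)]
        for i, b in enumerate(data):
            lanes[i % num_lanes].append(b)     # byte i goes to lane i mod N
        return [bytes(lane) for lane in lanes]

    def destripe(lanes: list[bytes]) -> bytes:
        total = sum(len(lane) for lane in lanes)
        return bytes(lanes[i % len(lanes)][i // len(lanes)] for i in range(total))

    packet = bytes(range(16))
    lanes = stripe(packet, 4)          # lane 0 carries bytes 0, 4, 8, 12; etc.
    assert destripe(lanes) == packet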

As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme [34] to ensure that strings of consecutive identical digits (zeros or ones) are limited in length. This coding prevents the receiver from losing track of where the bit edges are. In this coding scheme every eight (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 instead uses 128b/130b encoding with scrambling. 128b/130b encoding relies on the scrambling to limit the run length of identical-digit strings in data streams and ensure the receiver stays synchronized to the transmitter. It also reduces electromagnetic interference (EMI) by preventing repeating data patterns in the transmitted data stream.

The data link layer performs three vital services for the PCIe link:

  1. sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
  2. ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
  3. initialize and manage flow control credits.

On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP.

On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence-number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence-number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence-number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence-number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to remote transmitter, indicating the TLP was successfully received (and by extension, all TLPs with past sequence-numbers.)

If the transmitter receives a NAK message, or no acknowledgement (NAK or ACK) is received until a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link-layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
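
The receive-side decision can be sketched as follows (a simplified model: zlib.crc32 stands in for the LCRC, which the specification defines separately, and the 12-bit modular sequence arithmetic is reduced to plain integers):

    import zlib
    from dataclasses import dataclass

    @dataclass
    class Tlp:
        seq: int        # sequence number assigned by the transmitter
        payload: bytes  # transaction-layer packet contents
        lcrc: int       # link CRC appended at the data link layer

    def make_tlp(seq: int, payload: bytes) -> Tlp:
        return Tlp(seq, payload, zlib.crc32(seq.to_bytes(2, "big") + payload))

    class LinkReceiver:
        def __init__(self) -> None:
            self.expected_seq = 0
        def receive(self, tlp: Tlp) -> str:
            crc_ok = tlp.lcrc == zlib.crc32(tlp.seq.to_bytes(2, "big") + tlp.payload)
            if not crc_ok or tlp.seq != self.expected_seq:
                # Discard the bad TLP and request replay after the last good one.
                return f"NAK {self.expected_seq - 1}"
            self.expected_seq += 1          # real hardware wraps modulo 2**12
            return f"ACK {tlp.seq}"         # forward the TLP to the transaction layer

    rx = LinkReceiver()
    print(rx.receive(make_tlp(0, b"completion")))   # ACK 0
    bad = make_tlp(1, b"posted write")
    bad.lcrc ^= 1                                   # simulate a bit error
    print(rx.receive(bad))                          # NAK 0 -> transmitter replays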

In addition to sending and receiving TLPs generated by the transaction layer, the data-link layer also generates and consumes DLLPs, data link layer packets. ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).

In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.

Transaction layer

PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.

PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each received buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
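
A minimal model of this credit accounting (a sketch for one buffer type; the class name and single-counter simplification are illustrative):

    # Transmitter-side credit check: send only while consumed credits stay
    # within the limit the receiver has advertised; credit returns raise
    # the limit. Real PCIe tracks header and data credits per buffer type
    # with modular counters.
    class CreditLink:
        def __init__(self, advertised: int) -> None:
            self.credit_limit = advertised
            self.consumed = 0

        def try_send(self, tlp_credits: int) -> bool:
            if self.consumed + tlp_credits > self.credit_limit:
                return False                 # must wait for a credit return
            self.consumed += tlp_credits
            return True

        def credit_return(self, freed: int) -> None:
            self.credit_limit += freed       # receiver drained its buffer

    link = CreditLink(advertised=8)
    assert link.try_send(4)        # 4 of 8 credits consumed
    assert not link.try_send(6)    # would exceed the limit; transmitter stalls
    link.credit_return(4)
    assert link.try_send(6)        # allowed after the credit return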

PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (×16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels.

Like other high data rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from an increased number of lanes (×2, ×4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized as short data packets with frequent enforced acknowledgements. [72] This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, it does not require the same tolerance for transmission errors as a protocol for communication over longer distances, and thus this loss of efficiency is not particular to PCIe.

Applications

Asus Nvidia GeForce GTX 650 Ti, a PCI Express 3.0 ×16 graphics card
The Nvidia GeForce GTX 1070, a PCI Express 3.0 ×16 graphics card
Intel 82574L Gigabit Ethernet NIC, a PCI Express ×1 card
A Marvell-based SATA 3.0 controller, as a PCI Express ×1 card

PCI Express operates in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), a passive backplane interconnect and as an expansion card interface for add-in boards.

In virtually all modern (as of 2012) PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system-processor with both integrated-peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

As of 2013 PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released since 2010 by AMD (ATI) and Nvidia use PCI Express. Nvidia uses the high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple graphics cards of the same chipset and model number to run in tandem, allowing increased performance. AMD has also developed a multi-GPU system based on PCIe called CrossFire. AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe ×16 slots, allowing tri-GPU and quad-GPU card configurations.

Special power cables, called PCI-e power cables, are required for high-end graphics cards. [73]

External GPUs

Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook with any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard interface or a Thunderbolt interface. The ExpressCard interface provides bit rates of 5 Gbit/s (0.5 GB/s throughput), whereas the Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput).

In 2006, Nvidia developed the Quadro Plex external PCIe family of GPUs that can be used for advanced graphic applications for the professional market. [74] These video cards require a PCI Express ×8 or ×16 slot for the host-side card which connects to the Plex via a VHDCI carrying eight PCIe lanes. [75]

In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling system that is compatible with PCIe ×8 signal transmissions. [76] This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter. [77] Around 2010 Acer launched the Dynavivid graphics dock for XGP. [78]

In 2010 external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include MSI GUS, [79] Village Instrument's ViDock, [80] the Asus XG Station, Bplus PE4H V3.2 adapter, [81] as well as more improvised DIY devices. [82] However such solutions are limited by the size (often only ×1) and version of the available PCIe slot on a laptop.

The Intel Thunderbolt interface has opened the door to new and faster products that connect to a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at ×8 and one at ×4). [83] MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated to video cards. [84] Other products, such as Sonnet's Echo Express [85] and mLogic's mLink, are Thunderbolt PCIe chassis in a smaller form factor. [86] However, all these products require a computer with a Thunderbolt port, such as Apple's MacBook Pro models released in late 2013.

In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe ×16 interface. [87]

Storage devices

An OCZ RevoDrive SSD, a full-height ×4 PCI Express card

The PCI Express protocol can be used as the data interface to flash memory devices, such as memory cards and solid-state drives (SSDs).

XQD card is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 500 MB/s. [88]

Many high-performance, enterprise-class SSDs are designed as PCI Express RAID controller cards with flash memory chips placed directly on the circuit board, utilizing proprietary interfaces and custom drivers to communicate with the operating system; this allows much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) when compared to Serial ATA or SAS drives. [89] [90] For example, in 2011 OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 ×16 slot with a maximum capacity of 12 TB and performance of up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in random transfers. [91]

SATA Express is an interface for connecting SSDs, by providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device. [92] M.2 is a specification for internally mounted computer expansion cards and associated connectors, which also uses multiple PCI Express lanes. [93]

PCI Express storage devices can implement both the AHCI logical interface for backward compatibility, and the NVM Express logical interface for much faster I/O operations, exploiting the internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI over PCI Express. [94]

Cluster interconnect

Certain data-center applications (such as large computer clusters) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable protocols is undesirable, and a lower-level interconnect such as InfiniBand, RapidIO, or NUMAlink is needed. Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose, [95] but as of 2015 solutions were available only from niche vendors such as Dolphin ICS.

Competing protocols

Other communications standards based on high-bandwidth serial architectures include InfiniBand, RapidIO, HyperTransport, Intel QuickPath Interconnect, and the Mobile Industry Processor Interface (MIPI). The differences are based on the trade-offs between flexibility and extensibility versus latency and overhead. For example, making the system hot-pluggable, as with InfiniBand but not PCI Express, requires that software track network topology changes.

Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.
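
The trade-off is easy to quantify. Below is a back-of-the-envelope sketch in Python, assuming a fixed per-packet overhead of 24 bytes (approximately the framing, sequence number, TLP header, and LCRC of a Gen 1/2 posted write; the exact figure varies and is used here only for illustration):

```python
# Back-of-the-envelope link efficiency versus payload size, assuming a
# fixed per-packet overhead of 24 bytes (an approximation covering
# framing, sequence number, TLP header and LCRC for a Gen 1/2 write).
OVERHEAD_BYTES = 24

for payload in (16, 32, 64, 128, 256, 512):
    efficiency = payload / (payload + OVERHEAD_BYTES)
    print(f"{payload:4d}-byte payload: {efficiency:6.1%} of raw link bandwidth")
```

With 16-byte payloads only 40% of the raw bandwidth carries data, while 256-byte payloads push efficiency above 90%; protocols intended to serve as memory interfaces therefore keep headers as small as possible.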

PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.

Delays in PCIe 4.0 implementations led to the Gen-Z consortium, the CCIX [96] effort, and an open Coherent Accelerator Processor Interface (CAPI) all being announced by the end of 2016. [97]

In March 2019, Intel presented Compute Express Link (CXL), [98] a new interconnect bus.

Notes

  1. The card's Serial ATA power connector is present because the USB 3.0 ports require more power than the PCI Express bus can supply. More often, a 4-pin Molex power connector is used.

Related Research Articles

Accelerated Graphics Port

The Accelerated Graphics Port (AGP) was designed as a high-speed point-to-point channel for attaching a video card to a computer system, primarily to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP has been progressively phased out in favor of PCI Express (PCIe); by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express.

Backplane

A backplane is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used as a backbone to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications.

Industry Standard Architecture

Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles.

USB

USB is an industry standard that establishes specifications for cables, connectors and protocols for connection, communication and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is currently maintained by the USB Implementers Forum. There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x; the fourth, called USB4, is scheduled to be published in mid-2019.

PC Card

In computing, PC Card is a configuration for a parallel communication peripheral interface, designed for laptop computers. Originally introduced as PCMCIA, the PC Card standard as well as its successors, such as CardBus, were defined and developed by the Personal Computer Memory Card International Association (PCMCIA).

Expansion card

In computing, the expansion card, expansion board, adapter card or accessory card is a printed circuit board that can be inserted into an electrical connector, or expansion slot, on a computer motherboard, backplane or riser card to add functionality to a computer system via the expansion bus.

HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a technology for interconnection of computer processors. It is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001. The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology.

Serial ATA

Serial ATA is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the earlier Parallel ATA (PATA) standard to become the predominant interface for storage devices.

The AMD 700 chipset series is a set of chipsets designed by ATI for AMD Phenom processors to be sold under the AMD brand. Several members were launched at the end of 2007 and in the first half of 2008; others launched throughout the rest of 2008.

Serial Digital Video Out (SDVO) is a proprietary Intel technology introduced with their 9xx-series of motherboard chipsets.

USB 3.0

USB 3.0 is the third major version of the Universal Serial Bus (USB) standard for interfacing computers and electronic devices. Among other improvements, USB 3.0 adds the new transfer rate referred to as SuperSpeed USB (SS) that can transfer data at up to 5 Gbit/s (625 MB/s), which is about 10 times faster than the USB 2.0 standard. It is recommended that manufacturers distinguish USB 3.0 connectors from their USB 2.0 counterparts by using blue color for the Standard-A receptacles and plugs, and by the initials SS.

Intel X58

The Intel X58 is an Intel chip designed to connect Intel processors with the Intel QuickPath Interconnect (QPI) interface to peripheral devices. Supported processors implement the Nehalem microarchitecture and therefore have an integrated memory controller (IMC), so the X58 does not have a memory interface. Initially the supported processors were the Core i7, but the chip also supported Nehalem- and Westmere-based Xeon processors.

Thunderbolt is the brand name of a hardware interface developed by Intel that allows the connection of external peripherals to a computer. Thunderbolt 1 and 2 use the same connector as Mini DisplayPort (MDP), whereas Thunderbolt 3 re-uses the Type-C connector from USB. It was initially developed and marketed under the name Light Peak, and first sold as part of a consumer product on 24 February 2011.

The Intel X79 is a Platform Controller Hub (PCH) designed and manufactured by Intel for their LGA 2011 and LGA 2011-1 sockets.

NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus. The acronym NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCI Express (PCIe) add-in cards, M.2 cards, and other forms. NVM Express, as a logical device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.

M.2

M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for internally mounted computer expansion cards and associated connectors. It replaces the mSATA standard, which uses the PCI Express Mini Card physical card layout and connectors. M.2's more flexible physical specification allows different module widths and lengths, and, paired with the availability of more advanced interfacing features, makes the M.2 more suitable than mSATA for solid-state storage applications in general and particularly for the use in small devices such as ultrabooks or tablets.

U.2

U.2, formerly known as SFF-8639, is a computer interface for connecting SSDs to a computer. It uses up to four PCI Express lanes.

References

  1. "Archived copy". Archived from the original on 2017-03-24. Retrieved 2017-03-23.CS1 maint: Archived copy as title (link)
  2. https://www.pcmag.com/encyclopedia/term/48998/pci-express
  3. Zhang, Yanmin; Nguyen, T Long (June 2007). "Enable PCI Express Advanced Error Reporting in the Kernel" (PDF). Proceedings of the Linux Symposium. Fedora project. Archived (PDF) from the original on 2016-03-10.
  4. "Flash Memory Form Factors – The Fundamentals of Reliable Flash Storage". Hyperstone. https://www.hyperstone.com. Retrieved 19 April 2018.
  5. Ravi Budruk (2007-08-21). "PCI Express Basics". PCI-SIG. Archived from the original (PDF) on 2014-07-15. Retrieved 2014-07-15.
  6. "How PCI Express Works". How Stuff Works. Archived from the original on 2009-12-03. Retrieved 2009-12-07.
  7. 1 2 3 "PCI Express Architecture Frequently Asked Questions". PCI-SIG. Archived from the original on 13 November 2008. Retrieved 23 November 2008.
  8. "PCI Express Bus". Interface bus. Archived from the original on 2007-12-08. Retrieved 2010-06-12.
  9. "PCI Express – An Overview of the PCI Express Standard". Developer Zone. National Instruments. 2009-08-13. Archived from the original on 2010-01-05. Retrieved 2009-12-07.
  10. "New PCIe Form Factor Enables Greater PCIe SSD Adoption". NVM Express. 12 June 2012. Archived from the original on 6 September 2015.
  11. "Memblaze PBlaze4 AIC NVMe SSD Review". StorageReview. 21 December 2015.
  12. "What is the A side, B side configuration of PCI cards". Frequently Asked Questions. Adex Electronics. 1998. Archived from the original on 2011-11-02. Retrieved Oct 24, 2011.
  13. PCI Express Card Electromechanical Specification Revision 2.0
  14. "L1 PM Substates with CLKREQ, Revision 1.0a" (PDF). PCI-SIG. Retrieved 2018-11-08.
  15. "Emergency Power Reduction Mechanism with PWRBRK Signal ECN" (PDF). PCI-SIG. Retrieved 2018-11-08.
  16. PCI Express Card Electromechanical Specification Revision 1.1
  17. Schoenborn, Zale (2004), Board Design Guidelines for PCI Express Architecture (PDF), PCI-SIG, pp. 19–21, archived (PDF) from the original on 2016-03-27
  18. PCI Express x16 Graphics 150W-ATX Specification Revision 1.0
  19. PCI Express 225 W/300 W High Power Card Electromechanical Specification Revision 1.0
  20. PCI Express Card Electromechanical Specification Revision 3.0
  21. Yun Ling (2008-05-16). "PCIe Electromechanical Updates". Archived from the original on 2015-11-05. Retrieved 2015-11-07.
  22. "MP1: Mini PCI Express / PCI Express Adapter". hwtools.net. 2014-07-18. Archived from the original on 2014-10-03. Retrieved 2014-09-28.
  23. "mSATA FAQ: A Basic Primer". Notebook review. Archived from the original on 2012-02-10.
  24. "Eee PC Research". ivc (wiki). Archived from the original on 30 March 2010. Retrieved 26 October 2009.
  25. "Desktop Board Solid-state drive (SSD) compatibility". Intel. Archived from the original on 2016-01-02.
  26. "PCI Express External Cabling 1.0 Specification". Archived from the original on 10 February 2007. Retrieved 9 February 2007.
  27. "PCI Express External Cabling Specification Completed by PCI-SIG". PCI SIG. 2007-02-07. Archived from the original on 2013-11-26. Retrieved 2012-12-07.
  28. 1 2 3 "PCI SIG discusses M‐PCIe oculink & 4th gen PCIe", The Register, UK, September 13, 2013, archived from the original on June 29, 2017
  29. OCuLink 2nd gen. Archived 2017-03-13 at the Wayback Machine
  30. "Supermicro Universal I/O (UIO) Solutions". Supermicro.com. Archived from the original on 2014-03-24. Retrieved 2014-03-24.
  31. "Get ready for M-PCIe testing", PC board design, EDN
  32. "PCI Express 4.0 Frequently Asked Questions". pcisig.com. PCI-SIG. Archived from the original on 2014-05-18. Retrieved 2014-05-18.
  33. 1 2 3 "PCI Express 3.0 Frequently Asked Questions". pcisig.com. PCI-SIG. Archived from the original on 2014-02-01. Retrieved 2014-05-01.
  34. "PCIe 4.0 Heads to Fab, 5.0 to Lab". EE Times. 2016-06-26. Archived from the original on 2016-08-28. Retrieved 2016-08-27.
  35. 1 2 "Archived copy". Archived from the original on 2016-08-19. Retrieved 2016-08-18.CS1 maint: Archived copy as title (link)
  36. 1 2 "Doubling Bandwidth in Under Two Years: PCI Express® Base Specification Revision 5.0, Version 0.9 is Now Available to Members". pcisig.com. Retrieved 2018-12-12.
  37. "What does GT/s mean, anyway?". TM World. Archived from the original on 2012-08-14. Retrieved 2012-12-07.
  38. "Deliverable 12.2". SE: Eiscat. Archived from the original on 2010-08-17. Retrieved 2012-12-07.
  39. PCI SIG, archived from the original on 2008-07-06
  40. "PCI Express Base 2.0 specification announced" (PDF) (Press release). PCI-SIG. 15 January 2007. Archived from the original (PDF) on 4 March 2007. Retrieved 9 February 2007. — note that in this press release the term aggregate bandwidth refers to the sum of incoming and outgoing bandwidth; using this terminology the aggregate bandwidth of full duplex 100BASE-TX is 200 Mbit/s.
  41. Smith, Tony (11 October 2006). "PCI Express 2.0 final draft spec published". The Register. Archived from the original on 29 January 2007. Retrieved 9 February 2007.
  42. Key, Gary; Fink, Wesley (21 May 2007). "Intel P35: Intel's Mainstream Chipset Grows Up". AnandTech. Archived from the original on 23 May 2007. Retrieved 21 May 2007.
  43. Huynh, Anh (8 February 2007). "NVIDIA "MCP72" Details Unveiled". AnandTech. Archived from the original on 10 February 2007. Retrieved 9 February 2007.
  44. "Intel P35 Express Chipset Product Brief" (PDF). Intel. Archived (PDF) from the original on 26 September 2007. Retrieved 5 September 2007.
  45. Hachman, Mark (2009-08-05). "PCI Express 3.0 Spec Pushed Out to 2010". PC Mag. Archived from the original on 2014-01-07. Retrieved 2012-12-07.
  46. "PCI Express 3.0 Bandwidth: 8.0 Gigatransfers/s". ExtremeTech. 9 August 2007. Archived from the original on 24 October 2007. Retrieved 5 September 2007.
  47. "PCI Special Interest Group Publishes PCI Express 3.0 Standard". X bit labs. 18 November 2010. Archived from the original on 21 November 2010. Retrieved 18 November 2010.
  48. "PCIe 3.1 and 4.0 Specifications Revealed". eteknix.com. Archived from the original on 2016-02-01.
  49. "Trick or Treat… PCI Express 3.1 Released!". synopsys.com. Archived from the original on 2015-03-23.
  50. "PCI Express 4.0 evolution to 16 GT/s, twice the throughput of PCI Express 3.0 technology" (press release). PCI-SIG. 2011-11-29. Archived from the original on 2012-12-23. Retrieved 2012-12-07.
  51. https://pcisig.com/faq?field_category_value%5B%5D=pci_express_4.0#4415 Archived 2016-10-20 at the Wayback Machine
  52. Born, Eric (8 June 2017). "PCIe 4.0 specification finally out with 16 GT/s on tap". Tech Report. Archived from the original on 8 June 2017. Retrieved 8 June 2017.
  53. "NETINT Introduces Codensity with Support for PCIe 4.0 - NETINT Technologies". NETINT Technologies. 2018-07-17. Retrieved 2018-09-28.
  54. https://www.broadcom.com/company/news/product-releases/2367107
  55. https://wccftech.com/amd-ryzen-3000-zen-2-desktop-am4-processors-launching-mid-2019/
  56. https://www.tomshardware.com/news/amd-ryzen-pcie-4.0-motherboard,38401.html
  57. "PLDA Announces Availability of XpressRICH5™ PCIe 5.0 Controller IP | PLDA.com". www.plda.com. Retrieved 2018-06-28.
  58. "XpressRICH5 for ASIC | PLDA.com". www.plda.com. Retrieved 2018-06-28.
  59. "PCIe 5.0 Is Ready For Prime Time". tomshardware.com. Retrieved 18 January 2019.
  60. "PLX demo shows PCIe over fiber as data center clustering interconnect". Cabling install. Penn Well. Retrieved 29 August 2012.
  61. "Introduced second generation PCI Express Gen 2 over fiber optic systems". Adnaco. 2011-04-22. Archived from the original on 4 October 2012. Retrieved 29 August 2012.
  62. "PCIe Active Optical Cable System". Archived from the original on 30 December 2014. Retrieved 23 October 2015.
  63. "Acer, Asus to Bring Intel's Thunderbolt Speed Technology to Windows PCs". PC World. 2011-09-14. Archived from the original on 2012-01-18. Retrieved 2012-12-07.
  64. Kevin Parrish (2013-06-28). "PCIe for Mobile Launched; PCIe 3.1, 4.0 Specs Revealed". Tom's Hardware. Retrieved 2014-07-10.
  65. "PCI Express 4.0 Draft 0.7 & PIPE 4.4 Specifications - What Do They Mean to Designers? — Synopsys Technical Article | ChipEstimate.com". www.chipestimate.com. Retrieved 2018-06-28.
  66. "PCI Express 1×, 4×, 8×, 16× bus pinout and wiring @". RU: Pinouts. Archived from the original on 2009-11-25. Retrieved 2009-12-07.
  67. "PHY Interface for the PCI Express Architecture" (PDF) (version 2.00 ed.). Intel. Archived from the original (PDF) on 17 March 2008. Retrieved 21 May 2008.
  68. "Mechanical Drawing for PCI Express Connector". Interface bus. Retrieved 7 December 2007.
  69. "FCi schematic for PCIe connectors" (PDF). FCI connect. Retrieved 7 December 2007.
  70. PCI EXPRESS BASE SPECIFICATION, REV. 3.0 Table 4-24
  71. "Computer Peripherals And Interfaces". Technical Publications Pune. Archived from the original on 25 February 2014. Retrieved 23 July 2009.
  72. "All about the various PC power supply cables and connectors". www.playtool.com. Retrieved 2018-11-10.
  73. "NVIDIA Introduces NVIDIA Quadro® Plex – A Quantum Leap in Visual Computing". Nvidia. 2006-08-01. Archived from the original on 2006-08-24.
  74. "Quadro Plex VCS – Advanced visualization and remote graphics". nVidia. Archived from the original on 2011-04-28. Retrieved 2010-09-11.
  75. "XGP". ATI. AMD. Archived from the original on 2010-01-29. Retrieved 2010-09-11.
  76. Fujitsu-Siemens Amilo GraphicBooster External Laptop GPU Released, 2008-12-03, archived from the original on 2015-10-16, retrieved 2015-08-09
  77. DynaVivid Graphics Dock from Acer arrives in France, what about the US?, 2010-08-11, archived from the original on 2015-10-16, retrieved 2015-08-09
  78. Dougherty, Steve (May 22, 2010), "MSI to showcase 'GUS' external graphics solution for laptops at Computex", TweakTown
  79. Hellstrom, Jerry (August 9, 2011), "ExpressCard trying to pull a (not so) fast one?", PC Perspective (editorial), archived from the original on February 1, 2016
  80. "PE4H V3.2 (PCIe x16 Adapter)". Hwtools.net. Archived from the original on 2014-02-14. Retrieved 2014-02-05.
  81. O'Brien, Kevin (September 8, 2010), "How to Upgrade Your Notebook Graphics Card Using DIY ViDOCK", Notebook review, archived from the original on December 13, 2013
  82. Lal Shimpi, Anand (September 7, 2011), "The Thunderbolt Devices Trickle In: Magma's ExpressBox 3T", AnandTech, archived from the original on March 4, 2016
  83. "MSI GUS II external GPU enclosure with Thunderbolt" (hands-on). The Verge. Archived from the original on 2012-02-13. Retrieved 2012-02-12.
  84. "PCI express graphics, Thunderbolt", Tom’s hardware
  85. "M logics M link Thunderbold chassis no shipping", Engadget, Dec 13, 2012, archived from the original on 2017-06-25
  86. Burns, Chris (October 17, 2017), "2017 Razer Blade Stealth and Core V2 detailed", SlashGear, archived from the original on October 17, 2017
  87. "CompactFlash Association readies next-gen XQD format, promises write speeds of 125 MB/s and up". Engadget. 2011-12-08. Archived from the original on 2014-05-19. Retrieved 2014-05-18.
  88. Zsolt Kerekes (December 2011). "What's so very different about the design of Fusion-io's ioDrives / PCIe SSDs?". storagesearch.com. Archived from the original on 2013-09-23. Retrieved 2013-10-02.
  89. "Fusion-io ioDrive Duo Enterprise PCIe Review". storagereview.com. 2012-07-16. Archived from the original on 2013-10-04. Retrieved 2013-10-02.
  90. "OCZ Demos 4 TiB, 16 TiB Solid-State Drives for Enterprise". X-bit labs. Archived from the original on 2013-03-25. Retrieved 2012-12-07.
  91. "Enabling Higher Speed Storage Applications with SATA Express". SATA-IO. Archived from the original on 2012-11-27. Retrieved 2012-12-07.
  92. "SATA M.2 Card". SATA-IO. Archived from the original on 2013-10-03. Retrieved 2013-09-14.
  93. "SCSI Express". SCSI Trade Association. Archived from the original on 2013-01-27. Retrieved 2012-12-27.
  94. Meduri, Vijay (2011-01-24). "A Case for PCI Express as a High-Performance Cluster Interconnect". HPCwire. Archived from the original on 2013-01-14. Retrieved 2012-12-07.
  95. "Archived copy". Archived from the original on 2016-11-28. Retrieved 2016-12-17.CS1 maint: Archived copy as title (link)
  96. Evan Koblentz (February 3, 2017). "New PCI Express 4.0 delay may empower next-gen alternatives". Tech Republic. Archived from the original on April 1, 2017. Retrieved March 31, 2017.
  97. Compute Express Link (CXL) site

Further reading