Release date | October 12, 2022
---|---
Manufactured by | TSMC
Designed by | Nvidia
Marketed by | Nvidia
Codename | AD10x
Architecture | Ada Lovelace
Models | GeForce RTX series
Cores | 20–128 streaming multiprocessors (2560–16384 CUDA cores)
Transistors | 18.9–76.3 billion
Fabrication process | TSMC 4N [1]
DirectX | Direct3D 12 Ultimate (feature level 12_2), Shader Model 6.8
OpenCL | OpenCL 3.0 [a]
OpenGL | OpenGL 4.6
Vulkan | Vulkan 1.3
Predecessor | GeForce 30 series
Support status | Supported
The GeForce 40 series is the most recent family of consumer-level graphics processing units developed by Nvidia, succeeding the GeForce 30 series. The series was announced on September 20, 2022, at the GPU Technology Conference (GTC) 2022 event, and launched on October 12, 2022, starting with its flagship model, the RTX 4090. [1]
The cards are based on Nvidia's Ada Lovelace architecture and feature Nvidia RTX's third-generation RT cores for hardware-accelerated real-time ray tracing, and fourth-generation deep-learning-focused Tensor Cores.
Architectural highlights of the Ada Lovelace architecture include fourth-generation Tensor Cores, third-generation RT cores with support for shader execution reordering and opacity micromaps, DLSS 3 frame generation, AV1 hardware encoding, and a greatly enlarged L2 cache. [3]
The RTX 4090 was released as the first model of the series on October 12, 2022, launched at $1,599 US, [1] and the 16GB RTX 4080 followed on November 16, 2022 at $1,199 US. An RTX 4080 12GB was announced in September 2022, originally to be priced at $899 US; however, following controversy in the media, it was "unlaunched" by Nvidia. On January 5, 2023, that model was released as the RTX 4070 Ti for $799 US. The RTX 4070 was then released on April 13, 2023 at a $599 US MSRP, the RTX 4060 Ti on May 24, 2023 at $399 US, and the RTX 4060 on June 29, 2023 at $299 US. An RTX 4060 Ti 16GB followed on July 18, 2023 at $499 US. On January 8, 2024, Nvidia announced the RTX 4070 SUPER at $599, the RTX 4070 Ti SUPER at $799, and the RTX 4080 SUPER at $999; these cards launched with higher specifications and lower prices than their original counterparts. [11] In the same vein, production of the RTX 4080 and RTX 4070 Ti stopped with the arrival of the SUPER series, while the RTX 4070 remained in production. [12] In August 2024, Nvidia, citing the need "to improve supply and availability", introduced a variant of the RTX 4070 with GDDR6 memory running at 20 Gbit/s, with all other specifications unchanged. [13] [14]
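The bandwidth cost of the GDDR6 variant can be estimated from the RTX 4070's 192-bit bus as listed in the specification table. A rough sketch, assuming the per-pin data rates quoted above:

```python
# Rough estimate of memory bandwidth for the two RTX 4070 memory variants.
# bandwidth (GB/s) = per-pin data rate (Gbit/s) * bus width (bits) / 8 bits per byte
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 192) -> float:
    return data_rate_gbps * bus_width_bits / 8

original_gddr6x = bandwidth_gbs(21)  # original RTX 4070: 21 Gbit/s GDDR6X
gddr6_variant = bandwidth_gbs(20)    # August 2024 variant: 20 Gbit/s GDDR6
print(original_gddr6x, gddr6_variant)  # 504.0 480.0, a cut of about 5%
```

This matches the 504 GB/s figure listed for the original RTX 4070 in the table below.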
Model | Launch | Launch MSRP (USD/CNY) | Code name(s) | Transistors (billion) | Die size (mm2) | Core config [b] | SM count [c] | L2 cache (MB) | Clock speeds [d] | Fillrate [e] [f] | Memory | Processing power (TFLOPS) | TDP | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core clock (MHz) | Memory (Gb/s) | Pixel (Gpx/s) | Texture (Gtex/s) | Size (GB) | Bandwidth (GB/s) | Bus width (bit) | Half (boost) | Single (boost) | Double (boost) | Tensor compute (sparse) | | | | | | | | | |
GeForce RTX 4060 [16] [17] [18] | Jun 29, 2023 | $299 | AD107-400 | 18.9 | 146 | 3072 96:32:24:96 | 24 | 24 | 1830 (2460) | 17 | 58.6 (78.7) | 175.7 (236.2) | 8 | 272 | 128 | (15.11) | (15.11) | (0.236) | 115 W | |
GeForce RTX 4060 Ti [16] [19] [20] | May 24, 2023 | $399 | AD106-350 | 22.9 | 190 | 4352 136:48:34:136 | 34 | 32 | 2310 (2540) | 18 | (121.7) | (344.8) | 288 | (22.06) | (22.06) | (0.345) | 160 W | |||
Jul 18, 2023 | $499 | AD106-351 | 16 | 165 W | ||||||||||||||||
GeForce RTX 4070 [21] [22] [23] | Apr 13, 2023 | $599 | AD104-250 | 35.8 | 294.5 | 5888 184:64:46:184 | 46 | 36 | 1920 (2475) | 21 | 122.9 (158.4) | 353.3 (455.4) | 12 | 504 | 192 | 22.61 (29.15) | 22.61 (29.15) | 0.353 (0.455) | 116.8 [233.6] | 200 W |
GeForce RTX 4070 SUPER [24] [25] | Jan 17, 2024 | AD104-350 | 7168 224:80:56:224 | 56 | 48 | 1980 (2475) | 158.4 (198) | 443.5 (554.4) | (35.48) | (35.48) | (0.554) | 220 W | ||||||||
GeForce RTX 4070 Ti [21] [26] | Jan 5, 2023 | $799 | AD104-400 | 7680 240:80:60:240 | 60 | 2310 (2610) | 184.8 (208.8) | 554.4 (626.4) | 35.48 (40.09) | 35.48 (40.09) | 0.554 (0.626) | 160.4 [320.8] | 285 W | |||||||
GeForce RTX 4070 Ti SUPER [24] [27] | Jan 24, 2024 | AD103-275 | 45.9 | 378.6 | 8448 264:96:66:264 | 66 | 2340 (2610) | 224.6 (292.3) | 617.8 (689) | 16 | 672 | 256 | (44.1) | (44.1) | (0.689) | |||||
GeForce RTX 4080 (12 GB) [28] [29] | Unlaunched [30] [31] | $899 | AD104-400 | 35.8 | 294.5 | 7680 240:80:60:240 | 60 | 2310 (2610) | 184.8 (208.8) | 554.4 (626.4) | 12 | 504.2 | 192 | 35.48 (40.09) | 35.48 (40.09) | 0.554 (0.626) | 160.4 [320.8] | |||
GeForce RTX 4080 [32] [33] | Nov 16, 2022 | $1,199 | AD103-300 | 45.9 | 378.6 | 9728 304:112:76:304 | 76 | 64 | 2210 (2505) | 22.4 | 247.5 (280.6) | 671.8 (761.5) | 16 | 716.8 | 256 | 42.99 (48.74) | 42.99 (48.74) | 0.672 (0.762) | 194.9 [389.8] | 320 W |
GeForce RTX 4080 SUPER [24] [34] | Jan 31, 2024 | $999 | AD103-400 | 10240 320:112:80:320 | 80 | 2205 (2550) | 23 | 246.9 (280.6) | 705.6 (801.6) | 736 | (51.3) | (51.3) | (0.802) | |||||||
GeForce RTX 4090 D [35] [36] [37] | Dec 28, 2023 (China only) | ¥12,999 | AD102-250 | 76.3 | 608.5 | 14592 456:176:114:456 | 114 | 72 | 2280 (2520) | 21 | 401.2 (443.5) | 1,040 (1,149) | 24 | 1008 | 384 | (73.54) | (73.54) | (1.149) | 425 W | |
GeForce RTX 4090 [38] [39] | Oct 12, 2022 | $1,599 | AD102-300 | 16384 512:176:128:512 | 128 | 2230 (2520) | 392.5 (443.5) | 1,142 (1,290) | 73.07 (82.58) | 73.07 (82.58) | 1.142 (1.290) | 330.3 [660.6] | 450 W |
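The single-precision figures in the table follow from a standard rule of thumb: each CUDA core performs one fused multiply-add (two floating-point operations) per clock. A minimal sketch checking the RTX 4090's listed numbers:

```python
# Derive the RTX 4090's FP32 throughput from its spec-table entries.
cuda_cores = 16384
boost_clock_hz = 2520e6  # 2520 MHz boost clock
# Each CUDA core performs one FMA (= 2 FLOPs) per clock.
tflops = 2 * cuda_cores * boost_clock_hz / 1e12
print(round(tflops, 2))  # 82.58, matching the table's boost TFLOPS
```

The same formula reproduces the other rows when their core counts and boost clocks are substituted.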
Model | Launch | Code name(s) | Transistors (billion) | Die size (mm2) | Core config [b] | SM count [c] | L2 cache (MB) | Clock speeds [d] | Memory | TDP | |||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core clock (MHz) | Memory (Gb/s) | Size (GB) | Bandwidth (GB/s) | Bus width (bit) | | | | | | | | |
GeForce RTX 4050 Laptop [40] [41] [42] | Feb 22, 2023 | AD107 (GN21-X2) | 18.9 | 156 [43] | 2560 80:32:20:80 | 20 | 12 | 1140–2370 (1605–2370) | 16 | 6 | 192 | 96 | 35–115 W |
GeForce RTX 4060 Laptop [40] [44] [45] | AD107 (GN21-X4) | 3072 96:32:24:96 | 24 | 32 | 1140–2295 (1470–2370) | 8 | 256 | 128 | |||||
GeForce RTX 4070 Laptop [40] [46] [47] | AD106 (GN21-X6) | 22.9 | 186 [43] | 4608 144:48:36:144 | 36 | 735–2070 (1230–2175) | |||||||
GeForce RTX 4080 Laptop [40] [48] [49] | Feb 8, 2023 | AD104 (GN21-X9) | 35.8 | 294.5 [43] | 7424 232:80:58:232 | 58 | 48 | 795–1860 (1350–2280) | 18 | 12 | 432 | 192 | 60–150 W |
GeForce RTX 4090 Laptop [40] [50] [51] | AD103 (GN21-X11) | 45.9 | 378.6 [43] | 9728 304:112:76:304 | 76 | 64 | 930–1620 (1455–2040) | 16 | 576 | 256 | 80–150 W |
When the 12GB variant of the RTX 4080 was announced, numerous publications, prominent YouTubers, reviewers, and the community criticized Nvidia for marketing it as an RTX 4080, given that it did not offer the relative performance expected of an XX80-class card within its series, and given the large gap in specifications and performance compared to the 16GB variant. [52] [53] [54] [55] The criticism focused on Nvidia's naming scheme: two 4080 models were to be offered, an "RTX 4080 12GB" and an "RTX 4080 16GB", and the obvious interpretation was that the only difference between the two products would be VRAM capacity. However, unlike earlier Nvidia GPUs with differing memory configurations within the same class, which had typically been very close in processing performance, the RTX 4080 12GB used a completely different processor and memory architecture: it was based on the AD104 chip, which features about 21% fewer CUDA cores than the 16GB variant's AD103 chip, a reduced TDP (285W, where the 16GB variant was configured to draw up to 320W), and a cut-down 192-bit memory bus. In prior generations (e.g., the GeForce 10, 20 and 30 series), a 192-bit bus width had been reserved for XX60-class cards and below. [52] These changes made the card up to 30% slower than the RTX 4080 16GB in memory-agnostic workloads, while it was priced significantly higher than previous XX70-class cards ($899 vs. $499 for the RTX 3070, approximately 80% more expensive).
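The size of the gap can be checked from the figures above (core counts from the specification table; the RTX 3070's $499 launch MSRP is assumed here for the comparison):

```python
# CUDA-core deficit of the RTX 4080 12GB (AD104) versus the RTX 4080 16GB (AD103).
cores_ad103 = 9728  # RTX 4080 16GB
cores_ad104 = 7680  # RTX 4080 12GB
deficit = (cores_ad103 - cores_ad104) / cores_ad103
print(f"{deficit:.0%} fewer CUDA cores")  # 21% fewer CUDA cores

# Price increase over the previous XX70-class card.
msrp_4080_12gb = 899
msrp_3070 = 499  # RTX 3070 launch MSRP (assumption, not stated in this section)
increase = msrp_4080_12gb / msrp_3070 - 1
print(f"{increase:.0%} more expensive")  # 80% more expensive
```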
On October 14, 2022, Nvidia announced that due to the confusion caused by the naming scheme, it would be "unlaunching"—i.e. postponing the launch of—the RTX 4080 12GB, with the RTX 4080 16GB's launch remaining unaffected. [31] [30]
On January 3, 2023, Nvidia reintroduced the RTX 4080 12GB as the RTX 4070 Ti during CES 2023 and reduced its list price by $100. [40] The 4070 Ti's production was stopped with the introduction of the 4070 Ti SUPER. [12]
Some buyers of the Nvidia RTX 4090, the first GPU to use the new 12VHPWR connector, reported that the connectors on their cards were melting, [56] which sparked several theories as to the cause. After investigation, several sources reported that the main cause was the 12VHPWR connector not being fully seated: under load, the partially seated connector's pins overheated, which in turn melted the plastic housing. [57] [58]
PCI-SIG, the standards organization responsible for the creation of the 12VHPWR connector, has decided to make changes to the connector's specifications following the failures. [59]
A class-action lawsuit has been filed against Nvidia over melting 12VHPWR cables which the lawsuit states is "a dangerous product that should not have been sold in its current state." [60] The plaintiff who brought the suit claims that Nvidia unjustly enriched itself, violated the product's warranty and engaged in fraud and they are demanding that Nvidia pay damages to affected customers as compensation. [61]
Following its own investigation and testing, Nvidia officially offered a statement on the melting connectors. It determined that the melting was caused by user error: the 12VHPWR connector not being inserted properly, resulting in partial contact. Nvidia offered an expedited RMA process for any RTX 4090 affected by the melting connectors. [62] [63] [64] PCI-SIG later said in a statement that Nvidia and its partners were still responsible for testing their products to account for user error. [65]
Despite these claims of user error, a revised connector design intended to address these issues was introduced under the new name 12V-2x6. [66]
In February 2024, the U.S. Consumer Product Safety Commission announced a voluntary recall of 12VHPWR adapters made by CableMod. According to the recall filing, 272 incident reports were filed, with approximately 25,300 units shipped. The recall covers adapters using both the initial and the revised 12V-2x6 (CEM 5.1) design. [67]
It was also reported that the new connectors have a limited lifespan of around 30–40 mating cycles, after which the contacts can become unreliable. [68]
It has been noted that the older 6- and 8-pin connectors had a substantially larger manufacturer-specified current-carrying capacity relative to the power limits specified by PCI-SIG: [69] [70]
Connector | 8-pin power | 12VHPWR / 12V-2x6 |
---|---|---|
Rated current per pin | 7–8 A [71] [72] [i] | 9.5 A [73] |
Rated power [ii] | 8 A × 12 V × 3 = 288 W | 9.5 A × 12 V × 6 = 684 W |
Specified power [iii] | 150 W | 600 W |
Safety factor | 1.9 | 1.1 |
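The safety-factor row is simply the ratio of the connector's aggregate rated power to the power budget PCI-SIG assigns to it. A quick check using the pin ratings tabled above:

```python
# Safety factor = (rated current per pin * 12 V * number of live pins) / specified power.
def safety_factor(amps_per_pin: float, live_pins: int, specified_w: float) -> float:
    rated_w = amps_per_pin * 12 * live_pins
    return rated_w / specified_w

print(round(safety_factor(8.0, 3, 150), 1))   # 8-pin: 288 W rated vs 150 W spec -> 1.9
print(round(safety_factor(9.5, 6, 600), 1))   # 12VHPWR: 684 W rated vs 600 W spec -> 1.1
```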
The United States Department of Commerce began restricting exports of the Nvidia RTX 4090 to certain countries in 2023. [74] [75] [76] The restrictions mainly targeted China, as an attempt to slow its AI development. [77] [78] As a result, the market price of the RTX 4090 rose by 25% as China began to stockpile 4090s. [79] This led Nvidia to release a China-specific model, the RTX 4090D (the D standing for "Dragon"). [80] The RTX 4090D features a cut-down AD102 die with 14592 CUDA cores, down from the 16384 cores of the original 4090. [81]
The restrictions took effect on November 17, 2023, effectively banning the RTX 4090 from China and the other countries on the export-restricted list. [74]
Upon release, the RTX 4090's performance received praise from reviewers, with Tom's Hardware saying "it now ranks among the best graphics cards". [82] A review by PC Gamer gave it 83/100, calling it "a hell of an introduction to the sort of extreme performance Ada can deliver when given a long leash". [83] Tom Warren, in a review for The Verge, said that the RTX 4090 is "a beast of a graphics card that marks a new era for PC gaming". [84] John Loeffler of GamesRadar wrote that the RTX 4090 offers "an incredible gen-on-gen performance improvement over the RTX 3090", "even putting the Nvidia GeForce RTX 3090 Ti to shame" while being "too powerful for a lot of gamers to really take advantage of". [85] German online magazine golem.de described the RTX 4090 as the first GPU capable of native 4K gaming with ray tracing, noting that with its predecessor "one often preferred to forego ray tracing in favor of native resolution, especially in 4K." [86] Beyond gaming performance, PCWorld's review highlighted the RTX 4090's benefits for content creators and streamers, with its 24GB of VRAM and AV1 encoding ability, and praised the Founders Edition's quiet operation and cooling. [87] According to Joanna Nelius of USA Today, "The RTX 4090 drives such high frame rates that it would be a waste to have anything other than a 4K monitor, especially one with a refresh rate under 144Hz." [88]
However, it also received criticism for its value proposition, given its $1,599 starting price. [84] Analysis by TechSpot found that the RTX 4090's value at 1440p was worse than the RTX 3090 Ti's, and that the RTX 4090 did not make much sense for 1440p as it was limited by CPU bottlenecks. [89] Power consumption was another point of criticism: the RTX 4090 has a 450W TDP, compared with 350W for its last-generation equivalent. [89] However, Jarred Walton of Tom's Hardware noted that the RTX 4090 has the same 450W TDP as the RTX 3090 Ti while delivering much higher performance within that power budget. [82]
Aftermarket AIB RTX 4090 variants received criticism in particular for their massive cooler sizes with some models being 4 PCIe slots tall. This made it difficult to fit an RTX 4090 into many mainstream PC cases. [90] [91] YouTube reviewer JayzTwoCents showed an Asus ROG Strix RTX 4090 model being comparable in size to an entire PlayStation 5 console. [92] Another comparison showed Nvidia's RTX 4090 Founders Edition next to the Xbox Series X for size. [93] It was theorized that a reason for the massive coolers on some models is that the RTX 4090 was originally designed to have a TDP up to 600W before it was reduced to its official 450W TDP. [94]
The RTX 4080 received more mixed reviews than the RTX 4090. TechSpot gave the RTX 4080 a score of 80/100. [95] Antony Leather of Forbes found that the RTX 4080 consistently performed better than the RTX 3090 Ti. [96] The GPU's power efficiency was positively received, with Digital Trends finding an average power draw of 271W despite its rated 320W TDP. [97]
The RTX 4080's $1,199 price received criticism for its dramatic increase over that of the RTX 3080. In his review for Rock Paper Shotgun, James Archer wrote that the RTX 4080 "produces sizeable gains on the RTX 3080, though they're not exactly proportional to the price rise". [98] Rock Paper Shotgun also highlighted that AIB models can significantly exceed the $1,199 Founders Edition base price, creating further value concerns: the Asus ROG Strix model it reviewed came in at $1,550, only $50 less than the RTX 4090 Founders Edition. [98] Tom Warren of The Verge recommended waiting to see what AMD could deliver in performance and value with its RDNA 3 GPUs. [99] AMD's direct competitor, the Radeon RX 7900 XTX, launched at $999, compared with $1,199 for the RTX 4080.
The RTX 4080 received criticism for reusing the RTX 4090's massive 4-slot coolers, which are not required to cool the RTX 4080's 320W TDP; a smaller cooler would have been sufficient. [100] [101] The RTX 3080 and RTX 3080 Ti, with their respective 320W and 350W TDPs, maintained 2-slot coolers, while the 320W RTX 4080 has a 3-slot cooler on the Founders Edition and 4-slot coolers on many AIB models. [99]
It was reported that RTX 4080 sales were quite weak compared to those of the RTX 4090, which, despite its price, had sold out at launch a month earlier. [102] The global cost-of-living crisis and the RTX 4080's generational price increase have been suggested as major contributing factors in the poor sales numbers. [103]
Following its relaunch as the RTX 4070 Ti, the GPU received mixed reviews. Its thermal performance and its 1080p and 1440p gaming performance were praised, but its 4K performance was considered weak for the high $800 price tag.
Jacob Roach of Digital Trends gave the RTX 4070 Ti a 6/10, criticizing its lower memory bandwidth compared to the RTX 4080, its weaker 4K performance, and the lack of a Founders Edition model, with fears that board partner cards would sell for over the $800 price tag, although he praised the thermal performance and efficiency of his unit. [104]
Michael Justin Allen Sexton of PCMag reviewed the ZOTAC RTX 4070 Ti Amp Extreme Airo model, giving it a 3.5/5 and criticizing the card's large size and its price being only $100 below that of AMD's direct competitor, the Radeon RX 7900 XT, which comes in at $899 versus the RTX 4070 Ti's $799. [105]
It was reported that RTX 4070 Ti sales were quite weak, similar to the RTX 4080 at launch. [106]
At launch, Jacob Roach of Digital Trends gave the RTX 4070 a 4.5/5, praising its efficiency, the fact that the Founders Edition takes up only two slots, and its good performance at 1440p and 4K. [107] However, following the launch of the AMD Radeon RX 7800 XT, AMD's direct competitor to the RTX 4070, Monica J. White of Digital Trends compared the two cards and said she would recommend the 7800 XT over the RTX 4070 because of its better rasterization performance at 1440p, its 16GB of VRAM compared to the RTX 4070's 12GB, and its $100 lower price. [108]
After the launch of AMD's Radeon RX 7800 XT, the direct competitor to the RTX 4070, on September 6, 2023, the price of some RTX 4070 cards was lowered to $550; the official MSRP was lowered to the same figure following the launch of the SUPER series. [109] [110]
The RTX 4060 Ti 8GB was released in late May 2023. Many reviewers and customers denounced Nvidia for the card's inadequate video memory, pointing out that the previous-generation RTX 3060 was released with 12GB of VRAM. Many reviews also argued that an 8GB card was simply not enough by modern standards, with Digital Trends, for example, saying: "It makes sense for the upcoming RTX 4060, which is launching in July with 8GB of VRAM for $300, but not for the $400 RTX 4060 Ti. At this price and performance, it should have 16GB of memory, or at the very least, a larger bus size." [111] A few weeks later, prompted by the consumer backlash, Nvidia announced it would release the RTX 4060 Ti 16GB in July 2023.
Despite that, the release of the 16GB variant itself was met with suspicion. Nvidia did not send out samples for review, including to big names such as Gamers Nexus, Hardware Unboxed, and Linus Tech Tips, and it later emerged that board partners were also prevented, presumably by Nvidia, from supplying cards for review. According to GamesRadar: "Coming in at $499, the Nvidia RTX 4060 Ti 16GB is arguably a hard sell. Normally, we'd test whether the GPU offers bang for buck, but it looks like reviews aren't going to be a thing. In a way, that sort of makes sense, as it should perform fairly similarly to the existing 8GB model, with the added benefit of more VRAM." [112]
YouTube reviewers Hardware Unboxed criticized both models of the RTX 4060 Ti, calling the 8GB model "laughably bad at $400" and saying "frankly, we don't even need to test the RTX 4060 Ti 8GB to tell you it's a disgustingly poor value product...". They criticized its 8GB of VRAM, which led to poor 1% low frame rates at 1080p and unusable 1% lows at 1440p in games such as The Last of Us Part I and The Callisto Protocol, both using the ultra quality preset. A Plague Tale: Requiem was likewise unusable at 1080p with ray tracing enabled, not only on the ultra quality preset but also on the high quality preset, suffering similarly unusable 1% lows. While Halo Infinite and Forspoken did not have poor 1% low performance at 1080p, they instead sometimes failed to load textures on the 8GB RTX 4060 Ti due to the lack of VRAM. [113] None of this was an issue with the 16GB RTX 4060 Ti; however, that card was criticized for costing $500 to "fix" the VRAM issues, $100 more than the 8GB model for otherwise identical performance outside of VRAM limitations. [114]
Jacob Roach from Digital Trends gave the RTX 4060 a 3/5, criticizing the poor performance of the GPU for the price, its low 8GB of VRAM capacity, the slow memory interface, and its performance at 1440p, although its efficiency was praised. He stated: "In the past, the RTX 4060 would have been classified as a workhorse GPU... The RTX 4060 falls short of filling that post, offering solid generational improvements but middling competitive performance, and relying on DLSS 3 and ray tracing to find its value." [115]