Release date | September 20, 2018
---|---
Discontinued | November 28, 2022[1]
Manufactured by | TSMC
Designed by | Nvidia
Marketed by | Nvidia
Codename | TU10x
Architecture | Turing
Models | GeForce RTX series
Transistors | 10.8–18.6 billion (TU10x)
Fabrication process | TSMC 12 nm (FinFET)
Cards |
Entry-level | None (covered by the GeForce 16 series)
Mid-range |
High-end |
Enthusiast |
API support |
DirectX | Direct3D 12.0 (feature level 12_2), Shader Model 6.8
OpenCL | OpenCL 3.0 [5] [a]
OpenGL | OpenGL 4.6 [6]
Vulkan | Vulkan 1.3 [7]
History |
Predecessor | GeForce 10 series
Variant | GeForce 16 series
Successor | GeForce 30 series
Support status |
Supported
The GeForce 20 series is a family of graphics processing units developed by Nvidia. [8] Serving as the successor to the GeForce 10 series, [9] the line started shipping on September 20, 2018; [10] after several further editions, the GeForce RTX Super line of cards was announced on July 2, 2019. [11]
The 20 series marked the introduction of Nvidia's Turing microarchitecture, and the first generation of RTX cards, [12] the first in the industry to implement hardware-enabled real-time ray tracing in a consumer product. [13] In a departure from Nvidia's usual strategy, the 20 series has no entry-level range, leaving it to the 16 series to cover this segment of the market. [14]
These cards are succeeded by the GeForce 30 series, powered by the Ampere microarchitecture, which first launched in 2020. [15]
On August 14, 2018, Nvidia teased the announcement of the first card in the 20 series, the GeForce RTX 2080, shortly after introducing the Turing architecture at SIGGRAPH earlier that year. [12] The GeForce 20 series was finally announced at Gamescom on August 20, 2018, [8] becoming the first line of graphics cards "designed to handle real-time ray tracing" thanks to the "inclusion of dedicated tensor and RT cores." [13]
In August 2018, it was reported that Nvidia had trademarked GeForce RTX and Quadro RTX as names. [16]
The line started shipping on September 20, 2018, [10] succeeding the GeForce 10 series [9] and marking the introduction of Nvidia's Turing microarchitecture and the first generation of RTX cards, the first in the industry to implement real-time hardware ray tracing in a consumer product. [17]
Released in late 2018, the RTX 2080 was marketed as up to 75% faster than the GTX 1080 in various games; [18] PC Gamer described the chip as "the most significant generational upgrade to its GPUs since the first CUDA cores in 2006." [19]
After the initial release, factory-overclocked versions followed in late 2018. [20] The first was the "Ti" edition, [21] while the Founders Edition cards were overclocked by default and carried a three-year warranty. [18] When the GeForce RTX 2080 Ti came out, TechRadar called it "the world’s most powerful GPU on the market." [22] The GeForce RTX 2080 Founders Edition was reviewed positively for performance by PC Gamer on September 19, 2018, [23] but was criticized for its high cost to consumers, [23] [24] with the review also noting that its ray-tracing feature was not yet utilized by many programs or games. [23] In January 2019, Tom's Hardware likewise stated that the GeForce RTX 2080 Ti Xtreme was "the fastest gaming graphics card available," although it criticized the loudness of the cooling solution as well as the card's size and heat output inside PC cases. [25] In August 2018, the company claimed that the GeForce RTX graphics cards were the "world’s first graphics cards to feature super-fast GDDR6 memory, a new DisplayPort 1.4 output that can drive up to 8K HDR at 60Hz on future-generation monitors with just a single cable, and a USB Type-C output for next-generation Virtual Reality headsets." [26]
In October 2018, PC Gamer reported the supply of the 2080 Ti card was "extremely tight" after availability had already been delayed. [27] By November 2018, MSI was offering nine different RTX 2080-based graphics cards. [28] Released in December 2018, the line's Titan RTX was initially priced at $2500, significantly more than the $1300 then needed for a GeForce RTX 2080 Ti. [29]
In January 2019, Nvidia announced that GeForce RTX graphics cards would be used in 40 new laptops from various companies. [30] Also that month, in response to negative reactions to the pricing of the GeForce RTX cards, Nvidia CEO Jensen Huang stated "They were right. [We] were anxious to get RTX in the mainstream market... We just weren’t ready. Now we’re ready, and it’s called 2060," in reference to the RTX 2060. [31] In May 2019, a TechSpot review noted that the newly released Radeon VII by AMD was comparable in speeds to the GeForce RTX 2080, if slightly slower in games, with both priced similarly and framed as direct competitors. [32]
On July 2, 2019, the GeForce RTX Super line of cards was announced, comprising higher-spec versions of the 2060, 2070, and 2080. Each of the Super models was offered at a similar price to the older models but with improved specs. [11] In July 2019, Nvidia stated that the forthcoming "SUPER" graphics cards in the GeForce RTX 20 series had a 15% performance advantage over the GeForce RTX 2060. [33] PC World called the Super editions a "modest" upgrade for the price, and the 2080 Super chip the "second most-powerful GPU ever released" in terms of speed. [34] In November 2019, PC Gamer wrote that "even without an overclock, the 2080 Ti is the best graphics card for gaming." [35] In June 2020, PC Mag listed the Nvidia GeForce RTX 2070 Super as one of the "best [8] graphics cards for 4k gaming in 2020"; the GeForce RTX 2080 Founders Edition, Super, and Ti were also listed. [36] Also in June 2020, graphics cards including the RTX 2060, RTX 2060 Super, RTX 2070, and RTX 2080 Super were discounted by retailers in anticipation of the GeForce RTX 3080 launch. [37] In April 2020, Nvidia announced 100 new laptops licensed to include either GeForce GTX or RTX models. [38]
Due to production problems surrounding the RTX 30 series cards, the global semiconductor shortage caused by the ongoing COVID-19 pandemic, and rising demand for graphics cards driven by an increase in cryptocurrency mining, the RTX 2060 and its Super counterpart, alongside the GTX 1050 Ti, [39] were brought back into production in 2021. [40] [41]
Furthermore, the RTX 2060 was reissued on December 7, 2021, as a variant with 12 GB of VRAM. [42] [43] However, availability of the card at launch was scarce. [44] [45]
The RTX 20 series is based on the Turing microarchitecture and features real-time hardware ray tracing. [46] The cards are manufactured on an optimized 16 nm node from TSMC, named 12 nm FinFET NVIDIA (FFN). [47] New features in Turing include mesh shaders, [48] ray-tracing (RT) cores (for bounding volume hierarchy acceleration), [49] tensor (AI) cores, [13] and dedicated integer (INT) cores for concurrent execution of integer and floating-point operations. [50] In the GeForce 20 series, this real-time ray tracing is accelerated by the new RT cores, which are designed to process quadtrees and spherical hierarchies and to speed up collision tests with individual triangles.[citation needed]
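As a rough illustration of what the RT cores accelerate, the sketch below walks a bounding volume hierarchy (BVH) in plain C++ on the CPU. It is a simplified, hypothetical example: the `AabbNode` layout, the function names, and the recursive traversal are assumptions for illustration, not Nvidia's hardware design. A ray is first tested against axis-aligned bounding boxes, and only boxes that are hit have their children, and eventually their triangles, examined.

```cpp
// Simplified, hypothetical sketch of the BVH traversal that RT cores accelerate.
// Node layout and names are illustrative, not Nvidia's implementation.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Ray {
    Vec3 origin;
    Vec3 invDir;   // 1 / direction per axis, precomputed for the slab test
};

struct AabbNode {
    Vec3 boundsMin, boundsMax;
    int  left  = -1;             // child indices; -1 means this node is a leaf
    int  right = -1;
    std::vector<int> triangles;  // triangle indices stored in leaves
};

// Classic "slab" ray/box test: intersect the ray with each pair of
// axis-aligned planes and check that the resulting intervals overlap.
bool rayHitsBox(const Ray& r, const AabbNode& n) {
    float t0 = (n.boundsMin.x - r.origin.x) * r.invDir.x;
    float t1 = (n.boundsMax.x - r.origin.x) * r.invDir.x;
    float tmin = std::min(t0, t1), tmax = std::max(t0, t1);

    t0 = (n.boundsMin.y - r.origin.y) * r.invDir.y;
    t1 = (n.boundsMax.y - r.origin.y) * r.invDir.y;
    tmin = std::max(tmin, std::min(t0, t1));
    tmax = std::min(tmax, std::max(t0, t1));

    t0 = (n.boundsMin.z - r.origin.z) * r.invDir.z;
    t1 = (n.boundsMax.z - r.origin.z) * r.invDir.z;
    tmin = std::max(tmin, std::min(t0, t1));
    tmax = std::min(tmax, std::max(t0, t1));

    return tmax >= std::max(tmin, 0.0f);
}

// Recursive traversal: only subtrees whose bounding boxes are hit are visited,
// so most of the scene's triangles are never tested against the ray at all.
void traverse(const std::vector<AabbNode>& bvh, int nodeIndex, const Ray& ray,
              std::vector<int>& candidateTriangles) {
    const AabbNode& node = bvh[nodeIndex];
    if (!rayHitsBox(ray, node))
        return;
    if (node.left < 0) {  // leaf: hand its triangles to the exact ray/triangle test
        candidateTriangles.insert(candidateTriangles.end(),
                                  node.triangles.begin(), node.triangles.end());
        return;
    }
    traverse(bvh, node.left,  ray, candidateTriangles);
    traverse(bvh, node.right, ray, candidateTriangles);
}
```

On the GeForce 20 series, the RT cores perform these box and triangle tests in hardware rather than in shader code, which is what makes the per-frame ray budgets described below practical.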
The ray tracing performed by the RT cores can be used to produce effects such as reflections, refractions, shadows, depth of field, light scattering, and caustics, replacing traditional raster techniques such as cube maps and depth maps.[citation needed] Rather than replacing rasterization entirely, however, ray tracing is offered in a hybrid model, in which the information gathered from ray tracing can be used to augment the rasterized shading for more photo-realistic results.[citation needed]
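A hybrid frame of this kind can be sketched as a small render loop in which ray-traced passes are layered onto a conventional raster pass. The example below is a conceptual stub, not any engine's real API; the types and function names (`rasterizeGBuffer`, `traceReflections`, and so on) are hypothetical placeholders.

```cpp
// Hypothetical frame loop for a hybrid raster + ray-tracing renderer.
// All types and functions are illustrative stubs, not a real engine API.
struct Scene   {};
struct GBuffer {};
struct Image   {};

// Stubs standing in for the real passes.
GBuffer rasterizeGBuffer(const Scene&)                  { return {}; }  // raster pass
Image   shadeDirect(const GBuffer&)                     { return {}; }  // rasterized shading
Image   traceReflections(const Scene&, const GBuffer&)  { return {}; }  // would use RT cores
Image   traceShadows(const Scene&, const GBuffer&)      { return {}; }  // would use RT cores
Image   denoise(const Image& img)                       { return img; } // tensor-core denoiser
Image   composite(const Image& base, const Image&, const Image&) { return base; }

Image renderFrame(const Scene& scene) {
    // 1. Rasterize the geometry as usual; this remains the cheap, high-throughput path.
    GBuffer gbuf = rasterizeGBuffer(scene);
    Image base = shadeDirect(gbuf);

    // 2. Spend a limited ray budget only where it adds realism
    //    (reflections, shadows) instead of path-tracing the whole frame.
    Image reflections = denoise(traceReflections(scene, gbuf));
    Image shadows     = denoise(traceShadows(scene, gbuf));

    // 3. Blend the ray-traced results into the rasterized image.
    return composite(base, reflections, shadows);
}

int main() {
    Scene scene;
    Image frame = renderFrame(scene);
    (void)frame;
}
```

This mirrors the hybrid model described above: rasterization still produces the bulk of the frame, and ray tracing refines selected effects.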
The second-generation Tensor Cores (succeeding Volta's) work in cooperation with the RT cores, and their AI features are used mainly to two ends: firstly, denoising a partially ray-traced image by filling in the blanks between rays cast; secondly, DLSS (deep learning super sampling), a new method to replace anti-aliasing by artificially generating detail to upscale the rendered image to a higher resolution. [51] The Tensor Cores apply deep learning models (for example, an image resolution enhancement model) that are constructed on supercomputers. The problem to be solved is analyzed on the supercomputer, which is taught by example what results are desired; the supercomputer then outputs a model that is executed on the consumer's Tensor Cores. These models are delivered to consumers as part of the cards' drivers.[citation needed]
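The division of labour described here, training offline and inference on the consumer GPU, can be sketched as follows. This is a conceptual illustration only: the `UpscalingModel` stub stands in for a network trained elsewhere and shipped with the driver, and none of the names correspond to the actual DLSS implementation or API.

```cpp
// Conceptual sketch of the inference side of DLSS-style upscaling.
// The "model" is a stub standing in for a network trained offline and
// shipped with the driver; this is not the real DLSS pipeline or API.
#include <cstddef>
#include <vector>

struct Frame {
    std::size_t width = 0, height = 0;
    std::vector<float> pixels;  // grayscale for simplicity
};

// Stand-in for the pretrained model executed on the Tensor Cores:
// given a cheap upscale, it predicts a learned correction (residual).
struct UpscalingModel {
    Frame predictResidual(const Frame& coarse) const {
        Frame residual = coarse;
        for (float& p : residual.pixels) p = 0.0f;  // stub: no learned detail
        return residual;
    }
};

// Nearest-neighbour upscale as the cheap baseline the model refines.
Frame naiveUpscale(const Frame& lowRes, std::size_t factor) {
    Frame hi;
    hi.width = lowRes.width * factor;
    hi.height = lowRes.height * factor;
    hi.pixels.resize(hi.width * hi.height);
    for (std::size_t y = 0; y < hi.height; ++y)
        for (std::size_t x = 0; x < hi.width; ++x)
            hi.pixels[y * hi.width + x] =
                lowRes.pixels[(y / factor) * lowRes.width + (x / factor)];
    return hi;
}

// Render at low resolution, then let the learned model add back detail.
Frame upscaleFrame(const Frame& lowRes, const UpscalingModel& model, std::size_t factor) {
    Frame hi = naiveUpscale(lowRes, factor);
    Frame residual = model.predictResidual(hi);
    for (std::size_t i = 0; i < hi.pixels.size(); ++i)
        hi.pixels[i] += residual.pixels[i];  // add the predicted detail
    return hi;
}
```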
Nvidia segregates the Turing GPU dies into A and non-A variants, denoted by an "A" suffix that is either appended to or omitted from the hundreds part of the GPU code name. Non-A variants are not allowed to be factory overclocked, whilst A variants are. [52]
The GeForce 20 series was launched with GDDR6 memory chips from Micron Technology. However, due to reported faults with launch models, Nvidia switched to using GDDR6 memory chips from Samsung Electronics by November 2018. [53]
With the GeForce 20 series, Nvidia introduced the RTX development platform. RTX uses Microsoft's DXR, Nvidia's OptiX, and Vulkan for access to ray tracing. [54] The ray tracing technology used in the RTX Turing GPUs was in development at Nvidia for 10 years. [55] Nvidia's Nsight Visual Studio Edition application is used to inspect the state of the GPUs. [56]
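Applications typically probe for this ray-tracing support at run time before using it. The snippet below shows one way to do so through Microsoft's DXR (Direct3D 12), one of the APIs the RTX platform exposes; it is a minimal sketch that assumes Windows, the D3D12 headers, and an `ID3D12Device` created elsewhere.

```cpp
// Minimal sketch: query whether a Direct3D 12 device exposes DXR ray tracing.
// Assumes Windows, the D3D12 headers, and an ID3D12Device* created elsewhere.
#include <windows.h>
#include <d3d12.h>

bool supportsRayTracing(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    // Turing-based GeForce 20 cards report at least D3D12_RAYTRACING_TIER_1_0 here.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

The RTX platform offers analogous entry points through OptiX and Vulkan, as noted above.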
All of the cards in the series use a PCIe 3.0 x16 interface to connect to the CPU, are manufactured on TSMC's 12 nm FinFET process, and use GDDR6 memory (initially Micron modules at launch, then Samsung modules from November 2018). [53]
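The memory bandwidth figures in the table below follow directly from the listed data rate and bus width (bandwidth in GB/s ≈ data rate in MT/s × bus width in bits ÷ 8 ÷ 1000). A small worked example using the RTX 2080 Ti's figures from the table:

```cpp
// Worked example: GDDR6 bandwidth = data rate (MT/s) * bus width (bits) / 8 bits per byte.
#include <cstdio>

int main() {
    const double dataRateMTps = 14000.0;  // 14 Gbit/s per pin, as listed for the RTX 2080 Ti
    const double busWidthBits = 352.0;    // 352-bit memory interface
    const double bandwidthGBps = dataRateMTps * busWidthBits / 8.0 / 1000.0;
    std::printf("%.0f GB/s\n", bandwidthGBps);  // prints "616 GB/s", matching the table
}
```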
Model | Launch date | Launch MSRP (USD) | Code name(s) [57] | Transistors (billion) | Die size (mm2) | Core config [b] | SM count [c] | L2 cache | Clock speeds [d] | Fillrate [e] [f] | Memory | Processing power (TFLOPS) | Ray tracing performance | TDP | NVLink support | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Size | Bandwidth (GB/s) | Bus width | Half precision (boost) | Single precision (boost) | Double precision (boost) | Rays/s (billions) | RTX-OPS (trillions) | Tensor TFLOPS | |||||||||||
GeForce RTX 2060 [58] [59] | Jan 15, 2019 | $349 | TU106-200A | 10.8 | 445 | 1920 120:48:30:240 | 30 | 3 MB | 1365 (1680) | 14000 | 65.52 | 163.8 | 6 GB | 336 | 192-bit | 10.483 (12.902) | 5.242 (6.451) | 0.164 (0.202) | 5 | 37 | 51.6 | 160 W | No |
Jan 10, 2020 | $300 | TU104-150 | 13.6 | 545 | |||||||||||||||||||
GeForce RTX 2060 (12 GB) [60] | Dec 7, 2021 | ? | TU106-300 | 10.8 | 445 | 2176 136:48:34:272 | 34 | 1470 (1650) | 79.2 | 224.4 | 12 GB | 12.246 (14.362) | 6.123 (7.181) | 0.191 (0.224) | 6 | 41 | 57.4 | 185 W | |||||
GeForce RTX 2060 Super [61] [62] | Jul 9, 2019 | $399 | TU106-410 | 2176 136:64:34:272 | 4 MB | 94.05 | 199.9 | 8 GB | 448 | 256-bit | 175 W | ||||||||||||
GeForce RTX 2070 [63] | Oct 17, 2018 | $499 | TU106-400 | 2304 144:64:36:288 | 36 | 1410 (1620) | 90.24 | 203.04 | 12.994 (14.930) | 6.497 (7.465) | 0.203 (0.233) | 45 | 59.7 | ||||||||||
$599 | TU106-400A | ||||||||||||||||||||||
GeForce RTX 2070 Super [61] [62] | Jul 9, 2019 | $499 | TU104-410 | 13.6 | 545 | 2560 160:64:40:320 | 40 | 1605 (1770) | 102.72 | 256.8 | 16.435 (18.125) | 8.218 (9.062) | 0.257 (0.283) | 7 | 52 | 72.5 | 215 W | 2-way | |||||
GeForce RTX 2080 [64] | Sep 20, 2018 | $699 | TU104-400 | 2944 184:64:46:368 | 46 | 1515 (1710) | 96.96 | 278.76 | 17.840 (20.137) | 8.920 (10.068) | 0.279 (0.315) | 8 | 60 | 80.5 | |||||||||
$799 | TU104-400A | ||||||||||||||||||||||
GeForce RTX 2080 Super [61] [62] | Jul 23, 2019 | $699 | TU104-450 | 3072 192:64:48:384 | 48 | 1650 (1815) | 15500 | 105.6 | 316.8 | 496 | 20.275 (22.303) | 10.138 (11.151) | 0.317 (0.349) | 63 | 89.2 | 250 W | |||||||
GeForce RTX 2080 Ti [65] | Sep 27, 2018 | $999 | TU102-300 | 18.6 | 754 | 4352 272:88:68:544 | 68 | 5.5 MB | 1350 (1545) | 14000 | 118.8 | 367.2 | 11 GB | 616 | 352-bit | 23.500 (26.896) | 11.750 (13.448) | 0.367 (0.421) | 10 | 78 | 107.6 | ||
$1199 | TU102-300A | ||||||||||||||||||||||
Nvidia Titan RTX [66] | Dec 18, 2018 | $2499 | TU102-400 | 4608 288:96:72:576 | 72 | 6 MB | 1350 (1770) | 129.6 | 388.8 | 24 GB | 672 | 384-bit | 24.884 (32.625) | 12.442 (16.312) | 0.389 (0.510) | 11 | 84 | 130.5 | 280 W |
Model | Launch | Code name(s) | Transistors (billion) | Die size (mm2) | Core config [b] | SM count [c] | L2 cache | Clock speeds [d] | Fillrate [e] [f] | Memory | Processing power (TFLOPS) | Ray tracing performance | TDP | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Size | Bandwidth (GB/s) | Bus width | Half precision (boost) | Single precision (boost) | Double precision (boost) | Rays/s (billions) | RTX-OPS (trillions) | |||||||||
GeForce RTX 2050 [67] [68] [69] [70] | Dec 17, 2021 | GA107 (GN20-S7) | 8.7 | 200 | 2048 64:32:32:256 | 16 | 2 MB | 1155 (1477) | 14000 | 47.26 | 94.53 | 4 GB | 112.0 | 64-bit | (12.10) | (6.050) | (0.189) | ? | ? | 30–45 W |
GeForce RTX 2060 Max-Q [67] [68] [71] | Jan 29, 2019 | TU106 (N18E-G1) | 10.8 | 445 | 1920 120:48:30:240 | 30 | 3 MB | 975 (1175) | 11000 | 56.88 | 142.2 | 6 GB | 264.0 | 192-bit | (9.101) | (4.550) | (0.142) | 65 W | ||
GeForce RTX 2060 [67] [68] [72] | 960 (1200) | 14000 | 57.60 | 144.0 | 336.0 | (9.216) | (4.608) | (0.144) | 3.5 | 26 | 80–90 W | |||||||||
Apr 2, 2020 [73] | TU106 (N18E-G1-B) | 115 W | ||||||||||||||||||
GeForce RTX 2070 Max-Q [67] [68] [74] | Jan 29, 2019 | TU106 (N18E-G2) | 2304 144:64:36:288 | 36 | 4 MB | 885 (1185) | 12000 | 75.84 | 170.6 | 8 GB | 384.0 | 256-bit | (10.92) | (5.460) | (0.171) | 4 | 31 | 80 W | ||
GeForce RTX 2070 [67] [68] [75] | 1215 (1440) | 14000 | 92.16 | 207.4 | 448.0 | (13.27) | (6.636) | (0.207) | 5 | 38 | 115 W | |||||||||
Apr 2, 2020 [73] | TU106 (N18E-G1R) | 1305 (1485) | ||||||||||||||||||
GeForce RTX 2070 Super Max-Q [67] [68] [76] | Apr 2, 2020 | TU104 (N18E-G2R) | 13.6 | 545 | 2560 160:64:40:320 | 40 | 930 (1155) | 12000 | 69.1 | 172.8 | 352.0 | (11.06) | (5.530) | (0.173) | 4 | 34 | 80 W | |||
GeForce RTX 2070 Super [67] [68] [77] | 1140 (1380) | 14000 | 88.3 | 220.8 | 448.0 | (14.13) | (7.066) | (0.221) | 5 | 40 | 115 W | |||||||||
GeForce RTX 2080 Max-Q [67] [68] [78] | Jan 29, 2019 | TU104 (N18E-G3) | 2944 184:64:46:368 | 46 | 735 (1095) | 12000 | 70.08 | 201.5 | 384.0 | (12.89) | (6.447) | (0.202) | 5 | 37 | 80 W | |||||
GeForce RTX 2080 [67] [68] [79] | 1380 (1590) | 14000 | 101.8 | 292.6 | 448.0 | (18.72) | (9.362) | (0.293) | 7 | 53 | 150+ W | |||||||||
GeForce RTX 2080 Super Max-Q [67] [68] [80] | Apr 2, 2020 | TU104 (N18E-G3R) | 3072 192:64:48:384 | 48 | 735 (1080) | 11000 | 69.1 | 207.4 | 352.0 | (13.27) | (6.636) | (0.207) | 5 | 38 | 80 W | |||||
GeForce RTX 2080 Super [67] [68] [81] | 1365 (1560) | 14000 | 99.8 | 299.5 | 448.0 | (19.17) | (9.585) | (0.300) | 7 | 55 | 150+ W |