Release date | January 27, 2003
---|---
Codename | NV30, NV31, NV34, NV35, NV36, NV38
Architecture | Rankine
Models | GeForce FX series
Cards |
Entry-level | FX 5100, FX 5200, FX 5200 LE, FX 5300, FX 5500
Mid-range | FX 5600, FX 5700, PCX 5750
High-end | FX 5800, FX 5900, PCX 5950
Enthusiast | FX 5800 Ultra, FX 5900 Ultra, FX 5950 Ultra
API support |
DirectX | Direct3D 9.0a, Shader Model 2.0a
OpenGL | OpenGL 2.1
History |
Predecessor | GeForce 4 series
Successor | GeForce 6 series
Support status | Unsupported
The GeForce FX or "GeForce 5" series (codenamed NV30) is a line of graphics processing units from the manufacturer Nvidia.
Nvidia's GeForce FX series is the fifth generation of the GeForce line. With the GeForce 3, the company introduced programmable shader functionality into its 3D architecture, in line with the release of Microsoft's DirectX 8.0. The GeForce 4 Ti was an enhancement of the GeForce 3 technology. With real-time 3D graphics technology continually advancing, the release of DirectX 9.0 brought further refinement of programmable pipeline technology with the arrival of Shader Model 2.0. The GeForce FX series is Nvidia's first generation of Direct3D 9-compliant hardware.
The series was manufactured on TSMC's 130 nm fabrication process. [1] It is compliant with Shader Model 2.0/2.0A, allowing more flexibility in complex shader/fragment programs and much higher arithmetic precision. It supports a number of new memory technologies, including DDR2, GDDR2 and GDDR3 and saw Nvidia's first implementation of a memory data bus wider than 128 bits. [2] The anisotropic filtering implementation has potentially higher quality than previous Nvidia designs. [1] Anti-aliasing methods have been enhanced and additional modes are available compared to GeForce 4. [1] Memory bandwidth and fill-rate optimization mechanisms have been improved. [1] Some members of the series offer double fill-rate in z-buffer/stencil-only passes. [2]
The series also brought improvements to the company's video processing hardware, in the form of the Video Processing Engine (VPE), which was first deployed in the GeForce 4 MX. [3] The primary addition, compared to previous Nvidia GPUs, was per-pixel video-deinterlacing. [3]
The initial version of the GeForce FX (the 5800) was one of the first cards to come equipped with a large dual-slot cooler. Called "Flow FX", the cooler was very large in comparison to ATI's small, single-slot cooler on the 9700 series. [4] It was jokingly referred to as the "Dustbuster", due to a high level of fan noise. [5]
The advertising campaign for the GeForce FX featured Dawn, a technology demo that was the work of several veterans from the computer-animated film Final Fantasy: The Spirits Within. [6] Nvidia touted it as "The Dawn of Cinematic Computing". [7]
Nvidia debuted a new campaign to motivate developers to optimize their titles for Nvidia hardware at the Game Developers Conference (GDC) in 2002. In exchange for prominently displaying the Nvidia logo on the outside of the game packaging, the company offered free access to a state-of-the-art test lab in Eastern Europe that tested against 500 different PC configurations for compatibility. Developers also had extensive access to Nvidia engineers, who helped produce code optimized for the company's products. [8]
Hardware based on the NV30 project didn't launch until near the end of 2002, several months after ATI had released their competing DirectX 9 architecture. [9]
GeForce FX is an architecture designed with DirectX 7, 8 and 9 software in mind. With the mainstream versions of the chips, its DirectX 7 and 8 performance was generally equal to that of ATI's competing products, and somewhat faster in the case of the 5900 and 5950 models, but it is much less competitive across the entire range in software that primarily uses DirectX 9 features. [10]
Its weak performance in processing Shader Model 2 programs is caused by several factors. The NV3x design has less overall parallelism and calculation throughput than its competitors. [11] Compared to the GeForce 6 and the ATI Radeon R300 series, it is more difficult to achieve high efficiency with the architecture, owing to architectural weaknesses and a resulting heavy reliance on optimized pixel shader code. While the architecture was compliant overall with the DirectX 9 specification, it was optimized for 16-bit floating-point shader precision, below the 24-bit minimum that the standard requires; when full 32-bit precision is used, the architecture's performance is severely hampered. [11] Proper instruction ordering and instruction composition of shader code is critical for making the most of the available computational resources. [11]
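The precision gap can be made concrete with a small sketch. The `quantize` helper below is purely illustrative (it is not part of any graphics API): it rounds a value to a given number of significant mantissa bits, mimicking FP16 (11 significant bits, the NV3x fast path), FP24 (17 significant bits, the Direct3D 9 full-precision minimum, as used by the R300), and FP32 (24 significant bits).

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to the given number of significant bits, mimicking
    reduced-precision floating-point shader arithmetic."""
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

x = 0.123456789
print(quantize(x, 11))   # FP16: 0.12347412109375 -- off in the 5th decimal place
print(quantize(x, 17))   # FP24: off only around the 7th decimal place
print(quantize(x, 24))   # FP32: off beyond the 8th decimal place
```

Multi-pass arithmetic and texture-coordinate math compound such rounding errors, which is why running shaders at 16-bit precision could produce visible artifacts even while being faster on NV3x hardware.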
Nvidia's initial release, the GeForce FX 5800, was intended as a high-end part. At the time, there were no GeForce FX products for the other segments of the market. The GeForce 4 MX continued in its role as the budget video card and the older GeForce 4 Ti cards filled in the mid-range.
In April 2003, the company introduced the GeForce FX 5600 and the GeForce FX 5200 to address the other market segments. Each had an "Ultra" variant and a slower, budget-oriented variant and all used conventional single-slot cooling solutions. The 5600 Ultra had respectable performance overall but it was slower than the Radeon 9600 Pro and sometimes slower than the GeForce 4 Ti series. [12] The FX 5200 did not perform as well as the DirectX 7.0 generation GeForce 4 MX440 or Radeon 9000 Pro in some benchmarks. [13]
In May 2003, Nvidia launched the GeForce FX 5900 Ultra, a new high-end product to replace the low-volume and disappointing FX 5800. Based upon a revised GPU called NV35, which fixed some of the DirectX 9 shortcomings of the discontinued NV30, this product was more competitive with the Radeon 9700 and 9800. [14] In addition to redesigning parts of the GPU, the company moved to a 256-bit memory data bus, allowing for significantly higher memory bandwidth than the 5800 even when utilizing more common DDR SDRAM instead of DDR2. [14] The 5900 Ultra performed somewhat better than the Radeon 9800 Pro in games not heavily using shader model 2, and had a quieter cooling system than the 5800. [14]
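The bandwidth gain from the wider bus follows from simple arithmetic: peak memory bandwidth is the memory clock, times the number of transfers per clock (two for DDR-class memory), times the bus width in bytes. A minimal sketch (the function name is my own, not from any source):

```python
def memory_bandwidth_gbs(memory_clock_mhz: float, bus_width_bits: int,
                         transfers_per_clock: int = 2) -> float:
    """Peak memory bandwidth in GB/s for DDR-class memory.

    Effective data rate = clock * transfers per clock; each transfer
    moves bus_width_bits / 8 bytes."""
    return memory_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# GeForce FX 5900: 425 MHz DDR on a 256-bit bus
print(memory_bandwidth_gbs(425, 256))   # 27.2 GB/s
# GeForce FX 5800 Ultra: 500 MHz GDDR2 on a 128-bit bus
print(memory_bandwidth_gbs(500, 128))   # 16.0 GB/s
```

Doubling the bus width from 128 to 256 bits doubles bandwidth at a given memory clock, which is how the 5900 exceeded the 5800's bandwidth while using cheaper DDR SDRAM.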
In October 2003, Nvidia released the GeForce FX 5700 and GeForce FX 5950. The 5700 was a mid-range card using the NV36 GPU with technology from NV35 while the 5950 was a high-end card again using the NV35 GPU but with additional clock speed. The 5950 also featured a redesigned version of the 5800's FlowFX cooler, this time using a larger, slower fan and running much quieter as a result. The 5700 provided strong competition for the Radeon 9600 XT in games limited to light use of shader model 2. [15] The 5950 was competitive with the Radeon 9800 XT, again as long as pixel shaders were lightly used. [16]
In December 2003, the company launched the GeForce FX 5900 XT, a graphics card intended for the mid-range segment. It was similar to the 5900 Ultra, but clocked lower and equipped with slower memory. It competed more closely with the Radeon 9600 XT, but still trailed in a few shader-intensive scenarios. [17]
The GeForce FX line moved to PCI Express in early 2004 with a number of models, including the PCX 5300, PCX 5750, PCX 5900, and PCX 5950. These cards were largely the same as their AGP predecessors with similar model numbers. To operate on the PCIe bus, an AGP-to-PCIe "HSI bridge" chip on the video card converted the PCIe signals into AGP signals for the GPU. [18]
Also in 2004, the GeForce FX 5200 / 5300 series that utilized the NV34 GPU received a new member with the FX 5500. [19]
Model | Launch | Code name | Fab | Transistors (million) | Die size (mm²) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config [lower-alpha 1] | Fillrate | | | | Memory | | | | Processing power (GFLOPS) | TDP (Watts)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | | | | | | | | | | MOperations/s | MPixels/s | MTexels/s | MVertices/s | Size (MB) | Bandwidth (GB/s) | Bus type | Bus width (bit) | |
GeForce FX 5100 | March 2003 | NV34 | TSMC 150 nm | 45 [21] | 124 | AGP 8x | 200 | 166 | 4:2:4:4 | 800 | 800 | 800 | 100.0 | 64 128 | 2.6 | DDR | 64 | 12.0 | ? |
GeForce FX 5200 LE | 250 | 1,000 | 1,000 | 1,000 | 125.0 | 64 128 256 | 2.6 5.3 | 64 128 | 15.0 | ? | |||||||||
GeForce FX 5200 | AGP 8x PCI | 200 | 3.2 6.4 | 64 128 | 21 | ||||||||||||||
GeForce FX 5200 Ultra | March 6, 2003 | AGP 8x | 325 | 325 | 1,300 | 1,300 | 1,300 | 162.5 | 10.4 | 128 | 19.5 | 32 | |||||||
GeForce PCX 5300 | March 17, 2004 | PCIe x16 | 250 | 166 | 1,000 | 1,000 | 1,000 | 125.0 | 128 256 | 2.6 | 64 | 15.0 | 21 | ||||||
GeForce FX 5500 | March 2004 | NV34B | 45 [22] | 91 | AGP 8x AGP 4x PCI | 270 | 166 200 | 1,080 | 1,080 | 1,080 | 135.0 | 64 128 256 | 5.3 6.4 | 128 | 16.2 | ? | |||
GeForce FX 5600 XT | October 2003 | NV31 | TSMC 130 nm | 80 [23] | 121 | AGP 8x | 235 | 200 | 940 | 940 | 940 | 117.5 | 64 128 | 3.2 6.4 | 64 128 | 14.1 | ? | ||
GeForce FX 5600 | March 2003 | AGP 8x PCI | 325 | 275 | 1,300 | 1,300 | 1,300 | 162.5 | 64 128 256 [24] | 8.8 | 128 | 19.5 | 25 | ||||||
GeForce FX 5600 Ultra | March 6, 2003 | AGP 8x | 350 | 350 | 1,400 | 1,400 | 1,400 | 175.0 | 64 128 | 11.2 | 21.0 | 27 | |||||||
GeForce FX 5600 Ultra Rev.2 | 400 | 400 | 1,600 | 1,600 | 1,600 | 200.0 | 12.8 | 24.0 | 31 | ||||||||||
GeForce FX 5700 VE | September 2004 | NV36 | 82 [25] | 133 | 250 | 200 | 4:3:4:4 | 1,000 | 1,000 | 1,000 | 187.5 | 128 256 | 3.2 6.4 | 64 128 | 17.5 | 20 |||
GeForce FX 5700 LE | March 2004 | AGP 8x PCI | 21 | ||||||||||||||||
GeForce FX 5700 | 2003 | AGP 8x | 425 | 250 | 1,700 | 1,700 | 1,700 | 318.7 | 8.0 | 128 | 29.7 | 20 | |||||||
GeForce PCX 5750 | March 17, 2004 | PCIe x16 | 128 | 25 | |||||||||||||||
GeForce FX 5700 Ultra | October 23, 2003 | AGP 8x | 475 | 453 | 1,900 | 1,900 | 1,900 | 356.2 | 128 256 | 14.4 | GDDR2 | 33.2 | 43 | ||||||
GeForce FX 5700 Ultra GDDR3 | March 15, 2004 | 475 | 15.2 | GDDR3 | 38 | ||||||||||||||
GeForce FX 5800 | January 27, 2003 | NV30 | 125 [26] | 199 | 400 | 400 | 4:2:8:4 | 1,600 | 1,600 | 3,200 | 300.0 | 128 | 12.8 | GDDR2 | 24.0 | 55 | |||
GeForce FX 5800 Ultra | 500 | 500 | 2,000 | 2,000 | 4,000 | 375.0 | 16.0 | 30.0 | 66 | ||||||||||
GeForce FX 5900 ZT | December 15, 2003 | NV35 | 135 [27] | 207 | 325 | 350 | 4:3:8:4 | 1,300 | 1,300 | 2,600 | 243.7 | 22.4 | DDR | 256 | 22.7 | ? | |||
GeForce FX 5900 XT | December 15, 2003 [28] | 390 | 1,600 | 1,600 | 3,200 | 300.0 | 27.3 | 48 | |||||||||||
GeForce FX 5900 | May 2003 | 400 | 425 | 27.2 | 28.0 | 55 | |||||||||||||
GeForce FX 5900 Ultra | May 12, 2003 | 450 | 1,800 | 1,800 | 3,600 | 337.5 | 128 256 | 31.5 | 65 | ||||||||||
GeForce PCX 5900 | March 17, 2004 | PCIe x16 | 350 | 275 | 1,400 | 1,400 | 2,800 | 262.5 | 17.6 | 24.5 | 49 | ||||||||
GeForce FX 5950 Ultra | October 23, 2003 | NV38 | 135 [29] | 207 | AGP 8x | 475 | 475 | 1,900 | 1,900 | 3,800 | 356.2 | 256 | 30.4 | 33.2 | 83 | ||||
GeForce PCX 5950 | February 17, 2004 | PCIe x16 | 425 | 27.2 | GDDR3 | 83 | |||||||||||||
The GeForce FX Go 5 series brought the architecture to notebooks.
Model | Launch | Code name | Fab (nm) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config | Fillrate | | Memory | | | | Supported API version | | | TDP (Watts)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | | | | | | | | Pixel (GP/s) | Texture (GT/s) | Size (MB) | Bandwidth (GB/s) | Bus type | Bus width (bit) | DirectX | OpenGL (hardware) | OpenGL (drivers/software) |
GeForce FX Go 5100* | March 2003 | NV34M | 150 | AGP 8x | 200 | 400 | 4:2:4:4 | 0.8 | 0.8 | 64 | 3.2 | DDR | 64 | 9.0 | 1.5 | 2.1** | Unknown |
GeForce FX Go 5500* | 300 | 600 | 1.2 | 1.2 | 32 64 | 9.6 | 128 | Unknown | |||||||||
GeForce FX Go 5600* | NV31M | 130 | 350 | 1.4 | 1.4 | 32 | Unknown | ||||||||||
GeForce FX Go 5650* | 350 | Unknown | |||||||||||||||
GeForce FX Go 5700* | February 1, 2005 | NV36M | 450 | 550 | 4:3:4:4 | 1.8 | 1.8 | 8.8 | Unknown |
Nvidia has ceased driver support for the GeForce FX series.
Windows 95/98/Me Driver Archive
Windows XP/2000 Driver Archive
The GeForce 2 series (NV15) is the second generation of Nvidia's GeForce line of graphics processing units (GPUs). Introduced in 2000, it is the successor to the GeForce 256.
The GeForce 3 series (NV20) is the third generation of Nvidia's GeForce line of graphics processing units (GPUs). Introduced in February 2001, it advanced the GeForce architecture by adding programmable pixel and vertex shaders, multisample anti-aliasing and improved the overall efficiency of the rendering process.
The GeForce 4 series refers to the fourth generation of Nvidia's GeForce line of graphics processing units (GPUs). There are two different GeForce4 families, the high-performance Ti family, and the budget MX family. The MX family spawned a mostly identical GeForce4 Go (NV17M) family for the laptop market. All three families were announced in early 2002; members within each family were differentiated by core and memory clock speeds. In late 2002, there was an attempt to form a fourth family, also for the laptop market, the only member of it being the GeForce4 4200 Go (NV28M) which was derived from the Ti line.
The GeForce 6 series is the sixth generation of Nvidia's GeForce line of graphics processing units. Launched on April 14, 2004, the GeForce 6 family introduced PureVideo post-processing for video, SLI technology, and Shader Model 3.0 support.
Core Image is a pixel-accurate, near-realtime, non-destructive image processing technology in Mac OS X. Implemented as part of the QuartzCore framework of Mac OS X 10.4 and later, Core Image provides a plugin-based architecture for applying filters and effects within the Quartz graphics rendering layer. The framework was later added to iOS in iOS 5.
The R200 is the second generation of GPUs used in Radeon graphics cards and developed by ATI Technologies. This GPU features 3D acceleration based upon Microsoft Direct3D 8.1 and OpenGL 1.3, a major improvement in features and performance compared to the preceding Radeon R100 design. The GPU also includes 2D GUI acceleration, video acceleration, and multiple display outputs. "R200" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products.
The GeForce 7 series is the seventh generation of Nvidia's GeForce line of graphics processing units. This was the last series available on AGP cards.
AMD FirePro was AMD's brand of graphics cards designed for use in workstations and servers running professional Computer-aided design (CAD), Computer-generated imagery (CGI), Digital content creation (DCC), and High-performance computing/GPGPU applications. The GPU chips on FirePro-branded graphics cards are identical to the ones used on Radeon-branded graphics cards. The end products differentiate substantially by the provided graphics device drivers and through the available professional support for the software. The product line is split into two categories: "W" workstation series focusing on workstation and primarily focusing on graphics and display, and "S" server series focused on virtualization and GPGPU/High-performance computing.
The R520 is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process.
The R300 GPU, introduced in August 2002 and developed by ATI Technologies, is its third generation of GPU used in Radeon graphics cards. This GPU features 3D acceleration based upon Direct3D 9.0 and OpenGL 2.0, a major improvement in features and performance compared to the preceding R200 design. R300 was the first fully Direct3D 9-capable consumer graphics chip. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs.
The Radeon R100 is the first generation of Radeon graphics chips from ATI Technologies. The line features 3D acceleration based upon Direct3D 7.0 and OpenGL 1.3, and all but the entry-level versions offloading host geometry calculations to a hardware transform and lighting (T&L) engine, a major improvement in features and performance compared to the preceding Rage design. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs. "R100" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products.
The Evergreen series is a family of GPUs developed by Advanced Micro Devices for its Radeon line under the ATI brand name. It was employed in Radeon HD 5000 graphics card series and competed directly with Nvidia's GeForce 400 series.
The Northern Islands series is a family of GPUs developed by Advanced Micro Devices (AMD) forming part of its Radeon-brand, based on the 40 nm process. Some models are based on TeraScale 2 (VLIW5), some on the new TeraScale 3 (VLIW4) introduced with them.
Radeon X800 is a series of graphics cards designed by ATI Technologies Inc. introduced in May 2004.
TeraScale is the codename for a family of graphics processing unit microarchitectures developed by ATI Technologies/AMD and their second microarchitecture implementing the unified shader model following Xenos. TeraScale replaced the old fixed-pipeline microarchitectures and competed directly with Nvidia's first unified shader microarchitecture named Tesla.
The Radeon RX 5000 series is a series of graphics processors developed by AMD, based on their RDNA architecture. The series is targeting the mainstream mid to high-end segment and is the successor to the Radeon RX Vega series. The launch occurred on July 7, 2019. It is manufactured using TSMC's 7 nm FinFET semiconductor fabrication process.
Rankine is the codename for a GPU microarchitecture developed by Nvidia, and released in 2003, as the successor to Kelvin microarchitecture. It was named with reference to Macquorn Rankine and used with the GeForce FX series.