Nvidia GRID

This GRID K1 GPU provides VDI for four seats using four independent GK107 GPUs with 4 GB of graphics memory each.

Nvidia GRID is a family of graphics processing units (GPUs) made by Nvidia, introduced in 2008, that is targeted specifically towards cloud gaming.[1] Nvidia GRID combines graphics processing and video encoding in a single device, which can reduce the input-to-display latency of cloud-based video game streaming.[2] Nvidia offers its own game streaming service built on GRID that supports full 1080p at 60 frames per second over the Internet.[3]
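
As a rough illustration of why pairing the GPU with an on-board video encoder matters, the sketch below works out the raw data rate of uncompressed 1080p60 video, compares it with an assumed encoded bitrate of 15 Mbit/s (an example value, not a published GRID figure), and prints the per-frame time budget that capture, encoding, transmission and decoding must fit into.

  // stream_budget.cu -- back-of-the-envelope numbers for 1080p60 game streaming.
  // Build with: nvcc stream_budget.cu -o stream_budget (plain C++, so g++ also works)
  #include <cstdio>

  int main() {
      const double width = 1920.0, height = 1080.0, fps = 60.0;
      const double bytes_per_pixel = 3.0;   // assumes 24-bit RGB frames before encoding

      // Data rate if raw frames had to cross the network with no encoder on the card.
      const double raw_bits_per_s = width * height * bytes_per_pixel * 8.0 * fps;

      // Assumed H.264 bitrate for a 1080p60 stream (illustrative only).
      const double encoded_bits_per_s = 15e6;

      printf("raw 1080p60       : %.2f Gbit/s\n", raw_bits_per_s / 1e9);
      printf("encoded (assumed) : %.0f Mbit/s\n", encoded_bits_per_s / 1e6);
      printf("compression       : about %.0fx\n", raw_bits_per_s / encoded_bits_per_s);
      printf("frame budget      : %.1f ms per frame at 60 fps\n", 1000.0 / fps);
      return 0;
  }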

While many of Nvidia's cards are known for gaming, GPU-accelerated business applications have also been growing. The Nvidia GRID K1 and K2 have been integrated with Supermicro server clusters for use with 3D-intensive applications such as graphics and computer-aided design (CAD).[4] In 2015, Microsoft began including Nvidia GRID as part of its Azure enterprise cloud platform, targeted towards professionals such as engineers, designers and researchers.[5]

Specifications [6][7][8]

                        GRID K1          GRID K2
Microarchitecture       Kepler           Kepler
Number of GPUs          4× GK107         2× GK104
Number of CUDA cores    4× 192           2× 1536
Memory size             4× 4 GB DDR3     2× 4 GB GDDR5
Max power               130 W            225 W
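
Because each board appears to the host as several independent GPUs (four GK107s on the K1, two GK104s on the K2), a standard CUDA device query lists them separately. The sketch below uses the CUDA runtime API to print what such a host might report; the runtime does not report CUDA cores directly, so the Kepler figure of 192 cores per SMX is supplied by hand as an assumption.

  // devquery.cu -- list the GPUs a GRID board exposes to the host.
  // Build with: nvcc devquery.cu -o devquery
  #include <cstdio>
  #include <cuda_runtime.h>

  int main() {
      int count = 0;
      cudaGetDeviceCount(&count);           // a GRID K1 should show up as 4 devices
      printf("CUDA devices found: %d\n", count);

      const int kepler_cores_per_sm = 192;  // Kepler SMX width, assumed for GK107/GK104
      for (int i = 0; i < count; ++i) {
          cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, i);
          printf("GPU %d: %s, %.1f GB memory, %d SM(s), ~%d CUDA cores\n",
                 i, prop.name,
                 prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                 prop.multiProcessorCount,
                 prop.multiProcessorCount * kepler_cores_per_sm);
      }
      return 0;
  }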


Related Research Articles

Graphics processing unit: Specialized electronic circuit; graphics accelerator

A graphics processing unit (GPU) is a specialized electronic circuit initially designed to accelerate computer graphics and image processing. After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.

General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.

Free and open-source graphics device driver: Software that controls computer-graphics hardware

A free and open-source graphics device driver is a software stack which controls computer-graphics hardware and supports graphics-rendering application programming interfaces (APIs) and is released under a free and open-source software license. Graphics device drivers are written for specific hardware to work within a specific operating system kernel and to support a range of APIs used by applications to access the graphics hardware. They may also control output to the display if the display driver is part of the graphics hardware. Most free and open-source graphics device drivers are developed by the Mesa project. The driver is made up of a compiler, a rendering API, and software which manages access to the graphics hardware.

Edge computing: Distributed computing paradigm

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.

CUDA: Parallel computing platform and programming model

CUDA is a proprietary and closed-source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.
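
As a minimal illustration of the compute-kernel model described above, the sketch below is the standard introductory CUDA example: a kernel that adds two vectors, launched with enough thread blocks to cover every element. It is generic CUDA code rather than anything specific to GRID.

  // vector_add.cu -- minimal CUDA compute kernel (standard introductory example).
  // Build with: nvcc vector_add.cu -o vector_add
  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  __global__ void vector_add(const float *a, const float *b, float *c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
      if (i < n) c[i] = a[i] + b[i];
  }

  int main() {
      const int n = 1 << 20;
      const size_t bytes = n * sizeof(float);

      float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

      float *da, *db, *dc;
      cudaMalloc((void **)&da, bytes);
      cudaMalloc((void **)&db, bytes);
      cudaMalloc((void **)&dc, bytes);
      cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

      const int threads = 256;
      const int blocks = (n + threads - 1) / threads;  // round up so every element is covered
      vector_add<<<blocks, threads>>>(da, db, dc, n);
      cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

      printf("c[0] = %f (expected 3.0)\n", hc[0]);

      cudaFree(da); cudaFree(db); cudaFree(dc);
      free(ha); free(hb); free(hc);
      return 0;
  }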

In computing, the term remote desktop refers to a software or operating-system feature that allows a personal computer's desktop environment to be run on one system while being displayed on a separate client device. Remote desktop applications have varying features: some allow attaching to an existing user's session and "remote controlling" it, either displaying the remote control session or blanking the screen. Taking over a desktop remotely is a form of remote administration.

Tegra: System on a chip by Nvidia

Tegra is a system on a chip (SoC) series developed by Nvidia for mobile devices such as smartphones, personal digital assistants, and mobile Internet devices. The Tegra integrates an ARM architecture central processing unit (CPU), graphics processing unit (GPU), northbridge, southbridge, and memory controller onto one package. Early Tegra SoCs were designed as efficient multimedia processors. The line later emphasized performance for gaming and machine-learning applications without sacrificing power efficiency, before shifting towards vehicle-automation platforms under the "Nvidia Drive" brand and towards boards for AI applications in robots, drones and other high-level automation under the "Nvidia Jetson" brand.

Video Decode and Presentation API for Unix (VDPAU) is a royalty-free application programming interface (API), as well as its implementation as a free and open-source library distributed under the MIT License. VDPAU is also supported by Nvidia.

Direct2D is a 2D vector graphics application programming interface (API) designed by Microsoft and implemented in Windows 10, Windows 8, Windows 7 and Windows Server 2008 R2, and also Windows Vista and Windows Server 2008.

Cloud gaming, sometimes called gaming on demand or game streaming, is a type of online gaming that runs video games on remote servers and streams the game's output directly to a user's device, or more colloquially, playing a game remotely from a cloud. It contrasts with traditional means of gaming, wherein a game is run locally on a user's video game console, personal computer, or mobile device.

Nvidia Tesla: Nvidia's line of general-purpose GPUs

Nvidia Tesla was the name of Nvidia's line of products targeted at stream processing or general-purpose computing on graphics processing units (GPGPU), named after pioneering electrical engineer Nikola Tesla. The line began with GPUs based on the G80 series and has continued to accompany the release of new chips. Tesla products are programmable using the CUDA or OpenCL APIs.

GPU virtualization refers to technologies that allow the use of a GPU to accelerate graphics or GPGPU applications running on a virtual machine. GPU virtualization is used in various applications such as desktop virtualization, cloud gaming and computational science.

Computation offloading is the transfer of resource intensive computational tasks to a separate processor, such as a hardware accelerator, or an external platform, such as a cluster, grid, or a cloud. Offloading to a coprocessor can be used to accelerate applications including: image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome hardware limitations of a device, such as limited computational power, storage, and energy.

Multidimensional Digital Signal Processing (MDSP) refers to the extension of Digital signal processing (DSP) techniques to signals that vary in more than one dimension. While conventional DSP typically deals with one-dimensional data, such as time-varying audio signals, MDSP involves processing signals in two or more dimensions. Many of the principles from one-dimensional DSP, such as Fourier transforms and filter design, have analogous counterparts in multidimensional signal processing.

An AI accelerator, deep learning processor, or neural processing unit is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFET transistors.

Nvidia RTX: Development platform for rendering graphics

Nvidia RTX is a professional visual computing platform created by Nvidia, primarily used in workstations for designing complex large-scale models in architecture and product design, scientific visualization, energy exploration, and film and video production, as well as being used in mainstream PCs for gaming.

Turing (microarchitecture): GPU microarchitecture by Nvidia

Turing is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is named after the prominent mathematician and computer scientist Alan Turing. The architecture was first introduced in August 2018 at SIGGRAPH 2018 in the workstation-oriented Quadro RTX cards, and one week later at Gamescom in consumer GeForce RTX 20 series graphics cards. Building on the preliminary work of its HPC-exclusive predecessor, the Turing architecture introduces the first consumer products capable of real-time ray tracing, a longstanding goal of the computer graphics industry. Key elements include dedicated artificial intelligence processors and dedicated ray tracing processors. Turing leverages DXR, OptiX, and Vulkan for access to ray-tracing. In February 2019, Nvidia released the GeForce 16 series of GPUs, which utilizes the new Turing design but lacks the RT and Tensor cores.

Ampere Computing: American fabless semiconductor company

Ampere Computing LLC is an American fabless semiconductor company based in Santa Clara, California, that develops processors for servers operating in large-scale environments. Ampere also has offices in Portland, Oregon; Taipei, Taiwan; Raleigh, North Carolina; Bangalore, India; Warsaw, Poland; and Ho Chi Minh City, Vietnam.

Amazon Luna: Cloud gaming and streaming service

Amazon Luna is a cloud gaming platform developed and operated by Amazon. Available only in the United States, United Kingdom, Canada, Germany, France, Italy, and Spain, the platform is powered by Amazon Web Services, has integration with Twitch, and is available on Windows, Mac, Amazon Fire TV, iOS as well as Android. Luna offers access to a selection of games via the Luna+ subscription as well as to channels from brands such as Ubisoft+ and Jackbox Games.

References

  1. Hou, Qingdong; Qiu, Chu; Mu, Kaihui; Qi, Quan; Lu, Yongquan (2014). A Cloud Gaming System Based on NVIDIA GRID GPU. 2014 13th International Symposium on Distributed Computing and Applications to Business, Engineering and Science. pp. 73–77. doi:10.1109/DCABES.2014.19. ISBN 978-1-4799-4169-8.
  2. Shea, Ryan; Liu, Liu; Ngai, Edith; Cui, Yong (2013). "Cloud gaming: Architecture and performance". IEEE Network. 27 (4): 16–24. CiteSeerX 10.1.1.394.1568. doi:10.1109/MNET.2013.6574660. S2CID 7712263.
  3. Hardawar, Devindra (May 12, 2015). "NVIDIA's GRID cloud gaming service gets 1080p 60 FPS streaming". Engadget.
  4. "Supermicro server platforms use NVIDIA GRID technology". Internet Business News. 24 May 2013. ProQuest 1354964616.
  5. "NVIDIA GPUs to Accelerate Microsoft Azure" (Press release). NVIDIA. September 29, 2015. Retrieved July 26, 2020.
  6. "NVIDIA GRID K1". TechPowerUp. Retrieved 2023-03-04.
  7. "NVIDIA GRID K2". TechPowerUp. Retrieved 2023-03-04.
  8. "NVIDIA GRID K1 AND K2" (PDF). Nvidia. Retrieved 2023-03-04.