OptiX

Nvidia OptiX
Developer(s): Nvidia
Stable release: 8.0 / August 2023
Written in: C / C++
Operating system: Linux, OS X, Windows 7 and later
Type: Ray tracing
License: Proprietary software, free for commercial use
Website: Nvidia OptiX developer site

Nvidia OptiX (OptiX Application Acceleration Engine) is a ray tracing API that was first developed around 2009. [1] The computations are offloaded to the GPUs through either the low-level or the high-level API introduced with CUDA. CUDA is only available for Nvidia's graphics products. Nvidia OptiX is part of Nvidia GameWorks. OptiX is a high-level, or "to-the-algorithm" API, meaning that it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This is meant to allow the OptiX engine to execute the larger algorithm with great flexibility without application-side changes.


Commonly, video games use rasterization rather than ray tracing for their rendering.

According to Nvidia, OptiX is designed to be flexible enough for "procedural definitions and hybrid rendering approaches". Aside from computer graphics rendering, OptiX also helps in optical and acoustical design, radiation and electromagnetic research, [2] artificial intelligence queries and collision analysis. [3]

Ray tracing with OptiX

A Julia set drawn with NVIDIA OptiX (a sample from the SDK)

OptiX works by using user-supplied instructions (in the form of CUDA kernels) regarding what a ray should do in particular circumstances to simulate a complete tracing process. [4]

A light ray (or another kind of ray) may behave differently when hitting one surface rather than another; OptiX allows these hit conditions to be customized with user-provided programs. These programs are written in CUDA C or directly in PTX code and are linked together when used by the OptiX engine.
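For example, a miss program runs whenever a ray leaves the scene without hitting any geometry. The following device-side sketch is modeled on the samples shipped with the classic OptiX SDK; the names bg_color and PerRayData_radiance are illustrative choices made here, not fixed by the API:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

/* Per-ray data carried in the ray payload (layout chosen by the application) */
struct PerRayData_radiance
{
    float3 result;
};

rtDeclareVariable(float3, bg_color, , );                            /* set by the host application */
rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );

/* Miss program: store the background colour in the payload */
RT_PROGRAM void miss()
{
    prd_radiance.result = bg_color;
}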

In order to use OptiX, a CUDA-capable GPU must be available on the system and the CUDA toolkit must be installed.

Using the OptiX engine in a ray tracing application usually involves supplying a set of small user programs that the engine calls at fixed points of the tracing process: a ray generation program that shoots the initial rays (for example a pinhole camera), bounding box and intersection programs that define the geometry, closest hit and any hit programs that shade intersections, and a miss program that runs when a ray leaves the scene without hitting anything.

Several examples of these programs are available with the SDK. The host application loads them from PTX files and registers them with the engine:

/* Sample host code using the OptiX C API */

/* Ray generation program */
rtProgramCreateFromPTXFile(context, path_to_ptx, "pinhole_camera", &ray_gen_program);
rtContextSetRayGenerationProgram(context, 0, ray_gen_program);

/* Miss program */
rtProgramCreateFromPTXFile(context, path_to_ptx, "miss", &miss_program);
rtContextSetMissProgram(context, 0, miss_program);

/* Bounding box and intersection programs */
rtProgramCreateFromPTXFile(context, path_to_ptx, "box_bounds", &box_bounding_box_program);
rtGeometrySetBoundingBoxProgram(box, box_bounding_box_program);
rtProgramCreateFromPTXFile(context, path_to_ptx, "box_intersect", &box_intersection_program);
rtGeometrySetIntersectionProgram(box, box_intersection_program);

Bounding box programs are used to define the bounding volumes that accelerate the ray tracing process within acceleration structures such as k-d trees or bounding volume hierarchies.
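On the device side, a bounding box program writes the axis-aligned bounds of a primitive into a six-float array. A minimal sketch following the box sample from the SDK; boxmin and boxmax are assumed to be set by the host application:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>
#include <optixu/optixu_aabb_namespace.h>

using namespace optix;

rtDeclareVariable(float3, boxmin, , );   /* minimum corner, set by the host */
rtDeclareVariable(float3, boxmax, , );   /* maximum corner, set by the host */

/* Bounding box program: report the AABB of primitive 'primIdx' in result[6] */
RT_PROGRAM void box_bounds(int primIdx, float result[6])
{
    Aabb* aabb = (Aabb*)result;
    aabb->set(boxmin, boxmax);
}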

/* Sample host code using the OptiX C API */

/* Closest hit and any hit programs */
rtProgramCreateFromPTXFile(context, path_to_ptx, "closest_hit_radiance", &closest_hit_program);
rtProgramCreateFromPTXFile(context, path_to_ptx, "any_hit_shadow", &any_hit_program);

/* Associate the closest hit and any hit programs with a material */
rtMaterialCreate(context, &material);
rtMaterialSetClosestHitProgram(material, 0, closest_hit_program);   /* ray type 0: radiance rays */
rtMaterialSetAnyHitProgram(material, 1, any_hit_program);           /* ray type 1: shadow rays */
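The corresponding device-side closest hit program reads attributes produced by the intersection program and writes the shading result into the ray payload. A minimal sketch in the spirit of the SDK's normal shader; the attribute and payload names are illustrative:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData_radiance
{
    float3 result;
};

/* Attribute filled in by the intersection program */
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );
rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );

/* Closest hit program: colour the hit point with its world-space normal */
RT_PROGRAM void closest_hit_radiance()
{
    float3 n = normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, shading_normal));
    prd_radiance.result = n * 0.5f + make_float3(0.5f);
}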
A sample graph tree for Nvidia OptiX

In order to render a complex scene or trace different paths for any ray, OptiX takes advantage of GPGPU computing through the Nvidia CUDA platform. Since the process of shooting rays and defining their behavior is highly customizable, OptiX may be used in a variety of other applications aside from ray tracing.
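Once the node graph is assembled, the host application typically creates an output buffer, validates the context and launches the ray generation program over the image dimensions. A minimal host-side sketch, assuming an RTcontext named context, entry point 0, and variables width and height:

/* Output buffer that the ray generation program writes to */
rtBufferCreate(context, RT_BUFFER_OUTPUT, &output_buffer);
rtBufferSetFormat(output_buffer, RT_FORMAT_FLOAT4);
rtBufferSetSize2D(output_buffer, width, height);

/* Check the node graph for completeness, then launch one thread per pixel */
rtContextValidate(context);
rtContextLaunch2D(context, 0, width, height);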

OptiX Prime

Starting with OptiX 3.5.0, a second library called OptiX Prime was added to the bundle, aiming to provide a fast low-level API for ray tracing: building the acceleration structure, traversing it, and performing ray-triangle intersection. Prime also features a CPU fallback when no compatible GPU is found on the system. Unlike OptiX, Prime is not a programmable API, so it lacks support for custom, non-triangle primitives and for shading. Being non-programmable, OptiX Prime does not encapsulate the entire algorithm of which ray tracing is a part; thus, Prime cannot recompile the algorithm for new GPUs, refactor the computation for performance, or use a network appliance such as the Quadro VCA.
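A hypothetical outline of the Prime workflow using the rtp* C entry points bundled with the SDK; buffer-descriptor creation and error handling are omitted, and names ending in _desc are placeholders:

/* Create a Prime context on the GPU (RTP_CONTEXT_TYPE_CPU selects the CPU fallback) */
rtpContextCreate(RTP_CONTEXT_TYPE_CUDA, &context);

/* Describe the triangle mesh and build the acceleration structure */
rtpModelCreate(context, &model);
rtpModelSetTriangles(model, indices_desc, vertices_desc);
rtpModelUpdate(model, 0);

/* Trace a batch of rays against the model and collect closest hits */
rtpQueryCreate(model, RTP_QUERY_TYPE_CLOSEST, &query);
rtpQuerySetRays(query, rays_desc);
rtpQuerySetHits(query, hits_desc);
rtpQueryExecute(query, 0);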

Software using OptiX

OptiX is used by Blender's Cycles renderer for hardware-accelerated rendering on RTX GPUs [5] and by the D-NOISE add-on, which exposes Nvidia's OptiX AI denoiser inside Blender. [6] Adobe has showcased OptiX in a technology demo for ray tracing motion graphics [7] and used it in After Effects' ray-traced 3D renderer. [9] Daz Studio [10] and LuxCoreRender (since version 2.5) [11] also make use of OptiX.

See also


References

  1. "Scheduling in OptiX, the Nvidia ray tracing engine" (PDF). August 15, 2009.
  2. Felbecker, Robert; Raschkowski, Leszek; Keusgen, Wilhelm; Peter, Michael (2012). "Electromagnetic wave propagation in the millimeter wave band using the NVIDIA OptiX GPU ray tracing engine". 2012 6th European Conference on Antennas and Propagation (EUCAP). IEEE. pp. 488–492. doi:10.1109/EuCAP.2012.6206198. ISBN 978-1-4577-0920-3. S2CID 45563615.
  3. Steven G. Parker; Heiko Friedrich; David Luebke; Keith Morley; James Bigler; Jared Hoberock; David McAllister; Austin Robison; Andreas Dietrich; Greg Humphreys; Morgan McGuire; Martin Stich (2013). "GPU ray tracing". Communications of the ACM. ACM. 56 (5): 93–101. doi:10.1145/2447976.2447997. S2CID 17174671. Retrieved August 14, 2013.
  4. Steven G. Parker; Heiko Friedrich; David Luebke; Keith Morley; James Bigler; Jared Hoberock; David McAllister; Austin Robison; Andreas Dietrich; Greg Humphreys; Morgan McGuire; Martin Stich (2010). "OptiX: a general purpose ray tracing engine". ACM Transactions on Graphics. ACM. 29 (4): 66:1–66:13. doi:10.1145/1778765.1778803. Retrieved August 14, 2013.
  5. "Blender 2.81 Benchmarks On 19 NVIDIA Graphics Cards - RTX OptiX Rendering Performance Is Incredible". phoronix.com. 2019. Retrieved November 26, 2019.
  6. "D-NOISE: Rapid AI Denoising for Blender". Remington Creative. July 20, 2019. Retrieved December 14, 2019.
  7. "Adobe showcasing OptiX in a technology demo for ray tracing motion graphics with GPUs". NVIDIA. 2013. Archived from the original on December 20, 2021. Retrieved August 14, 2013.
  8. "Nvidia announces Gameworks Program at Montreal 2013; supports SteamOS". NVIDIA. 2013. Retrieved October 29, 2013.
  9. "GPU changes (for CUDA and OpenGL) in After Effects CC (12.1) | After Effects region of interest" . Retrieved February 22, 2015.
  10. "Daz Studio Changelog". DAZ 3D. Retrieved December 14, 2019.
  11. "New Features in v2.5 – LuxCoreRender".