OptiX Programming Guide

1.1. OptiX Overview

GPUs are best at exploiting very high degrees of parallelism, and ray tracing fits that requirement perfectly. However, typical ray tracing algorithms can be highly irregular, which poses serious challenges for anyone trying to exploit the full raw computational potential of a GPU. The NVIDIA OptiX ray tracing engine and API address those challenges and provide a framework for harnessing the enormous computational power of both current- and future-generation graphics hardware to incorporate ray tracing into interactive applications. By using OptiX together with NVIDIA’s CUDA™ architecture, interactive ray tracing is finally feasible for developers without a Ph.D. in computer graphics and a team of ray tracing engineers.

OptiX is not itself a renderer. Instead, it is a scalable framework for building ray tracing based applications. The OptiX engine is composed of two symbiotic parts: 1) a host-based API that defines data structures for ray tracing, and 2) a CUDA C++-based programming system that can produce new rays, intersect rays with surfaces, and respond to those intersections. Together, these two pieces provide low-level support for “raw ray tracing.” This enables user-written applications to use ray tracing for graphics, collision detection, sound propagation, visibility determination, and more.

1.1.1. Motivation

By abstracting the execution model of a generic ray tracer, OptiX makes it easier to assemble a ray tracing system, leveraging custom-built algorithms for object traversal, shader dispatch, and memory management. Furthermore, the resulting system can take advantage of future evolution in GPU hardware and OptiX SDK releases, much as OpenGL and Direct3D provide an abstraction for the rasterization pipeline.

Wherever possible, the OptiX engine avoids specification of ray tracing behaviors and instead provides mechanisms to execute user-provided CUDA C code to implement shading (including recursive rays), camera models, and even color representations. Consequently, the OptiX engine can be used for Whitted-style ray tracing, path tracing, collision detection, photon mapping, or any other ray tracing-based algorithm. It is designed to operate either standalone or in conjunction with an OpenGL or DirectX application for hybrid ray tracing-rasterization applications.

1.1.2. Programming model

At the core of OptiX is a simple but powerful abstract model of a ray tracer. This ray tracer employs user-provided programs to control the initiation of rays, intersection of rays with surfaces, shading with materials, and spawning of new rays. Rays carry user-specified payloads that describe per-ray variables such as color, recursion depth, importance, or other attributes. Developers provide these programs to OptiX as CUDA C-based functions. Because ray tracing is an inherently recursive algorithm, OptiX allows user programs to recursively spawn new rays, and the internal execution mechanism manages all the details of a recursion stack. OptiX also provides flexible dynamic function dispatch and a sophisticated variable inheritance mechanism so that ray tracing systems can be written very generically and compactly.
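As an illustrative sketch (plain C++, not the OptiX API; the names `Payload` and `trace` are hypothetical), the recursive structure of a user program carrying a per-ray payload looks roughly like this:

```cpp
#include <cassert>

// Hypothetical per-ray payload: OptiX lets the user define such a struct
// and attach it to each ray. Names here are illustrative, not OptiX API.
struct Payload {
    float color;   // accumulated shading result
    int   depth;   // current recursion depth
};

const int MAX_DEPTH = 5;

// Stand-in for a closest-hit program: shades the hit point and may
// recursively spawn a reflection ray, much as an OptiX user program would
// call rtTrace(). In OptiX, the engine manages the recursion stack.
void trace(Payload& prd) {
    if (prd.depth >= MAX_DEPTH) return;   // recursion cutoff
    prd.color += 0.5f;                    // "shade" this bounce
    Payload reflected{0.0f, prd.depth + 1};
    trace(reflected);                     // spawn a secondary ray
    prd.color += 0.5f * reflected.color;  // attenuate its contribution
}
```

Each bounce contributes a geometrically attenuated amount, so the accumulated color converges as the recursion depth cap is raised.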

1.2. Ray tracing basics

“Ray tracing” is an overloaded term whose meaning can depend on context. Sometimes it refers to the computation of the intersection points between a 3D line and a set of 3D objects such as spheres. Sometimes it refers to a specific algorithm, such as Whitted's method of generating pictures or the oil exploration industry's algorithm for simulating ground wave propagation. Other times it refers to a family of algorithms that includes Whitted's algorithm along with others such as distribution ray tracing. OptiX is a ray tracing engine in the first sense of the term: it allows the user to intersect rays and 3D objects. As such, it can be used to build programs that fit the other senses of “ray tracing,” such as Whitted's algorithm. In addition, OptiX lets users write their own programs to generate rays and to define behavior for when rays hit objects.
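To make the first sense concrete, here is a minimal ray-sphere intersection routine in plain C++ (the `Vec3` type and function names are illustrative, not part of the OptiX API):

```cpp
#include <cmath>
#include <cassert>

// Given a ray origin o, unit direction d, and a sphere (center, radius),
// find the smallest non-negative ray parameter t with |o + t*d - center| = radius.
// This is the kind of test a user-supplied intersection program performs.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Returns the nearest non-negative hit distance, or -1 if the ray misses.
float intersectSphere(Vec3 o, Vec3 d, Vec3 center, float radius) {
    Vec3 oc = sub(o, center);
    float b = dot(oc, d);                     // half the linear coefficient
    float c = dot(oc, oc) - radius * radius;  // constant coefficient
    float disc = b*b - c;                     // quadratic discriminant
    if (disc < 0.0f) return -1.0f;            // no real roots: miss
    float s = std::sqrt(disc);
    float t = -b - s;                         // nearer root first
    if (t < 0.0f) t = -b + s;                 // ray starts inside the sphere
    return (t >= 0.0f) ? t : -1.0f;
}
```

For example, a ray from (0, 0, -5) fired along +z at a unit sphere centered at the origin hits at t = 4.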

For graphics, ray tracing was originally proposed by Arthur Appel in 1968 for rendering solid objects. In 1980, Turner Whitted pursued the idea further by introducing recursion to enable reflective and refractive effects. Subsequent advances increased realism by adding support for depth of field, diffuse inter-reflection, soft shadows, motion blur, and other optical effects. Simultaneously, numerous researchers improved the performance of ray tracing with new algorithms for indexing the objects in the scene.

Realistic rendering algorithms based on ray tracing have been used to accurately simulate light transport. Some of these algorithms simulate the propagation of photons in a virtual environment. Others follow adjoint photons “backward” from a virtual camera to determine where they originated. Still other algorithms use bidirectional methods. OptiX operates at a level below such algorithmic decisions, so it can be used to build any of those algorithms.

Ray tracing has often been used for non-graphics applications. In the computer-aided design community, it has been used to estimate the volume of complex parts. This is accomplished by sending a regular grid of parallel rays at the part: the fraction of rays that hit the part, times the area covered by the grid, gives the cross-sectional area, and the average length that the hitting rays travel inside the part gives the average depth; the product of the two estimates the volume. Ray tracing is also used to determine proximity (including collision) for complex moving objects, usually by sending “feeler” rays from the surfaces of objects to “see” what is nearby. Rays are also commonly used for mouse-based object selection, to determine which object is visible in a pixel, and for projectile-object collision in games. OptiX can be used for any of these applications.
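The volume-estimation idea can be sketched in a few lines of plain C++ (a toy, CPU-only illustration; the function name is hypothetical). A grid of parallel rays is cast along one axis at a unit sphere; the area of one grid cell times the summed chord lengths approximates the volume, which should approach 4π/3 ≈ 4.19:

```cpp
#include <cmath>
#include <cassert>

// Volume of a unit sphere estimated by casting a regular n-by-n grid of
// parallel rays along +z over the square [-1,1]^2, as in the CAD example:
// volume ~= (area of one grid cell) * (sum of chord lengths of hitting rays).
double estimateSphereVolume(int n) {
    const double cell = 2.0 / n;                  // grid cell edge length
    double sumChord = 0.0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            double x = -1.0 + (i + 0.5) * cell;   // ray through cell center
            double y = -1.0 + (j + 0.5) * cell;
            double rho2 = x*x + y*y;
            if (rho2 < 1.0)                       // this ray hits the sphere
                sumChord += 2.0 * std::sqrt(1.0 - rho2);  // chord length
        }
    }
    return sumChord * cell * cell;                // depth integrated over area
}
```

With a 400-by-400 grid the estimate lands within a few hundredths of the exact value 4π/3.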

The common feature of ray tracing algorithms is that they compute the intersection points of 3D rays (an origin and a propagation direction) with a collection of 3D surfaces (the “model” or “scene”). In rendering applications, the optical properties of the point where a ray intersects the model determine what happens to it (e.g., it might be reflected, absorbed, or refracted). Other applications may need only the location of the intersection, or simply whether any intersection occurs at all. This variety of needs makes it desirable for OptiX to support a range of ray-scene queries and user-defined behavior when rays intersect the scene.

One of ray tracing's nice features is that it is easy to support any geometric object that can be intersected with a 3D line. For example, it is straightforward to support spheres natively with no tessellation. Another nice feature is that ray tracing's execution is normally “sub-linear” in the number of objects: doubling the number of objects in the scene should less than double the running time. This is accomplished by organizing the objects into an acceleration structure that can quickly reject whole groups of primitives as non-candidates for intersection with any given ray. For static parts of the scene, this structure can be reused for the life of the application; for dynamic parts, OptiX supports rebuilding it when needed. The structure queries only the bounding box of the geometric objects it contains, so new types of primitives can be added and the acceleration structures will continue to work without modification, as long as the new primitives can provide a bounding box.
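The bounding-box query such a structure relies on can be illustrated with the standard “slab” test in plain C++ (illustrative code, not the OptiX traversal itself): if a ray misses a group's box, every primitive inside the box can be rejected at once.

```cpp
#include <algorithm>
#include <cassert>

// Slab test: intersect the ray o + t*d (t >= 0) with an axis-aligned box
// [lo, hi] by clipping the valid t-interval against each pair of planes.
// Any primitive that can report such a box can be culled this way.
bool hitsBox(const float o[3], const float d[3],
             const float lo[3], const float hi[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / d[axis];               // +/-inf for axis-parallel rays
        float t0 = (lo[axis] - o[axis]) * inv;
        float t1 = (hi[axis] - o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);           // order the slab entry/exit
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;            // slabs do not overlap: miss
    }
    return true;   // the group behind this box may contain a hit
}
```

A traversal visits the primitives behind a box only when this test passes, which is what makes running time grow sub-linearly with scene size in practice.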

For graphics applications, ray tracing has advantages over rasterization. One of these is that general camera models are easy to support; the user can associate points on the screen with any direction they want, and there is no requirement that rays originate at the same point. Another advantage is that important optical effects such as reflection and refraction can be supported with only a few lines of code. Hard shadows are easy to produce with none of the artifacts typically associated with shadow maps, and soft shadows are not much harder. Furthermore, ray tracing can be added to more traditional graphics programs as a pass that produces a texture, letting the developer leverage the best of both worlds. For example, just the specular reflections could be computed by using points in the depth buffer as ray origins. There are a number of such “hybrid algorithms” that use both z-buffer and ray tracing techniques.
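The “few lines of code” for a mirror reflection can be shown directly (plain C++ sketch; in an OptiX closest-hit program one would trace a new ray in the resulting direction): given an incident direction d and a unit surface normal n, the reflected direction is r = d − 2(d·n)n.

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

// Mirror reflection of incident direction d about unit normal n:
// r = d - 2*(d . n)*n
Vec3 reflect(Vec3 d, Vec3 n) {
    float k = 2.0f * (d.x*n.x + d.y*n.y + d.z*n.z);
    return {d.x - k*n.x, d.y - k*n.y, d.z - k*n.z};
}
```

For instance, a ray arriving along (1, −1, 0) at a surface with normal (0, 1, 0) reflects to (1, 1, 0).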


NVIDIA® GameWorks™ Documentation Rev. 1.0.180103 ©2014-2018. NVIDIA Corporation. All Rights Reserved.