Release Notes

NVIDIA Nsight Deep Learning Designer release notes and system requirements.

New Features and Major Changes

1.1. Updates in 2022.2

NVIDIA Nsight Deep Learning Designer changes in version 2022.2:

  • We added support to launch the PyTorch exporter from a virtual environment (Conda or virtualenv).
  • We improved the overall performance of the Channel Inspector by separating the visualization of per-layer weights from per-layer features and changing how we visualize NxCx1x1 weights.
  • We added an experimental feature that allows users to directly import existing PyTorch models into NVIDIA Nsight Deep Learning Designer without starting from scratch.
  • We switched to using cubic curves to represent layer links to reduce path-finding overhead.
  • We added support to visualize inference results of classification networks in the Analysis Mode.
  • We added support for custom padding (in addition to the current same and valid options) to the relevant layers.
  • We fixed numerous bugs.

We have also decided to remove some uncommonly used operators as we modernize the inference library and unify our model exporters. The following layers are now deprecated and will be removed in a future product release:

  • Local response normalization (LRN) layer: Only the within region mode is being removed. Normalization across channels is still fully supported.
  • Mono-four-stack layer: Replace with a custom layer.
  • Mono-to-RGB layer: Replace with a custom layer.
  • Network layer: Import the subnetwork as a template instead.
  • Output layer: Use of output layers for tensor slicing (the width, height, channels, and offset parameters) is deprecated. Use an explicit slice layer if these operations are required.
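The output-layer slicing parameters map directly onto an explicit tensor slice. The sketch below shows the equivalent operation on an NCHW tensor in NumPy; the parameter names are illustrative, not the product's actual schema.

```python
import numpy as np

# Hypothetical NCHW activation tensor: batch=1, 8 channels, 16x16 spatial.
tensor = np.arange(1 * 8 * 16 * 16, dtype=np.float32).reshape(1, 8, 16, 16)

def slice_nchw(t, channels, height, width,
               channel_offset=0, y_offset=0, x_offset=0):
    """Explicit slice equivalent to the deprecated output-layer
    channels/width/height/offset parameters (names are illustrative)."""
    return t[:,
             channel_offset:channel_offset + channels,
             y_offset:y_offset + height,
             x_offset:x_offset + width]

sliced = slice_nchw(tensor, channels=3, height=8, width=8, channel_offset=2)
print(sliced.shape)  # (1, 3, 8, 8)
```

An explicit slice layer makes the cropping visible in the network graph instead of hiding it inside the output node's parameters.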

The following activations are now deprecated:

  • Leaky sigmoid activation: Replace with a custom layer.
  • Leaky tanh activation: Replace with a custom layer.
  • ReLU activations with alpha values other than zero are being removed. Normal ReLU is still fully supported. The element-wise max layer can replace these activations.
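The element-wise max replacement works because a ReLU with nonzero alpha (a leaky ReLU) is exactly the element-wise maximum of the input and its alpha-scaled copy, for 0 <= alpha <= 1. A minimal NumPy sketch of the identity:

```python
import numpy as np

def leaky_relu_via_max(x, alpha=0.1):
    # For 0 <= alpha <= 1: max(x, alpha * x) = x if x >= 0, else alpha * x,
    # which is the ReLU-with-alpha (leaky ReLU) activation.
    return np.maximum(x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu_via_max(x))  # [-0.2  -0.05  0.    1.5 ]
```

In a model, this corresponds to feeding the layer's output both directly and through an alpha-scaling step into an element-wise max layer.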

NvNeural changes in version 2022.2:

  • Changed the signature of nvneural::XmlNetworkBuilder::createLayerObject to receive the original serialized type used to select the layer object being instantiated. Custom classes deriving from XmlNetworkBuilder must be updated if they override this function.

1.2. Updates in 2022.1

NVIDIA Nsight Deep Learning Designer changes in version 2022.1:

  • Added support for saving all tensors in Analysis Mode.
  • Added support for using nested templates to construct hierarchical network graphs.
  • Significantly improved the performance of the type-checking process in the Editor.
  • Fixed a bug that prevented PyTorch exports on Linux from succeeding.
  • Removed clamping behavior from the Affine layer. It no longer restricts the values of its scale and offset parameters. The options_on parameter has been deprecated; users wishing to hide interactive controls for this layer during analysis should set the new include_ui parameter to false.
  • Fixed a bug that blocked FP16 inference when fusing 7x7 convolutions with batch normalizations.

NvNeural changes in version 2022.1:

  • Added a new analysis layer: Signal Injector.
  • Added a new Input (Constant) layer which supports direct embedding of scalar constants.
  • Optimized the performance of the BatchNorm layer.
  • Optimized the performance of the Upscale layer.
  • Added support for downscaling and fixed-size scaling to the Upscale layer.
  • The NvRTC wrapper in nvneural::ICudaRuntimeCompiler has been replaced with a stub when type-checking networks from the GUI. Plugins that rely on the ability to execute generated kernel code during initialization or nvneural::ILayer::reshape should call NvRTC directly, but for performance reasons we do not recommend this approach.
  • The forward() function in the exported PyTorch class now takes keyword-only arguments. Users must explicitly name the input parameters when calling the model.
  • The INetwork::inferenceSubgraph method now applies queued reshape operations. Queued reshapes are not cleared upon failure and will continue to block inference and inferenceSubgraph calls until they succeed.
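The keyword-only forward() change can be sketched in plain Python. This is a simplified stand-in for an exported class (the real export subclasses torch.nn.Module, and the input name here is hypothetical); the bare `*` in the signature is what forces callers to name their inputs.

```python
class ExportedNetwork:
    """Simplified stand-in for a class produced by the PyTorch exporter.
    The real exported class derives from torch.nn.Module; the input
    name 'input_image' is a hypothetical example."""

    def forward(self, *, input_image):  # '*' makes input_image keyword-only
        return input_image  # a real export would run the network's layers here

net = ExportedNetwork()
net.forward(input_image=42)   # OK: input named explicitly
try:
    net.forward(42)           # positional calls now raise TypeError
except TypeError:
    print("positional call rejected")
```

Keyword-only inputs make multi-input networks robust to reordering: call sites bind each tensor to an input by name rather than by position.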

1.3. Updates in 2021.2

NVIDIA Nsight Deep Learning Designer changes in version 2021.2:

  • The Channel Inspector can display summary and per-channel statistics about a layer's output tensor: mean, minimum/maximum, standard deviation, and sparsity (percentage of tensor elements close to zero).
  • Output tensor shapes are now visible during editing.
  • ConverenceNG can now save network outputs as .npy files.
  • Users can now expand or collapse the parameters list of a layer glyph in the editor view.
  • In the Channel Inspector, users can now toggle a checkbox to automatically scale and shift the displayed channels.
  • Users can now save a template as a file that can be imported into another model.
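The per-channel statistics listed above can be sketched for an NCHW tensor in NumPy. The zero-closeness threshold used for sparsity below is an assumption for illustration, not the product's exact definition.

```python
import numpy as np

def channel_stats(tensor, zero_eps=1e-6):
    """Per-channel statistics for an NCHW tensor, mirroring what the
    Channel Inspector reports: mean, min/max, standard deviation, and
    sparsity (fraction of elements close to zero). zero_eps is an
    assumed threshold for 'close to zero'."""
    flat = tensor.reshape(tensor.shape[0], tensor.shape[1], -1)  # N, C, H*W
    return {
        "mean": flat.mean(axis=2),
        "min": flat.min(axis=2),
        "max": flat.max(axis=2),
        "std": flat.std(axis=2),
        "sparsity": (np.abs(flat) < zero_eps).mean(axis=2),
    }

t = np.zeros((1, 2, 4, 4), dtype=np.float32)
t[0, 1, 0, 0] = 8.0                 # one nonzero element in channel 1
stats = channel_stats(t)
print(stats["sparsity"])            # channel 0 fully sparse; channel 1 is 15/16 sparse
```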

We have added more data to network profiling reports:

  • Per-layer device memory footprints
  • Whole-network device memory footprint
  • Percentage view for inference timings
  • Layers' distance from the nearest input, for sorting by network depth
  • Template-level inference timings

NvNeural changes in version 2021.2:

  • Plugin initialization has been refactored to reduce its reliance on translation-unit-scoped static initialization. The ExportPlugin framework now expects user plugin code to provide an implementation of the function void nvneural::plugin::InitializePluginTypes(). This function should call static ClassRegistry methods to make its export types visible to the client application.
  • The SkipConcatenation optimization has been rewritten. Custom concatenation layers should implement nvneural::IConcatenationLayer2 to participate in this optimization.
  • We added two new analysis layers: Saliency Generator and Saliency Mix. The Saliency Generator layer converts its input tensor into a single-channel tensor with the same height and width. The Saliency Mix layer overlays saliency information (the output of a Saliency Generator) onto another input tensor.
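The overlay idea behind Saliency Mix can be sketched in NumPy. The alpha-blend formula below is an assumption for illustration; the actual layer's compositing may differ.

```python
import numpy as np

def mix_saliency(image, saliency, strength=0.5):
    """Rough sketch of the Saliency Mix idea: alpha-blend a single-channel
    H x W saliency map into every channel of an NCHW tensor. The blend
    formula is an assumption, not the layer's documented behavior."""
    # Broadcast the (H, W) saliency map across batch and channel dims.
    return (1.0 - strength) * image + strength * saliency[np.newaxis, np.newaxis, :, :]

image = np.ones((1, 3, 2, 2), dtype=np.float32)
saliency = np.zeros((2, 2), dtype=np.float32)
print(mix_saliency(image, saliency).mean())  # 0.5
```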

Known Issues

  • NVIDIA Nsight Deep Learning Designer's profiling feature is affected by the administrative restriction on access to NVIDIA GPU performance counters. If profiling fails with a message such as "ERR: xxx: Error 19 returned from Perfworks", refer to this page for detailed instructions on lifting the administrative restriction.
  • With JetPack 5.0.2 running on AGX Orin, some device nodes are not assigned the correct permissions, which causes GPU profiling to fail. As a workaround, reboot twice after the system is flashed, and also follow the instructions linked above to enable profiling in NVIDIA Nsight Deep Learning Designer.
  • On high-resolution monitors with DPI scaling set above 100%, you may see rendering corruption inside the editor.
  • The DirectML exporter does not currently honor the apply_bias layer parameter during code generation.
  • Command lines containing spaces are not quoted correctly for copy/paste in dialog boxes.
  • Custom layers using PrimaryInfinite inputs should provide no more than one Primary input.
  • The PyTorch exporter does not support connections to secondary layer inputs. Layer weights are defined exclusively using the weights system.

Platform Support

NVIDIA Nsight Deep Learning Designer runs on the following systems:

Windows 10: 20H1 or newer

Linux: Ubuntu 18.04 LTS or newer

GPU Support

NVIDIA Nsight Deep Learning Designer requires an NVIDIA Volta or newer GPU to run.

Recommended Display Driver

You must have a recent NVIDIA display driver installed on your system to run NVIDIA Nsight Deep Learning Designer. The following display drivers are recommended:

Windows: Release 511.23 or newer

Linux: Release 510.39.01 or newer




Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.


NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.