Release Notes
Updates in 2025.3.1
Fixed an issue where racecheck on Windows could hang at the end of the program in some cases.
Updates in 2025.3
Removed support for Maxwell, Pascal and Volta GPUs.
Added support for the PTX instructions st.async and red.async with .release semantics on Blackwell.
Added additional types of deadlock detection to the --racecheck-deadlock-timeout option.
Fixed potential false positives in racecheck related to warpsync on Blackwell.
Fixed an issue where some events were not reported when the first lane of a warp was not active when using the --racecheck-trace-sync option.
Fixed potential false positives or negatives in racecheck when using mbarrier with different addresses within a warp.
Fixed potential false negatives in synccheck when using a barrier from different function calls across warps.
Fixed an issue on Blackwell where --num-cuda-barriers would require a larger number than expected to function properly.
Updates in 2025.2.1
Fixed an issue where using cp.async.mbarrier.arrive synchronization for cp.async would lead to corruption, crashes, or invalid results when using racecheck.
Fixed potential false positives in memcheck when using a device-side malloc heap size larger than 4 GB.
Updates in 2025.2
Added public API and memcheck support for all previously unimplemented directions of the PTX cp.async.bulk and cp.reduce.async.bulk instructions.
Added racecheck deadlock detection with the --racecheck-deadlock-timeout option. See the racecheck deadlock detection documentation, and the sketch below.
Added the --racecheck-trace-sync option to trace synchronization information in racecheck. See the racecheck synchronization tracing documentation.
Added extra reporting for tensor memory leak detection.
Added support for new vector and size configurations for ld and st on Blackwell.
Added the SANITIZER_INSTRUCTION_CUDA_BARRIER_ATTEMPT callback to the patching API for all attempts at waiting on a PTX mbarrier.
Fixed an issue with mbarrier arrive on a transactional barrier in racecheck and synccheck on Hopper and newer.
Fixed a crash in memcheck when using cudaMallocAsync and cuMemMap in the same program with peer-GPU access enabled.
Fixed issues in racecheck and synccheck when using PTX mbarrier with different addresses in the same warp.
Fixed a potential crash with the error suppression option when using reference files without a host backtrace.
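As a hypothetical illustration of what the deadlock detection targets, the following sketch (the kernel and configuration are illustrative, not taken from the release notes) initializes a cuda::barrier to expect every thread in the block, but only even-numbered threads ever arrive, so the waiting threads spin forever. Running the application as compute-sanitizer --tool racecheck --racecheck-deadlock-timeout <timeout> should report the hang as a deadlock instead of stalling indefinitely.

```cpp
#include <cuda/barrier>

// Illustrative deadlock: the barrier expects blockDim.x arrivals, but odd
// threads never arrive, so the even threads wait forever.
__global__ void deadlock_kernel()
{
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;
    if (threadIdx.x == 0)
        init(&bar, blockDim.x);   // expect every thread in the block
    __syncthreads();

    if (threadIdx.x % 2 == 0)     // odd threads skip the barrier entirely,
        bar.arrive_and_wait();    // so even threads wait for arrivals that never come
}
```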
Updates in 2025.1
Added support for additional Blackwell GPUs.
Modified the way backtraces are displayed: by default, file paths are stripped and frames below main as well as extra CUDA runtime frames are omitted. Added the --strip-path and --backtrace-short options to control this behavior.
Added support for native Python host backtraces.
Added guardrails support for the PTX tcgen05 instructions. See the Tensor Core MMA guardrails documentation for more information.
Added reports when encountering user-issued breakpoints.
Added support for OptiX 9.0.
Added Sanitizer API patching support for the PTX cp.async.mbarrier.arrive asynchronous join operation.
Added Sanitizer API patching support for post-release syncwarp.
Fixed a potential false positive in racecheck when using cp.async.mbarrier.arrive to synchronize memcpy async operations and using the same thread to immediately access the copy destination (see the sketch below).
Fixed a potential false positive in racecheck when using syncwarp.
Fixed excessive synccheck and racecheck device and host memory consumption in some circumstances with programs initializing PTX mbarrier in a loop.
Fixed potentially wrong block coordinates being displayed in racecheck reports when using multiple clusters.
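For context, the cp.async.mbarrier.arrive false positive described above typically arose from a pattern like this sketch (names and sizes are illustrative): a thread completes an asynchronous copy through a barrier and then immediately reads the destination it just copied.

```cpp
#include <cuda/barrier>

// Illustrative kernel; assumes a launch with blockDim.x <= 128 threads.
__global__ void copy_then_read(const int *src)
{
    __shared__ int dst[128];
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;
    if (threadIdx.x == 0)
        init(&bar, blockDim.x);
    __syncthreads();

    // Lowered to cp.async plus cp.async.mbarrier.arrive on supported GPUs.
    cuda::memcpy_async(&dst[threadIdx.x], &src[threadIdx.x], sizeof(int), bar);
    bar.arrive_and_wait();        // the copy is complete past this point

    int v = dst[threadIdx.x];     // immediate read by the same thread:
    (void)v;                      // previously flagged incorrectly by racecheck
}
```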
Updates in 2024.4
Added support for the Blackwell architecture.
Added support for Tensor core barriers.
Added public API and memcheck support for the st.bulk PTX instruction.
Added racecheck support for the cp.async.bulk PTX instruction from global to shared memory.
Improved performance of the racecheck indirect barrier dependency option and enabled it by default.
Fixed issues where device callbacks would in some cases be called with the wrong threads for a limited set of instructions.
Fixed racecheck and synccheck issues with cuda::barrier tracking overflow.
Fixed racecheck and synccheck issues with cluster barriers.
Fixed memcheck and public API issues when importing memory from other processes while CUDA_VISIBLE_DEVICES is set.
Fixed potential false positives with initcheck when doing 3D memcpy/memset operations.
Updates in 2024.3
Fixed potential invalid results on Ampere with synccheck and racecheck relating to cuda::barrier. cuda::barrier wait events are now called after the wait has completed, in the same fashion as on Hopper. This may require target applications to be recompiled with a CUDA 12.4 or more recent compiler.
Fixed multiple issues with coredump generation that resulted in failures or hangs.
Fixed potential false positives in memcheck with the PTX instruction st.async on the barrier address.
Fixed a potential hang when using per-thread default streams or green contexts and synchronous cudaMemcpy from device to host.
Fixed potential issues with host backtraces on Windows.
Fixed potential false positives for misaligned accesses in memcheck with the PTX instruction cp.async.bulk.
Updates in 2024.2
Added public API and memcheck support for PTX cp.async.bulk operations from global to shared memory on Hopper.
Added support for OptiX 8.1.
Dropped support for the Linux ppc64le platform.
Fixed an issue where, in specific cases, memcheck would report misaligned or out-of-bounds false positives for accesses of a larger size than the actual operation.
Fixed potential issues when using Heterogeneous Memory Management.
Fixed potential crashes or false results in racecheck when using clusters on Hopper.
Added a warning when clearing unsupported coredump-related environment variables.
Clarified in documentation when leak errors are reported.
Updates in 2024.1.1
Fixed an issue where errors would not be reported in libraries called from .NET applications.
Updates in 2024.1
Enabled shared addressing support by default: removed the --hmm-support option and replaced it with the NV_COMPUTE_SANITIZER_SHARED_ADDRESSING_SUPPORT environment variable. See the environment variables documentation for more information.
Changed the default value of the --target-processes option to all.
Added detection of cuda::barrier initialization race conditions in racecheck on Hopper (see the sketch below).
Added support for initcheck API error suppression.
Added memcheck support for cuMemPoolImportPointer.
Added support for CUDA green contexts.
Added support for CUDA graph device-side node update.
Fixed potential false positives with synccheck when using different cuda::barrier objects in a single warp.
Fixed potential false negatives with memcheck when using floating-point atomics on Hopper.
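The cuda::barrier initialization race mentioned above typically looks like this sketch (illustrative only): without synchronization between initialization and first use, other threads can arrive at the barrier while thread 0 is still initializing it.

```cpp
#include <cuda/barrier>

__global__ void barrier_init_race()
{
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;
    if (threadIdx.x == 0)
        init(&bar, blockDim.x);
    // Missing __syncthreads() here: other threads may arrive at the barrier
    // before thread 0 has finished initializing it, the race racecheck detects.
    bar.arrive_and_wait();
}
```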
Updates in 2023.3.1
Fixed error output for WGMMA instructions.
Updates in 2023.3
Added support for Heterogeneous Memory Management (HMM) and Address Translation Service (ATS). The feature is opt-in using the --hmm-support command-line option.
Added racecheck support for device graph launches.
Added the ability to suppress known issues using the --suppressions command-line option. See the suppressions documentation for more information.
Added support for external memory objects. This effectively adds support for Vulkan and D3D12 interop.
Added device backtrace support for WSL.
Improved PC offset output. It is now printed next to the function name to clarify that it is an assembly offset within that function.
Several command-line options no longer require explicitly specifying “yes” or “no” when they are used.
Renamed the options --kernel-regex and --kernel-regex-exclude to --kernel-name and --kernel-name-exclude.
Added the regex filtering key to --kernel-name and --kernel-name-exclude.
Added the new command-line option --racecheck-indirect-barrier-dependency to enable indirect cuda::barrier tracking in racecheck.
Added the new command-line option --coredump-behavior to control the target application behavior after generating a GPU coredump.
Added the new command-line option --detect-missing-module-unload to detect missing calls to the cuModuleUnload driver API.
Added the new command-line option --preload-library to make the target application load a shared library before the injection libraries.
Fixed an initcheck false positive when memory loads are widened and include padding bytes.
Fixed a potential hang in the racecheck and synccheck tools when the bar.arrive instruction is used.
Added patching API support for the setsmemsize instruction.
Added patching API support for __syncthreads() after the barrier is released.
Updates in 2023.2.2
Updated version print output to include build and config information.
Fixed a potential hang on QNX when capturing the host backtrace.
Updates in 2023.2.1
Fixed potential racecheck hang on H100 when using thread block clusters.
Compute Sanitizer 2023.2.1 is incorrectly versioned as 2023.2.0 and needs to be differentiated by its build ID 33053471.
Updates in 2023.2
Added support for CUDA device graph launches.
Added racecheck support for cluster entry and exit race detection for remote shared memory accesses. See the cluster entry and exit race detection documentation for more information.
Added support for CUDA lazy loading when device heap checking is enabled. Requires CUDA driver version 535 or newer.
Added support for tracking child processes launched with system() or posix_spawn(p) when using --target-processes all (see the sketch below).
Added support for the st.async and red.async instructions.
Improved support for partial warp synchronization using cooperative groups in racecheck.
Improved support for cuda::barrier::wait() on SM 9.x.
Added coredump support for the Pascal architecture and multi-context applications.
Added support for OptiX 8.0.
Improved performance when using initcheck in OptiX applications in some cases. Using initcheck to track OptiX applications now requires the --check-optix yes option.
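A minimal sketch of the child-process tracking above (file names are hypothetical): a launcher that starts a CUDA application through system() is now followed when run as compute-sanitizer --target-processes all ./launcher.

```cpp
#include <cstdlib>

// launcher.cpp (illustrative): spawns a CUDA application in a child process.
// With --target-processes all, children started via system() or posix_spawn(p)
// are now tracked as well.
int main()
{
    return std::system("./cuda_app");   // hypothetical CUDA binary
}
```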
Updates in 2023.1.1
Fixed a bug where memcheck would report out-of-bounds accesses when loading user parameter values using a ternary operator.
Fixed potential crash when using leakcheck with applications using CUBLAS.
Fixed potential false positives when using synccheck or racecheck with applications using CUDA barriers.
Updates in 2023.1
Added racecheck support for distributed shared memory.
Extended stream-ordered race detection to cudaMemcpy APIs.
Added memcheck, synccheck, and patching API support for warpgroup operations.
Added the --coredump-name CLI option to set the coredump file name.
Added support for Unicode file paths.
Added support for OptiX 7.7.
Updates in 2022.4.1
Fixed a bug where synccheck would incorrectly report illegal instructions for code using cluster.sync() and compiled with --device-debug.
Fixed incorrect address reports in SanitizerCallbackMemcpyAsync in some specific cases, leading to potential invalid results in memcheck and racecheck.
Fixed potential hangs and invalid results with racecheck on OptiX applications.
Fixed potential crash or invalid results when using CUDA Lazy Module Loading with memcheck or initcheck if --check-device-heap is enabled. Lazy Module Loading will be automatically disabled in these cases.
Updates in 2022.4
Added support for __nv_aligned_device_malloc.
Added support for the ldmatrix and stmatrix instructions.
Added support for cache control operations when using the --check-cache-control command-line option (see the sketch below).
Added the new command-line option --unused-memory-threshold to control the threshold for unused memory reports.
Improved support for CUDA pipeline memcpy-async related hazards in racecheck.
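As a hypothetical illustration of the cache control operations covered by --check-cache-control, here is a kernel using the CUDA cache-operator load/store intrinsics (the kernel itself is illustrative):

```cpp
__global__ void cache_control_ops(int *p)
{
    int v = __ldcv(p);   // ld.global.cv: load that bypasses the caches
    __stwt(p, v + 1);    // st.global.wt: write-through store
}
```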
Updates in 2022.3
Added support for the NVIDIA GH100/SM 9.x GPU architecture.
Added support for the NVIDIA AD10x/SM 8.9 GPU architecture.
Added support for lazy kernel loading.
Added memcheck support for distributed shared memory.
Added the new options --num-callers-device and --num-callers-host to control the number of callers to print in stack traces.
Added support for OptiX 7.6 applications.
Fixed a bug on Linux ppc64le where the host stack trace was incomplete.
Updates in 2022.2.1
Fixed incorrect device backtrace for applications compiled with -lineinfo.
Updates in 2022.2
Added memcheck support for use-before-alloc and use-after-free race detection. See the stream-ordered race detection documentation for more information, and the sketch below.
Added leakcheck support for asynchronous allocations, OptiX resources and CUDA memmap (on Linux only for the latter).
Added an option to ignore CUDA_ERROR_NOT_FOUND error codes returned by the cuGetProcAddress API.
Added new sanitizer API functions to allocate and free page-locked host memory.
Added sanitizer API callbacks for the event management API.
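A minimal sketch of the kind of use-after-free race the stream-ordered detection targets (stream names and sizes are illustrative): the allocation is freed on one stream while a kernel may still be using it on another, with no ordering between the two.

```cpp
#include <cuda_runtime.h>

__global__ void use_buffer(int *p) { p[threadIdx.x] = 0; }

int main()
{
    cudaStream_t a, b;
    cudaStreamCreate(&a);
    cudaStreamCreate(&b);

    int *p = nullptr;
    cudaMallocAsync(reinterpret_cast<void **>(&p), 256 * sizeof(int), a);
    use_buffer<<<1, 256, 0, a>>>(p);
    cudaFreeAsync(p, b);   // no ordering with stream a: use-after-free race
    cudaDeviceSynchronize();
    return 0;
}
```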
Updates in 2022.1.1
Fixed initcheck issue where the tool would incorrectly abort a CUDA kernel launch after reporting an uninitialized access on Windows with hardware scheduling enabled.
Updates in 2022.1
Added support for generating coredumps.
Improved support for stack overflow detection.
Added the new option --target-processes-filter to filter the processes being tracked by name.
Added initcheck support for asynchronous allocations. Requires CUDA driver version 510 or newer.
Added initcheck support for accesses on peer devices. Requires CUDA driver version 510 or newer.
Added support for OptiX 7 applications.
Added support for tracking the child processes of 32-bit processes in multi-process applications on Linux and Windows x86_64.
Updates in 2021.3.1
Fixed intermittent issue on vGPU where synccheck would incorrectly detect divergent threads.
Fixed potential hang when tracking several graph launches.
Updates in 2021.3
Improved Linux host backtrace.
Removed the requirement to call cudaDeviceReset() for accurate reporting of memory leaks and unused memory.
Fixed a potential synccheck hang when calling __syncthreads in divergent code paths on Volta GPUs or newer.
Added printing of nearest allocation information for memcheck precise errors in global memory.
Added a warning when calling device-side malloc with an empty size (see the sketch below).
Added a separate sanitizer API device callback for cuda::memcpy_async.
Added the new command-line option --num-cuda-barriers to override the expected number of cuda::barrier objects used by the target application.
Added the new command-line options --print-session-details to print session information and --save-session-details to save it to the output file.
Added support for WSL2.
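A trivial sketch of the empty-size device allocation that now produces a warning (illustrative kernel):

```cpp
__global__ void empty_alloc()
{
    void *p = malloc(0);   // device-side allocation with an empty size
    free(p);
}
```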
Updates in 2021.2.3
Enabled SLS hardening and branch protection for L4T builds.
Updates in 2021.2.2
Enabled stack canaries with random canary values for L4T builds.
Updates in 2021.2.1
Added device backtrace for malloc/free errors in CUDA kernels.
Improved racecheck host memory footprint.
Updates in 2021.2
Added racecheck and synccheck support for cuda::barrier on Ampere GPUs or newer.
Added racecheck support for __syncwarp with a partial mask (see the sketch below).
Added the --launch-count and --launch-skip filtering options. See the Command Line Options documentation for more information.
The --filter and --exclude options have been renamed to --kernel-regex and --kernel-regex-exclude, respectively.
Added support for QNX and Linux aarch64 platforms.
Added support for CUDA graphs memory nodes.
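For reference, a partial-mask __syncwarp pattern of the kind racecheck can now reason about (the data exchange is illustrative): only the lanes named in the mask participate in the synchronization.

```cpp
__global__ void half_warp_exchange(int *data)
{
    const unsigned mask = 0x0000FFFF;    // lanes 0-15 of each warp
    if (threadIdx.x % 32 < 16) {
        data[threadIdx.x] = threadIdx.x;
        __syncwarp(mask);                // synchronizes only the masked lanes
        int v = data[threadIdx.x ^ 1];   // safe exchange within the half-warp
        (void)v;
    }
}
```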
Updates in 2021.1.1
Fixed an issue where incorrect line numbers could be shown in error reports.
Updates in 2021.1
Added support for allocation padding via the --padding option (see the sketch below).
Added experimental support for the NVTX memory API using the --nvtx yes option. Please refer to the NVTX API for Compute Sanitizer Reference Manual for more information.
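As a sketch of why padding helps (the kernel and the padding size are illustrative): a one-past-the-end write can silently land in an adjacent allocation, but running with, for example, compute-sanitizer --padding 32 ./app places guard bytes after each allocation so memcheck reports the overflow.

```cpp
__global__ void off_by_one(int *a, int n)
{
    if (threadIdx.x == 0)
        a[n] = 0;   // one element past the end; hits the padding when enabled
}
```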
Updates in 2020.3.1
Fixed issue when launching a CUDA graph multiple times.
Fixed false positives when using cooperative groups synchronization primitives with initcheck and synccheck.
Updates in 2020.3
Added support for CUDA memory pools and CUDA API reduced serialization.
Added host backtrace for unused memory reports.
Updates in 2020.2.1
Fixed crash when loading cubins of size larger than 2 GiB.
Fixed error detection on systems with multiple GPUs.
Fixed an issue when using the CUDA Virtual Memory Management API cuMemSetAccess to remove access to a subset of devices on a system with multiple GPUs.
Added a sanitizer API to translate between sanitizer and CUDA stream handles.
Updates in 2020.2
Added support for CUDA graphs and CUDA memmap APIs.
The memory access callback of the sanitizer API has been split into three distinct callbacks corresponding to global, shared and local memory accesses.
Updates in 2020.1.2
Added sanitizer stream API. This fixes tool crashes when per-thread streams are being used.
Updates in 2020.1.1
Added support for Windows Hardware-accelerated GPU scheduling.
Added support for tracking child processes spawned by the application launched under the tool via the --target-processes CLI option.
Updates in 2020.1
Initial release of the Compute Sanitizer (with CUDA 11.0)
Updates to the Sanitizer API:
Added support for per-thread streams
Added APIs to retrieve the PC and size of a CUDA function or patch
Added callback for cudaStreamAttachMemAsync
Added direction to memcpy callback data
Added stream to memcpy and memset callbacks data
Added launch callback after syscall setup
Added visibility field to allocation callback data
Added PC argument to block entry callback
Added incoming value to memory access callbacks
Added threadCount to barrier callbacks
Added cooperative group flags for barrier and function callbacks
Updates in 2019.1
Initial release of the Compute Sanitizer API (with CUDA 10.1)
Known Limitations
Applications run much slower under the Compute Sanitizer tools. This may cause some kernel launches to fail with a launch timeout error when running with the Compute Sanitizer enabled.
Compute Sanitizer does not support checking host-side memory access violations and leaks for accesses made outside of CUDA API calls (e.g. accessing a buffer from the CPU).
Compute Sanitizer tools do not support coredumps on WSL2.
The memcheck tool does not support CUDA API error checking for API calls made on the GPU using dynamic parallelism.
The racecheck, synccheck and initcheck tools do not support CUDA dynamic parallelism.
CUDA dynamic parallelism is not supported when Windows Hardware-accelerated GPU scheduling is enabled.
Compute Sanitizer tools cannot interoperate with other CUDA developer tools. This includes CUDA coredumps, which are automatically disabled by the Compute Sanitizer; they can be enabled instead by using the --generate-coredump option.
The initcheck tool does not support IPC allocations. Using it will result in false positives.
Compute Sanitizer tools are not supported when SLI is enabled.
The racecheck tool is not supported under Confidential Computing.
The memcheck tool does not detect out-of-bounds accesses into the reserved shared memory region. For more information on reserved shared memory, refer to the Special Registers Reserved for Shared Memory section of the PTX documentation.
Some tensormap and tcgen05 instructions accessing global memory are not currently supported, which may result in initcheck false positives.
Known Issues
The synccheck tool may incorrectly emit divergence-in-block errors in applications compiled with the CUDA 13.0 toolkit using cooperative group synchronizations. The issue can be avoided by using the compiler provided in CUDA toolkit 13.0 Update 1 or more recent.
The racecheck tool may print incorrect data for “Current value” when reporting a hazard on a shared memory location where the last access was an atomic operation. This can also impact the severity of this hazard.
On QNX, when using the --target-processes all option, analyzing shell scripts may hang after the script has completed. End the application using Ctrl-C on the command line in that case.
The initcheck tool might report false positives for device-to-host cudaMemcpy operations on padded structs that were initialized by a CUDA kernel. The #pragma pack directive can be used to disable the padding as a workaround (see the sketch below).
When a hardware exception occurs during a kernel launch that was skipped due to the usage of the --kernel-name, --kernel-name-exclude, --launch-count, or --launch-skip options, the memcheck tool will not be able to report additional details as an imprecise error.
The leakcheck feature is disabled under Confidential Computing.
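A sketch of the padded-struct workaround mentioned above (the struct is illustrative): the compiler inserts padding bytes after c in the first layout, and #pragma pack removes them so initcheck no longer sees uninitialized padding during the device-to-host copy.

```cpp
struct Padded {     // 3 padding bytes are inserted after 'c'
    char c;
    int  i;
};

#pragma pack(push, 1)
struct Packed {     // no padding: every byte is written by the kernel
    char c;
    int  i;
};
#pragma pack(pop)
```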
Support
Information on supported platforms and GPUs.
Platform Support
| Platform | Support |
|---|---|
| Windows | Yes |
| Linux (x86_64) | Yes |
| Linux (ppc64le) | No |
| Linux (aarch64sbsa) | Yes |
| Linux (aarch64) | Yes |
| QNX | Yes |
| MacOSX | No |
GPU Support
The compute-sanitizer tools are supported on all CUDA-capable GPUs with SM versions 7.5 and above.
Notices
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.
Trademarks
NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.