CUPTI

Overview

The CUDA Profiling Tools Interface (CUPTI) enables the creation of profiling and tracing tools that target CUDA applications. CUPTI provides four APIs: the Activity API, the Callback API, the Event API, and the Metric API. Using these APIs, you can develop profiling tools that give insight into the CPU and GPU behavior of CUDA applications. CUPTI is delivered as a dynamic library on all platforms supported by CUDA.

What's New

CUPTI contains the following changes as part of the CUDA Toolkit 10.0 release.
  • Added tracing support for devices with compute capability 7.5.
  • A new set of metric APIs is added for devices with compute capability 7.0 and higher. These APIs provide low and deterministic profiling overhead on the target system. They are currently supported only on Linux x86 64-bit and Windows 64-bit platforms. Refer to the CUPTI web page for documentation and for details on downloading the package with support for these new APIs. Note that both the old and new metric APIs are supported for compute capability 7.0, to enable the transition of code to the new metric APIs; however, usage of the old and new metric APIs cannot be mixed.
  • CUPTI supports profiling of OpenMP applications. OpenMP profiling information is provided in the form of the new activity record CUpti_ActivityOpenMp. The new API cuptiOpenMpInitialize is used to initialize profiling for supported OpenMP runtimes.
  • The kernel activity record CUpti_ActivityKernel4 now provides the shared memory size set by the CUDA driver.
  • Added tracing support for CUDA kernels, memcpy, and memset nodes launched by a CUDA Graph.
  • Added support for resource callbacks for resources associated with the CUDA Graph. Refer to the enum CUpti_CallbackIdResource for the new callback IDs.

1. Usage

1.1. CUPTI Compatibility and Requirements

New versions of the CUDA driver are backwards compatible with older versions of CUPTI. For example, a developer using a profiling tool based on CUPTI 9.0 can update to a more recently released CUDA driver. However, new versions of CUPTI are not backwards compatible with older versions of the CUDA driver. For example, a developer using a profiling tool based on CUPTI 9.0 must have a version of the CUDA driver released with CUDA Toolkit 9.0 (or later) installed as well. CUPTI calls will fail with CUPTI_ERROR_NOT_INITIALIZED if the CUDA driver version is not compatible with the CUPTI version.

1.2. CUPTI Initialization

CUPTI initialization occurs lazily the first time you invoke any CUPTI function. For the Activity, Event, Metric, and Callback APIs there are no requirements on when this initialization must occur (i.e. you can invoke the first CUPTI function at any point). See the CUPTI Activity API section for more information on CUPTI initialization requirements for the activity API.

1.3. CUPTI Activity API

The CUPTI Activity API allows you to asynchronously collect a trace of an application's CPU and GPU CUDA activity. The following terminology is used by the activity API.

Activity Record
CPU and GPU activity is reported in C data structures called activity records. There is a different C structure type for each activity kind (e.g. CUpti_ActivityMemcpy). Records are generically referred to using the CUpti_Activity type. This type contains only a kind field that indicates the kind of the activity record. Using this kind, the object can be cast from the generic CUpti_Activity type to the specific type representing the activity. See the printActivity function in the activity_trace_async sample for an example.
Activity Buffer
An activity buffer is used to transfer one or more activity records from CUPTI to the client. CUPTI fills activity buffers with activity records as the corresponding activities occur on the CPU and GPU. The CUPTI client is responsible for providing empty activity buffers as necessary to ensure that no records are dropped.

An asynchronous buffering API is implemented by cuptiActivityRegisterCallbacks and cuptiActivityFlushAll.

It is not required that the activity API be initialized before CUDA initialization. All related activities occurring after initializing the activity API are collected. You can force initialization of the activity API by enabling one or more activity kinds using cuptiActivityEnable or cuptiActivityEnableContext, as shown in the initTrace function of the activity_trace_async sample. Some activity kinds cannot be directly enabled; see the API documentation for CUpti_ActivityKind for details. Functions cuptiActivityEnable and cuptiActivityEnableContext will return CUPTI_ERROR_NOT_COMPATIBLE if the requested activity kind cannot be enabled.

The activity buffer API uses callbacks to request and return buffers of activity records. To use the asynchronous buffering API you must first register two callbacks using cuptiActivityRegisterCallbacks. One of these callbacks will be invoked whenever CUPTI needs an empty activity buffer. The other callback is used to deliver a buffer containing one or more activity records to the client. To minimize profiling overhead the client should return as quickly as possible from these callbacks. Function cuptiActivityFlushAll can be used to force CUPTI to deliver any activity buffers that contain completed activity records. Functions cuptiActivityGetAttribute and cuptiActivitySetAttribute can be used to read and write attributes that control how the buffering API behaves. See the API documentation for more information.
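
As a concrete illustration, the following is a minimal sketch of this flow (error checking omitted; the names bufferRequested, bufferCompleted, initTrace, and finiTrace are placeholders, and a real tool should also align buffers as shown in the CUPTI samples):

#include <cupti.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_SIZE (32 * 1024)

/* Invoked by CUPTI whenever it needs an empty activity buffer. */
static void CUPTIAPI
bufferRequested(uint8_t **buffer, size_t *size, size_t *maxNumRecords)
{
  *size = BUF_SIZE;
  *buffer = (uint8_t *)malloc(BUF_SIZE);
  *maxNumRecords = 0;  /* 0 lets CUPTI fill the buffer with as many records as fit */
}

/* Invoked by CUPTI to deliver a buffer of completed activity records. */
static void CUPTIAPI
bufferCompleted(CUcontext ctx, uint32_t streamId,
                uint8_t *buffer, size_t size, size_t validSize)
{
  CUpti_Activity *record = NULL;
  while (cuptiActivityGetNextRecord(buffer, validSize, &record) == CUPTI_SUCCESS) {
    /* Cast from the generic type to the specific type using the kind field. */
    if (record->kind == CUPTI_ACTIVITY_KIND_MEMCPY) {
      CUpti_ActivityMemcpy *memcpyRecord = (CUpti_ActivityMemcpy *)record;
      printf("MEMCPY %llu bytes\n", (unsigned long long)memcpyRecord->bytes);
    }
  }
  free(buffer);
}

void initTrace(void)
{
  cuptiActivityEnable(CUPTI_ACTIVITY_KIND_MEMCPY);
  cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted);
}

void finiTrace(void)
{
  /* Force delivery of any buffers that contain completed records. */
  cuptiActivityFlushAll(0);
}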

The activity_trace_async sample shows how to use the activity buffer API to collect a trace of CPU and GPU activity for a simple application.

1.3.1. SASS Source Correlation

While high-level languages for GPU programming like CUDA C offer a useful level of abstraction, convenience, and maintainability, they inherently hide some of the details of the execution on the hardware. It is sometimes helpful to analyze performance problems for a kernel at the assembly instruction level. Reading assembly language is tedious and challenging; CUPTI can help you to build the correlation between lines in your high-level source code and the executed assembly instructions.
Building SASS source correlation for a PC can be split into two parts:
  • Correlation of the PC to the SASS instruction - subscribe to any one of the CUPTI_CBID_RESOURCE_MODULE_LOADED, CUPTI_CBID_RESOURCE_MODULE_UNLOAD_STARTING, or CUPTI_CBID_RESOURCE_MODULE_PROFILED callbacks. The callback provides a CUpti_ModuleResourceData structure containing the CUDA binary. The binary can be disassembled using the nvdisasm utility that ships with the CUDA Toolkit. An application can have multiple functions and modules; to identify a function uniquely, every source-level activity record has a functionId field, which corresponds to a CUPTI_ACTIVITY_KIND_FUNCTION record carrying the unique module ID and the function's ID within the module.
  • Correlation of the SASS instruction to the CUDA source line - every source-level activity record has a sourceLocatorId field which uniquely maps to a record of kind CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR containing the line and file name information. Note that multiple PCs can correspond to a single source line.

When any source-level activity (global access, branch, PC sampling, etc.) is enabled, a source locator record is generated for the PCs that have source-level results. The record CUpti_ActivityInstructionCorrelation can be used along with source-level activities to generate the mapping of SASS assembly instructions to CUDA C source code for all the PCs of the function, not just the PCs that have source-level results. This can be enabled using the activity kind CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION.
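
A minimal sketch of this, assuming the buffering callbacks shown earlier are in place (field names per CUpti_ActivityInstructionCorrelation):

/* Enable instruction correlation alongside a source-level activity kind. */
cuptiActivityEnable(CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION);
...
/* In the bufferCompleted callback: */
if (record->kind == CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION) {
  CUpti_ActivityInstructionCorrelation *ic =
      (CUpti_ActivityInstructionCorrelation *)record;
  /* ic->functionId identifies the function, ic->pcOffset locates the SASS
     instruction within it, and ic->sourceLocatorId maps to the
     CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR record with the file and line. */
}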

The sass_source_map sample shows how to map SASS assembly instructions to CUDA C source.

1.3.2. PC Sampling

CUPTI supports device-wide sampling of the program counter (PC). PC sampling gives the number of samples for each source and assembly line, along with various stall reasons. Using this information, you can pinpoint the portions of your kernel that introduce latency, and the reason for that latency. Samples are taken in round-robin order for all active warps at a fixed number of cycles, regardless of whether the warp is issuing an instruction or not.

Devices with compute capability 6.0 and higher have a feature that gives latency reasons. Latency samples indicate the reasons for holes in the issue pipeline: while such a sample is collected, no instruction is issued in the respective warp scheduler, so the sample reports a latency reason. The latency reason will be one of the stall reasons listed in the enum CUpti_ActivityPCSamplingStallReason, except the stall reason CUPTI_ACTIVITY_PC_SAMPLING_STALL_NOT_SELECTED.

The activity record CUpti_ActivityPCSampling3, enabled using the activity kind CUPTI_ACTIVITY_KIND_PC_SAMPLING, outputs the stall reason along with the PC and other related information. The enum CUpti_ActivityPCSamplingStallReason lists all the stall reasons. The sampling period is configurable and can be tuned using the API cuptiActivityConfigurePCSampling. A wide range of sampling periods, from 2^5 to 2^31 cycles per sample, is supported. This can be controlled through the samplingPeriod2 field in the PC sampling configuration struct CUpti_ActivityPCSamplingConfig. The activity record CUpti_ActivityPCSamplingRecordInfo provides the total and dropped samples for each kernel profiled for PC sampling.
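
For example, a minimal configuration sketch (assuming context is the current CUcontext, that a non-zero samplingPeriod2 selects the fine-grained period, and with error checking omitted):

CUpti_ActivityPCSamplingConfig config;
config.size = sizeof(config);
config.samplingPeriod = 0;   /* assumption: 0 defers to samplingPeriod2 below */
config.samplingPeriod2 = 10; /* sample every 2^10 cycles; valid exponents are 5..31 */
cuptiActivityConfigurePCSampling(context, &config);
cuptiActivityEnable(CUPTI_ACTIVITY_KIND_PC_SAMPLING);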

This feature is available on devices with compute capability 5.2 and higher, excluding mobile devices.

The pc_sampling sample shows how to use these APIs to collect PC Sampling profiling information for a kernel.

1.3.4. OpenACC

On Linux x86_64, CUPTI supports collecting information for OpenACC applications using the OpenACC tools interface implementation of the PGI runtime. In addition to being available only on 64-bit Linux platforms, this feature requires PGI runtime version 15.7 or higher.
Activity records CUpti_ActivityOpenAccData, CUpti_ActivityOpenAccLaunch and CUpti_ActivityOpenAccOther are created, representing the three groups of callback events specified in the OpenACC tools interface. CUPTI_ACTIVITY_KIND_OPENACC_DATA, CUPTI_ACTIVITY_KIND_OPENACC_LAUNCH and CUPTI_ACTIVITY_KIND_OPENACC_OTHER can be enabled to collect the respective activity records.
Due to restrictions of the OpenACC tools interface, CUPTI cannot record OpenACC records from within the client application. Instead, a shared library that exports the acc_register_library function defined in the OpenACC tools interface specification must be implemented. Parameters passed into this function from the OpenACC runtime can be used to initialize CUPTI OpenACC measurement using cuptiOpenACCInitialize. Before starting the client application, the environment variable ACC_PROFLIB must be set to point to this shared library.
cuptiOpenACCInitialize is defined in cupti_openacc.h, which is included by cupti_activity.h. Since the CUPTI OpenACC header is only available on supported platforms, CUPTI clients must define CUPTI_OPENACC_SUPPORT when compiling.
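
A minimal sketch of such a shared library (compiled with CUPTI_OPENACC_SUPPORT defined) might look like this:

#include <cupti.h>

/* Exported from the shared library named by ACC_PROFLIB; the OpenACC
   runtime passes in its registration entry points, which are forwarded
   to CUPTI to initialize OpenACC measurement. */
void acc_register_library(void *profRegister, void *profUnregister,
                          void *profLookup)
{
  cuptiOpenACCInitialize(profRegister, profUnregister, profLookup);
}
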
The openacc_trace sample shows how to use CUPTI APIs for OpenACC data collection.

1.3.5. External Correlation

Starting with CUDA 8.0, CUPTI supports correlation of CUDA API activity records with external APIs, for example OpenACC, OpenMP, and MPI. The correlation associates CUPTI correlation IDs with IDs provided by the external API. Both IDs are stored in an activity record of type CUpti_ActivityExternalCorrelation.
CUPTI maintains a stack of external correlation IDs per CPU thread and per CUpti_ExternalCorrelationKind. Clients must use cuptiActivityPushExternalCorrelationId to push an external ID of a specific kind to this stack and cuptiActivityPopExternalCorrelationId to remove the latest ID. If a CUDA API activity record is generated while any CUpti_ExternalCorrelationKind-stack on the same CPU thread is non-empty, one CUpti_ActivityExternalCorrelation record per CUpti_ExternalCorrelationKind-stack is inserted into the activity buffer before the respective CUDA API activity record. The CUPTI client is responsible for tracking passed external API correlation IDs in order to eventually associate external API calls with CUDA API calls.
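
For illustration, a minimal sketch of wrapping a CUDA call with a custom external ID (assuming CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION has been enabled; dst, src, size, and stream are placeholders):

uint64_t externalId = 42;  /* an ID meaningful to the external API */
uint64_t lastId;
cuptiActivityPushExternalCorrelationId(
    CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, externalId);
/* Any CUDA API activity recorded here gets a matching
   CUpti_ActivityExternalCorrelation record inserted before it. */
cudaMemcpyAsync(dst, src, size, cudaMemcpyHostToDevice, stream);
cuptiActivityPopExternalCorrelationId(
    CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, &lastId);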
If both CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION and any of the CUPTI_ACTIVITY_KIND_OPENACC_* activity kinds are enabled, CUPTI will generate external correlation activity records for OpenACC with the external kind CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC.

1.4. CUPTI Callback API

The CUPTI Callback API allows you to register a callback into your own code. Your callback will be invoked when the application being profiled calls a CUDA runtime or driver function, or when certain events occur in the CUDA driver. The following terminology is used by the callback API.

Callback Domain
Callbacks are grouped into domains to make it easier to associate your callback functions with groups of related CUDA functions or events. There are currently five callback domains, as defined by CUpti_CallbackDomain: a domain for CUDA runtime functions, a domain for CUDA driver functions, a domain for CUDA resource tracking, a domain for CUDA synchronization notification, and a domain for NVIDIA Tools Extension (NVTX) functions.
Callback ID
Each callback is given a unique ID within the corresponding callback domain so that you can identify it within your callback function. The CUDA driver API IDs are defined in cupti_driver_cbid.h and the CUDA runtime API IDs are defined in cupti_runtime_cbid.h. Both of these headers are included for you when you include cupti.h. The CUDA resource callback IDs are defined by CUpti_CallbackIdResource and the CUDA synchronization callback IDs are defined by CUpti_CallbackIdSync.
Callback Function
Your callback function must be of type CUpti_CallbackFunc. This function type has two arguments that specify the callback domain and ID so that you know why the callback is occurring. The type also has a cbdata argument that is used to pass data specific to the callback.
Subscriber
A subscriber is used to associate each of your callback functions with one or more CUDA API functions. There can be at most one subscriber initialized with cuptiSubscribe() at any time. Before initializing a new subscriber, the existing subscriber must be finalized with cuptiUnsubscribe().

Each callback domain is described in detail below. Unless explicitly stated otherwise, calling any CUDA runtime or driver API from within a callback function is not supported and may cause the application to hang.

1.4.1. Driver and Runtime API Callbacks

Using the callback API with the CUPTI_CB_DOMAIN_DRIVER_API or CUPTI_CB_DOMAIN_RUNTIME_API domains, you can associate a callback function with one or more CUDA API functions. When those CUDA functions are invoked in the application, your callback function is invoked as well. For these domains, the cbdata argument to your callback function will be of the type CUpti_CallbackData.

It is legal to call cudaThreadSynchronize(), cudaDeviceSynchronize(), cudaStreamSynchronize(), cuCtxSynchronize(), and cuStreamSynchronize() from within a driver or runtime API callback function.

The following code shows a typical sequence used to associate a callback function with one or more CUDA API functions. To simplify the presentation, error-checking code has been removed.

  CUpti_SubscriberHandle subscriber;
  MyDataStruct *my_data = ...;
  ...
  /* Register my_callback, passing my_data through as the user data pointer. */
  cuptiSubscribe(&subscriber, (CUpti_CallbackFunc)my_callback, my_data);
  /* Enable callbacks for all CUDA runtime API functions. */
  cuptiEnableDomain(1, subscriber, CUPTI_CB_DOMAIN_RUNTIME_API);

First, cuptiSubscribe is used to initialize a subscriber with the my_callback callback function. Next, cuptiEnableDomain is used to associate that callback with all the CUDA runtime API functions. Using this code sequence will cause my_callback to be called twice each time any of the CUDA runtime API functions are invoked, once on entry to the CUDA function and once just before exit from the CUDA function. CUPTI callback API functions cuptiEnableCallback and cuptiEnableAllDomains can also be used to associate CUDA API functions with a callback (see reference below for more information).

The following code shows a typical callback function.

void CUPTIAPI
my_callback(void *userdata, CUpti_CallbackDomain domain,
            CUpti_CallbackId cbid, const void *cbdata)
{
  const CUpti_CallbackData *cbInfo = (const CUpti_CallbackData *)cbdata;
  MyDataStruct *my_data = (MyDataStruct *)userdata;

  if ((domain == CUPTI_CB_DOMAIN_RUNTIME_API) &&
      (cbid == CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy_v3020)) {
    if (cbInfo->callbackSite == CUPTI_API_ENTER) {
      cudaMemcpy_v3020_params *funcParams =
          (cudaMemcpy_v3020_params *)(cbInfo->functionParams);

      size_t count = funcParams->count;
      enum cudaMemcpyKind kind = funcParams->kind;
      ...
    }
  ...

In your callback function, you use the CUpti_CallbackDomain and CUpti_CallbackId parameters to determine which CUDA API function invocation is causing this callback. In the example above, we are checking for the CUDA runtime cudaMemcpy function. The cbdata parameter holds a structure of useful information that can be used within the callback. In this case we use the callbackSite member of the structure to detect that the callback is occurring on entry to cudaMemcpy, and we use the functionParams member to access the parameters that were passed to cudaMemcpy. To access the parameters we first cast functionParams to a structure type corresponding to the cudaMemcpy function. These parameter structures are contained in generated_cuda_runtime_api_meta.h, generated_cuda_meta.h, and a number of other files. When possible, these files are included for you by cupti.h.

The callback_event and callback_timestamp samples described on the samples page both show how to use the callback API for the driver and runtime API domains.

1.4.2. Resource Callbacks

Using the callback API with the CUPTI_CB_DOMAIN_RESOURCE domain, you can associate a callback function with some CUDA resource creation and destruction events. For example, when a CUDA context is created, your callback function will be invoked with a callback ID equal to CUPTI_CBID_RESOURCE_CONTEXT_CREATED. For this domain, the cbdata argument to your callback function will be of the type CUpti_ResourceData.

Note that the APIs cuptiActivityFlush and cuptiActivityFlushAll will result in a deadlock when called from the stream-destroy starting callback, identified by the callback ID CUPTI_CBID_RESOURCE_STREAM_DESTROY_STARTING.
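
For illustration, a minimal sketch of a resource callback that reacts to context creation (the function name resourceCallback is a placeholder):

void CUPTIAPI
resourceCallback(void *userdata, CUpti_CallbackDomain domain,
                 CUpti_CallbackId cbid, const void *cbdata)
{
  if (domain == CUPTI_CB_DOMAIN_RESOURCE &&
      cbid == CUPTI_CBID_RESOURCE_CONTEXT_CREATED) {
    const CUpti_ResourceData *resData = (const CUpti_ResourceData *)cbdata;
    /* resData->context is the newly created context; this is a convenient
       point to, e.g., enable context-scoped activity collection. */
  }
}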

1.4.3. Synchronization Callbacks

Using the callback API with the CUPTI_CB_DOMAIN_SYNCHRONIZE domain, you can associate a callback function with CUDA context and stream synchronizations. For example, when a CUDA context is synchronized, your callback function will be invoked with a callback ID equal to CUPTI_CBID_SYNCHRONIZE_CONTEXT_SYNCHRONIZED. For this domain, the cbdata argument to your callback function will be of the type CUpti_SynchronizeData.
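
The handling mirrors the resource domain; a brief sketch, inside a callback function like the ones shown earlier:

if (domain == CUPTI_CB_DOMAIN_SYNCHRONIZE &&
    cbid == CUPTI_CBID_SYNCHRONIZE_CONTEXT_SYNCHRONIZED) {
  const CUpti_SynchronizeData *syncData = (const CUpti_SynchronizeData *)cbdata;
  /* syncData->context identifies the synchronized context; a convenient
     point to, e.g., read sampled event values. */
}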

1.4.4. NVIDIA Tools Extension Callbacks

Using the callback API with the CUPTI_CB_DOMAIN_NVTX domain, you can associate a callback function with NVIDIA Tools Extension (NVTX) API functions. When an NVTX function is invoked in the application, your callback function is invoked as well. For these domains, the cbdata argument to your callback function will be of the type CUpti_NvtxData.

The NVTX library has its own convention for discovering the profiling library that will provide the implementation of the NVTX callbacks. To receive callbacks, you must set the NVTX environment variables appropriately so that when the application calls an NVTX function, your profiling library receives the callbacks. The following code sequence shows a typical initialization sequence to enable NVTX callbacks and activity records.
/* Set env so CUPTI-based profiling library loads on first nvtx call. */
char *inj32_path = "/path/to/32-bit/version/of/cupti/based/profiling/library";
char *inj64_path = "/path/to/64-bit/version/of/cupti/based/profiling/library";
setenv("NVTX_INJECTION32_PATH", inj32_path, 1);
setenv("NVTX_INJECTION64_PATH", inj64_path, 1);

The following code shows a typical sequence used to associate a callback function with one or more NVTX functions. To simplify the presentation, error-checking code has been removed.

CUpti_SubscriberHandle subscriber;
MyDataStruct *my_data = ...;
...
/* Register my_callback, passing my_data through as the user data pointer. */
cuptiSubscribe(&subscriber, (CUpti_CallbackFunc)my_callback, my_data);
/* Enable callbacks for all NVTX functions. */
cuptiEnableDomain(1, subscriber, CUPTI_CB_DOMAIN_NVTX);

First, cuptiSubscribe is used to initialize a subscriber with the my_callback callback function. Next, cuptiEnableDomain is used to associate that callback with all the NVTX functions. Using this code sequence will cause my_callback to be called once each time any of the NVTX functions are invoked. CUPTI callback API functions cuptiEnableCallback and cuptiEnableAllDomains can also be used to associate NVTX API functions with a callback (see reference below for more information).

The following code shows a typical callback function.

void CUPTIAPI
my_callback(void *userdata, CUpti_CallbackDomain domain,
            CUpti_CallbackId cbid, const void *cbdata)
{
  const CUpti_NvtxData *nvtxInfo = (const CUpti_NvtxData *)cbdata;
  MyDataStruct *my_data = (MyDataStruct *)userdata;

  if ((domain == CUPTI_CB_DOMAIN_NVTX) &&
      (cbid == NVTX_CBID_CORE_NameOsThreadA)) {
    nvtxNameOsThreadA_params *params =
        (nvtxNameOsThreadA_params *)nvtxInfo->functionParams;
    ...
  }
  ...

In your callback function, you use the CUpti_CallbackDomain and CUpti_CallbackId parameters to determine which NVTX API function invocation is causing this callback. In the example above, we are checking for the nvtxNameOsThreadA function. The cbdata parameter holds a structure of useful information that can be used within the callback. In this case, we use the functionParams member to access the parameters that were passed to nvtxNameOsThreadA. To access the parameters we first cast functionParams to a structure type corresponding to the nvtxNameOsThreadA function. These parameter structures are contained in generated_nvtx_meta.h.

1.5. CUPTI Event API

The CUPTI Event API allows you to query, configure, start, stop, and read the event counters on a CUDA-enabled device. The following terminology is used by the event API.

Event
An event is a countable activity, action, or occurrence on a device.
Event ID
Each event is assigned a unique identifier. A named event will represent the same activity, action, or occurrence on all device types. But the named event may have different IDs on different device families. Use cuptiEventGetIdFromName to get the ID for a named event on a particular device.
Event Category
Each event is placed in one of the categories defined by CUpti_EventCategory. The category indicates the general type of activity, action, or occurrence measured by the event.
Event Domain
A device exposes one or more event domains. Each event domain represents a group of related events available on that device. A device may have multiple instances of a domain, indicating that the device can simultaneously record multiple instances of each event within that domain.
Event Group
An event group is a collection of events that are managed together. The number and type of events that can be added to an event group are subject to device-specific limits. At any given time, a device may be configured to count events from a limited number of event groups. All events in an event group must belong to the same event domain.
Event Group Set
An event group set is a collection of event groups that can be enabled at the same time. Event group sets are created by cuptiEventGroupSetsCreate and cuptiMetricCreateEventGroupSets.

You can determine the events available on a device using the cuptiDeviceEnumEventDomains and cuptiEventDomainEnumEvents functions. The cupti_query sample described on the samples page shows how to use these functions. You can also enumerate all the CUPTI events available on any device using the cuptiEnumEventDomains function.
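
For example, a sketch that enumerates the event domains of a device (assuming device is an initialized CUdevice; error checking omitted):

uint32_t numDomains;
cuptiDeviceGetNumEventDomains(device, &numDomains);
size_t size = numDomains * sizeof(CUpti_EventDomainID);
CUpti_EventDomainID *domains = (CUpti_EventDomainID *)malloc(size);
cuptiDeviceEnumEventDomains(device, &size, domains);
for (uint32_t i = 0; i < numDomains; i++) {
  uint32_t numEvents;
  cuptiEventDomainGetNumEvents(domains[i], &numEvents);
  /* cuptiEventDomainEnumEvents can then list the event IDs in domains[i]. */
}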

Configuring and reading event counts requires the following steps. First, select your event collection mode. If you want to count events that occur during the execution of a kernel, use cuptiSetEventCollectionMode to set mode CUPTI_EVENT_COLLECTION_MODE_KERNEL. If you want to continuously sample the event counts, use mode CUPTI_EVENT_COLLECTION_MODE_CONTINUOUS. Next, determine the names of the events that you want to count, and then use the cuptiEventGroupCreate, cuptiEventGetIdFromName, and cuptiEventGroupAddEvent functions to create and initialize an event group with those events. If you are unable to add all the events to a single event group, you will need to create multiple event groups. Alternatively, you can use the cuptiEventGroupSetsCreate function to automatically create the event group(s) required for a set of events.
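
A minimal setup sketch (assuming context and device are valid, and that the event name inst_executed exists on the device; error checking omitted):

CUpti_EventGroup eventGroup;
CUpti_EventID eventId;
/* Count events only while kernels execute. */
cuptiSetEventCollectionMode(context, CUPTI_EVENT_COLLECTION_MODE_KERNEL);
cuptiEventGroupCreate(context, &eventGroup, 0);
cuptiEventGetIdFromName(device, "inst_executed", &eventId);
cuptiEventGroupAddEvent(eventGroup, eventId);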

To begin counting a set of events, enable the event group or groups that contain those events by using the cuptiEventGroupEnable function. If your events are contained in multiple event groups, you may be unable to enable all of the event groups at the same time, due to device limitations. In this case, you can gather the events across multiple executions of the application, or you can enable kernel replay. If you enable kernel replay using cuptiEnableKernelReplayMode, you will be able to enable any number of event groups, and all the contained events will be collected.

Use the cuptiEventGroupReadEvent and/or cuptiEventGroupReadAllEvents functions to read the event values. When you are done collecting events, use the cuptiEventGroupDisable function to stop counting of the events contained in an event group. The callback_event sample described on the samples page shows how to use these functions to create, enable, and disable event groups, and how to read event counts.

Note: For event collection mode CUPTI_EVENT_COLLECTION_MODE_KERNEL, events or metrics collection may significantly change the overall performance characteristics of the application because all kernel executions that occur between the cuptiEventGroupEnable and cuptiEventGroupDisable calls are serialized on the GPU. This can be avoided by using mode CUPTI_EVENT_COLLECTION_MODE_CONTINUOUS and restricting profiling to events and metrics that can be collected in a single pass.
Note: All the events and metrics except NVLink metrics are collected at the context level irrespective of the event collection mode. That is, events or metrics can be attributed to the context being profiled and values can be accurately collected when multiple contexts are executing on the GPU. NVLink metrics are collected at device level for all event collection modes.

In a system with multiple GPUs, events can be collected simultaneously on all the GPUs; that is, event profiling doesn't enforce any serialization of work across GPUs. The event_multi_gpu sample shows how to use the CUPTI event and CUDA APIs on such setups.

1.5.1. Collecting Kernel Execution Events

A common use of the event API is to count a set of events during the execution of a kernel (as demonstrated by the callback_event sample). The following code shows a typical callback used for this purpose. Assume that the callback was enabled only for a kernel launch using the CUDA runtime (i.e. by cuptiEnableCallback(1, subscriber, CUPTI_CB_DOMAIN_RUNTIME_API, CUPTI_RUNTIME_TRACE_CBID_cudaLaunch_v3020)). To simplify the presentation, error-checking code has been removed.

static void CUPTIAPI
getEventValueCallback(void *userdata,
                      CUpti_CallbackDomain domain,
                      CUpti_CallbackId cbid,
                      const void *cbdata)
{
  const CUpti_CallbackData *cbData =
                (const CUpti_CallbackData *)cbdata;

  if (cbData->callbackSite == CUPTI_API_ENTER) {
    cudaDeviceSynchronize();
    cuptiSetEventCollectionMode(cbData->context,
                                CUPTI_EVENT_COLLECTION_MODE_KERNEL);
    cuptiEventGroupEnable(eventGroup);
  }
    
  if (cbData->callbackSite == CUPTI_API_EXIT) {
    cudaDeviceSynchronize();
    cuptiEventGroupReadEvent(eventGroup, 
                             CUPTI_EVENT_READ_FLAG_NONE, 
                             eventId, 
                             &bytesRead, &eventVal);
      
    cuptiEventGroupDisable(eventGroup);
  }
}

Two synchronization points are used to ensure that events are counted only for the execution of the kernel. If the application contains other threads that launch kernels, then additional thread-level synchronization must also be introduced to ensure that those threads do not launch kernels while the callback is collecting events. When the cudaLaunch API is entered (that is, before the kernel is actually launched on the device), cudaDeviceSynchronize is used to wait until the GPU is idle. The event collection mode is set to CUPTI_EVENT_COLLECTION_MODE_KERNEL so that the event counters are automatically started and stopped just before and after the kernel executes. Then event collection is enabled with cuptiEventGroupEnable.

When the cudaLaunch API is exited (that is, after the kernel is queued for execution on the GPU) another cudaDeviceSynchronize is used to cause the CPU thread to wait for the kernel to finish execution. Finally, the event counts are read with cuptiEventGroupReadEvent.

1.5.2. Sampling Events

The event API can also be used to sample event values while a kernel or kernels are executing (as demonstrated by the event_sampling sample). The sample shows one possible way to perform the sampling. The event collection mode is set to CUPTI_EVENT_COLLECTION_MODE_CONTINUOUS so that the event counters run continuously. Two threads are used in event_sampling: one thread schedules the kernels and memcpys that perform the computation, while another thread wakes up periodically to sample an event counter. In this sample there is no correlation of the event samples with what is happening on the GPU. To get some coarse correlation, you can use cuptiDeviceGetTimestamp to collect the GPU timestamp at the time of the sample and also at other interesting points in your application.

1.6. CUPTI Metric API

The CUPTI Metric API allows you to collect application metrics calculated from one or more event values. The following terminology is used by the metric API.

Metric
A characteristic of an application that is calculated from one or more event values.
Metric ID
Each metric is assigned a unique identifier. A named metric will represent the same characteristic on all device types. But the named metric may have different IDs on different device families. Use cuptiMetricGetIdFromName to get the ID for a named metric on a particular device.
Metric Category
Each metric is placed in one of the categories defined by CUpti_MetricCategory. The category indicates the general type of the characteristic measured by the metric.
Metric Property
Each metric is calculated from input values. These input values can be events or properties of the device or system. The available properties are defined by CUpti_MetricPropertyID.
Metric Value
Each metric has a value that represents one of the kinds defined by CUpti_MetricValueKind. For each value kind, there is a corresponding member of the CUpti_MetricValue union that is used to hold the metric's value.

The tables included in this section list the metrics available for each device, as determined by the device's compute capability. You can also determine the metrics available on a device using the cuptiDeviceEnumMetrics function. The cupti_query sample described on the samples page shows how to use this function. You can also enumerate all the CUPTI metrics available on any device using the cuptiEnumMetrics function.

CUPTI provides two functions for calculating a metric value. cuptiMetricGetValue2 can be used to calculate a metric value when the device is not available. All required event values and metric properties must be provided by the caller. cuptiMetricGetValue can be used to calculate a metric value when the device is available (as a CUdevice object). All required event values must be provided by the caller but CUPTI will determine the appropriate property values from the CUdevice object.

Configuring and calculating metric values requires the following steps. First, determine the name of the metric that you want to collect, and then use cuptiMetricGetIdFromName to get the metric ID. Use cuptiMetricEnumEvents to get the events required to calculate the metric, and follow the instructions in the CUPTI Event API section to create the event groups for those events. When creating event groups in this manner, it is important to use the result of cuptiMetricGetRequiredEventGroupSets to properly group together events that must be collected in the same pass, to ensure proper metric calculation.

Alternatively, you can use the cuptiMetricCreateEventGroupSets function to automatically create the event group(s) required for a metric's events. When using this function, events will be grouped as required to most accurately calculate the metric; as a result, it is not necessary to use cuptiMetricGetRequiredEventGroupSets.
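
For example, a sketch using the metric name ipc (assuming context and device are valid; error checking omitted):

CUpti_MetricID metricId;
CUpti_EventGroupSets *passes;
cuptiMetricGetIdFromName(device, "ipc", &metricId);
cuptiMetricCreateEventGroupSets(context, sizeof(metricId), &metricId, &passes);
/* passes->numSets is the number of passes required; during pass i, all
   event groups in passes->sets[i] must be enabled together. */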

If you are using cuptiMetricGetValue2 then you must also collect the required metric property values using cuptiMetricEnumProperties.

Collect event counts as described in the CUPTI Event API section, and then use either cuptiMetricGetValue or cuptiMetricGetValue2 to calculate the metric value from the collected event and property values. The callback_metric sample described on the samples page shows how to use these functions to collect event values and calculate a metric value using cuptiMetricGetValue. Note that, as shown in the example, you should collect event counts from all domain instances and normalize the counts to get the most accurate metric values. It is necessary to normalize the event counts because the number of event counter instances varies by device and by the event being counted.

For example, a device might have 8 multiprocessors but only have event counters for 4 of the multiprocessors, and might have 3 memory units and only have events counters for one memory unit. When calculating a metric that requires a multiprocessor event and a memory unit event, the 4 multiprocessor counters should be summed and multiplied by 2 to normalize the event count across the entire device. Similarly, the one memory unit counter should be multiplied by 3 to normalize the event count across the entire device. The normalized values can then be passed to cuptiMetricGetValue or cuptiMetricGetValue2 to calculate the metric value.
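
A sketch of this normalization, assuming domainId is the event's domain and eventValues holds one value per profiled instance (hypothetical variable names; error checking omitted):

size_t size = sizeof(uint32_t);
uint32_t profiled, total;
/* Number of domain instances with counters vs. total instances on the device. */
cuptiDeviceGetEventDomainAttribute(device, domainId,
    CUPTI_EVENT_DOMAIN_ATTR_INSTANCE_COUNT, &size, &profiled);
cuptiDeviceGetEventDomainAttribute(device, domainId,
    CUPTI_EVENT_DOMAIN_ATTR_TOTAL_INSTANCE_COUNT, &size, &total);

uint64_t sum = 0;
for (uint32_t i = 0; i < profiled; i++)
  sum += eventValues[i];
/* e.g. sum of 4 multiprocessor counters scaled by 8/4 for an 8-SM device */
uint64_t normalized = (sum * total) / profiled;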

As described, the normalization assumes the kernel executes a sufficient number of blocks to completely load the device. If the kernel has only a small number of blocks, normalizing across the entire device may skew the result.

1.6.1. Metrics Reference

This section contains detailed descriptions of the metrics that can be collected by CUPTI. A scope value of "Single-context" indicates that the metric can only be accurately collected when a single context (CUDA or graphics) is executing on the GPU. A scope value of "Multi-context" indicates that the metric can be accurately collected when multiple contexts are executing on the GPU. A scope value of "Device" indicates that the metric will be collected at the device level, that is, it will include values for all the contexts executing on the GPU. The events for these metrics can be collected at the device level using CUPTI_EVENT_COLLECTION_MODE_CONTINUOUS. When these metrics are collected for a kernel using CUPTI_EVENT_COLLECTION_MODE_KERNEL, they exhibit the behavior of "Single-context". Note that NVLink metrics collected for kernel mode also exhibit the behavior of "Single-context".

1.6.1.1. Metrics for Capability 3.x

Devices with compute capability 3.x implement the metrics shown in the following table. Note that for some metrics the "Multi-context" scope is supported only for specific devices. Such metrics are marked with "Multi-context*" under the "Scope" column. Refer to the note at the bottom of the table.

Table 1. Capability 3.x Metrics
Metric Name Description Scope
achieved_occupancy Ratio of the average active warps per active cycle to the maximum number of warps supported on a multiprocessor Multi-context
alu_fu_utilization The utilization level of the multiprocessor function units that execute integer and floating-point arithmetic instructions on a scale of 0 to 10 Multi-context
atomic_replay_overhead Average number of replays due to atomic and reduction bank conflicts for each instruction executed Multi-context
atomic_throughput Global memory atomic and reduction throughput Multi-context
atomic_transactions Global memory atomic and reduction transactions Multi-context
atomic_transactions_per_request Average number of global memory atomic and reduction transactions performed for each atomic and reduction instruction Multi-context
branch_efficiency Ratio of non-divergent branches to total branches expressed as percentage. This is available for compute capability 3.0. Multi-context
cf_executed Number of executed control-flow instructions Multi-context
cf_fu_utilization The utilization level of the multiprocessor function units that execute control-flow instructions on a scale of 0 to 10 Multi-context
cf_issued Number of issued control-flow instructions Multi-context
dram_read_throughput Device memory read throughput. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
dram_read_transactions Device memory read transactions. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
dram_utilization The utilization level of the device memory relative to the peak utilization on a scale of 0 to 10 Multi-context*
dram_write_throughput Device memory write throughput. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
dram_write_transactions Device memory write transactions. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
ecc_throughput ECC throughput from L2 to DRAM. This is available for compute capability 3.5 and 3.7. Multi-context*
ecc_transactions Number of ECC transactions between L2 and DRAM. This is available for compute capability 3.5 and 3.7. Multi-context*
eligible_warps_per_cycle Average number of warps that are eligible to issue per active cycle Multi-context
flop_count_dp Number of double-precision floating-point operations executed by non-predicated threads (add, multiply and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. Multi-context
flop_count_dp_add Number of double-precision floating-point add operations executed by non-predicated threads Multi-context
flop_count_dp_fma Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_dp_mul Number of double-precision floating-point multiply operations executed by non-predicated threads Multi-context
flop_count_sp Number of single-precision floating-point operations executed by non-predicated threads (add, multiply and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. The count does not include special operations. Multi-context
flop_count_sp_add Number of single-precision floating-point add operations executed by non-predicated threads Multi-context
flop_count_sp_fma Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_sp_mul Number of single-precision floating-point multiply operations executed by non-predicated threads Multi-context
flop_count_sp_special Number of single-precision floating-point special operations executed by non-predicated threads Multi-context
flop_dp_efficiency Ratio of achieved to peak double-precision floating-point operations Multi-context
flop_sp_efficiency Ratio of achieved to peak single-precision floating-point operations Multi-context
gld_efficiency Ratio of requested global memory load throughput to required global memory load throughput expressed as percentage Multi-context*
gld_requested_throughput Requested global memory load throughput Multi-context
gld_throughput Global memory load throughput Multi-context*
gld_transactions Number of global memory load transactions Multi-context*
gld_transactions_per_request Average number of global memory load transactions performed for each global memory load Multi-context*
global_cache_replay_overhead Average number of replays due to global memory cache misses for each instruction executed Multi-context
global_replay_overhead Average number of replays due to global memory cache misses Multi-context
gst_efficiency Ratio of requested global memory store throughput to required global memory store throughput expressed as percentage Multi-context*
gst_requested_throughput Requested global memory store throughput Multi-context
gst_throughput Global memory store throughput Multi-context*
gst_transactions Number of global memory store transactions Multi-context*
gst_transactions_per_request Average number of global memory store transactions performed for each global memory store Multi-context*
inst_bit_convert Number of bit-conversion instructions executed by non-predicated threads Multi-context
inst_compute_ld_st Number of compute load/store instructions executed by non-predicated threads Multi-context
inst_control Number of control-flow instructions executed by non-predicated threads (jump, branch, etc.) Multi-context
inst_executed The number of instructions executed Multi-context
inst_fp_32 Number of single-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_64 Number of double-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_integer Number of integer instructions executed by non-predicated threads Multi-context
inst_inter_thread_communication Number of inter-thread communication instructions executed by non-predicated threads Multi-context
inst_issued The number of instructions issued Multi-context
inst_misc Number of miscellaneous instructions executed by non-predicated threads Multi-context
inst_per_warp Average number of instructions executed by each warp Multi-context
inst_replay_overhead Average number of replays for each instruction executed Multi-context
ipc Instructions executed per cycle Multi-context
ipc_instance Instructions executed per cycle for a single multiprocessor Multi-context
issue_slot_utilization Percentage of issue slots that issued at least one instruction, averaged across all cycles Multi-context
issue_slots The number of issue slots used Multi-context
issued_ipc Instructions issued per cycle Multi-context
l1_cache_global_hit_rate Hit rate in L1 cache for global loads Multi-context*
l1_cache_local_hit_rate Hit rate in L1 cache for local loads and stores Multi-context*
l1_shared_utilization The utilization level of the L1/shared memory relative to peak utilization on a scale of 0 to 10. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_atomic_throughput Memory read throughput seen at L2 cache for atomic and reduction requests Multi-context*
l2_atomic_transactions Memory read transactions seen at L2 cache for atomic and reduction requests Multi-context*
l2_l1_read_hit_rate Hit rate at L2 cache for all read requests from L1 cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_l1_read_throughput Memory read throughput seen at L2 cache for read requests from L1 cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_l1_read_transactions Memory read transactions seen at L2 cache for all read requests from L1 cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_l1_write_throughput Memory write throughput seen at L2 cache for write requests from L1 cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_l1_write_transactions Memory write transactions seen at L2 cache for all write requests from L1 cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_read_throughput Memory read throughput seen at L2 cache for all read requests Multi-context*
l2_read_transactions Memory read transactions seen at L2 cache for all read requests Multi-context*
l2_tex_read_transactions Memory read transactions seen at L2 cache for read requests from the texture cache Multi-context*
l2_tex_read_hit_rate Hit rate at L2 cache for all read requests from texture cache. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
l2_tex_read_throughput Memory read throughput seen at L2 cache for read requests from the texture cache Multi-context*
l2_utilization The utilization level of the L2 cache relative to the peak utilization on a scale of 0 to 10 Multi-context*
l2_write_throughput Memory write throughput seen at L2 cache for all write requests Multi-context*
l2_write_transactions Memory write transactions seen at L2 cache for all write requests Multi-context*
ldst_executed Number of executed local, global, shared and texture memory load and store instructions Multi-context
ldst_fu_utilization The utilization level of the multiprocessor function units that execute global, local and shared memory instructions on a scale of 0 to 10 Multi-context
ldst_issued Number of issued local, global, shared and texture memory load and store instructions Multi-context
local_load_throughput Local memory load throughput Multi-context*
local_load_transactions Number of local memory load transactions Multi-context*
local_load_transactions_per_request Average number of local memory load transactions performed for each local memory load Multi-context*
local_memory_overhead Ratio of local memory traffic to total memory traffic between the L1 and L2 caches expressed as percentage. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
local_replay_overhead Average number of replays due to local memory accesses for each instruction executed Multi-context
local_store_throughput Local memory store throughput Multi-context*
local_store_transactions Number of local memory store transactions Multi-context*
local_store_transactions_per_request Average number of local memory store transactions performed for each local memory store Multi-context*
nc_cache_global_hit_rate Hit rate in non coherent cache for global loads Multi-context*
nc_gld_efficiency Ratio of requested non coherent global memory load throughput to required non coherent global memory load throughput expressed as percentage Multi-context*
nc_gld_requested_throughput Requested throughput for global memory loaded via non-coherent cache Multi-context
nc_gld_throughput Non coherent global memory load throughput Multi-context*
nc_l2_read_throughput Memory read throughput for non coherent global read requests seen at L2 cache Multi-context*
nc_l2_read_transactions Memory read transactions seen at L2 cache for non coherent global read requests Multi-context*
shared_efficiency Ratio of requested shared memory throughput to required shared memory throughput expressed as percentage Multi-context*
shared_load_throughput Shared memory load throughput Multi-context*
shared_load_transactions Number of shared memory load transactions Multi-context*
shared_load_transactions_per_request Average number of shared memory load transactions performed for each shared memory load Multi-context*
shared_replay_overhead Average number of replays due to shared memory conflicts for each instruction executed Multi-context
shared_store_throughput Shared memory store throughput Multi-context*
shared_store_transactions Number of shared memory store transactions Multi-context*
shared_store_transactions_per_request Average number of shared memory store transactions performed for each shared memory store Multi-context*
sm_efficiency The percentage of time at least one warp is active on a multiprocessor averaged over all multiprocessors on the GPU Multi-context*
sm_efficiency_instance The percentage of time at least one warp is active on a specific multiprocessor Multi-context*
stall_constant_memory_dependency Percentage of stalls occurring because of immediate constant cache miss. This is available for compute capability 3.2, 3.5 and 3.7. Multi-context
stall_exec_dependency Percentage of stalls occurring because an input required by the instruction is not yet available Multi-context
stall_inst_fetch Percentage of stalls occurring because the next assembly instruction has not yet been fetched Multi-context
stall_memory_dependency Percentage of stalls occurring because a memory operation cannot be performed due to the required resources not being available or fully utilized, or because too many requests of a given type are outstanding. Multi-context
stall_memory_throttle Percentage of stalls occurring because of memory throttle. Multi-context
stall_not_selected Percentage of stalls occurring because warp was not selected. Multi-context
stall_other Percentage of stalls occurring due to miscellaneous reasons Multi-context
stall_pipe_busy Percentage of stalls occurring because a compute operation cannot be performed because the compute pipeline is busy. This is available for compute capability 3.2, 3.5 and 3.7. Multi-context
stall_sync Percentage of stalls occurring because the warp is blocked at a __syncthreads() call Multi-context
stall_texture Percentage of stalls occurring because the texture sub-system is fully utilized or has too many outstanding requests Multi-context
sysmem_read_throughput System memory read throughput. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
sysmem_read_transactions System memory read transactions. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
sysmem_read_utilization The read utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context
sysmem_utilization The utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
sysmem_write_throughput System memory write throughput. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
sysmem_write_transactions System memory write transactions. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context*
sysmem_write_utilization The write utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 3.0, 3.5 and 3.7. Multi-context
tex_cache_hit_rate Texture cache hit rate Multi-context*
tex_cache_throughput Texture cache throughput Multi-context*
tex_cache_transactions Texture cache read transactions Multi-context*
tex_fu_utilization The utilization level of the multiprocessor function units that execute texture instructions on a scale of 0 to 10 Multi-context
tex_utilization The utilization level of the texture cache relative to the peak utilization on a scale of 0 to 10 Multi-context*
warp_execution_efficiency Ratio of the average active threads per warp to the maximum number of threads per warp supported on a multiprocessor expressed as percentage Multi-context
warp_nonpred_execution_efficiency Ratio of the average active threads per warp executing non-predicated instructions to the maximum number of threads per warp supported on a multiprocessor expressed as percentage Multi-context

* The "Multi-context" scope for this metric is supported only for devices with compute capability 3.0, 3.5 and 3.7.

1.6.1.2. Metrics for Capability 5.x

Devices with compute capability 5.x implement the metrics shown in the following table. Note that for some metrics the "Multi-context" scope is supported only for specific devices. Such metrics are marked with "Multi-context*" under the "Scope" column. Refer to the note at the bottom of the table.

Table 2. Capability 5.x Metrics
Metric Name Description Scope
achieved_occupancy Ratio of the average active warps per active cycle to the maximum number of warps supported on a multiprocessor Multi-context
atomic_transactions Global memory atomic and reduction transactions Multi-context
atomic_transactions_per_request Average number of global memory atomic and reduction transactions performed for each atomic and reduction instruction Multi-context
branch_efficiency Ratio of non-divergent branches to total branches expressed as percentage Multi-context
cf_executed Number of executed control-flow instructions Multi-context
cf_fu_utilization The utilization level of the multiprocessor function units that execute control-flow instructions on a scale of 0 to 10 Multi-context
cf_issued Number of issued control-flow instructions Multi-context
double_precision_fu_utilization The utilization level of the multiprocessor function units that execute double-precision floating-point instructions on a scale of 0 to 10 Multi-context
dram_read_bytes Total bytes read from DRAM to L2 cache. This is available for compute capability 5.0 and 5.2. Multi-context*
dram_read_throughput Device memory read throughput. This is available for compute capability 5.0 and 5.2. Multi-context*
dram_read_transactions Device memory read transactions. This is available for compute capability 5.0 and 5.2. Multi-context*
dram_utilization The utilization level of the device memory relative to the peak utilization on a scale of 0 to 10 Multi-context*
dram_write_bytes Total bytes written from L2 cache to DRAM. This is available for compute capability 5.0 and 5.2. Multi-context*
dram_write_throughput Device memory write throughput. This is available for compute capability 5.0 and 5.2. Multi-context*
dram_write_transactions Device memory write transactions. This is available for compute capability 5.0 and 5.2. Multi-context*
ecc_throughput ECC throughput from L2 to DRAM. This is available for compute capability 5.0 and 5.2. Multi-context*
ecc_transactions Number of ECC transactions between L2 and DRAM. This is available for compute capability 5.0 and 5.2. Multi-context*
eligible_warps_per_cycle Average number of warps that are eligible to issue per active cycle Multi-context
flop_count_dp Number of double-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. Multi-context
flop_count_dp_add Number of double-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_dp_fma Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_dp_mul Number of double-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_hp Number of half-precision floating-point operations executed by non-predicated threads (add, multiply and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. This is available for compute capability 5.3. Multi-context*
flop_count_hp_add Number of half-precision floating-point add operations executed by non-predicated threads. This is available for compute capability 5.3. Multi-context*
flop_count_hp_fma Number of half-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. This is available for compute capability 5.3. Multi-context*
flop_count_hp_mul Number of half-precision floating-point multiply operations executed by non-predicated threads. This is available for compute capability 5.3. Multi-context*
flop_count_sp Number of single-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. The count does not include special operations. Multi-context
flop_count_sp_add Number of single-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_sp_fma Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_sp_mul Number of single-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_sp_special Number of single-precision floating-point special operations executed by non-predicated threads. Multi-context
flop_dp_efficiency Ratio of achieved to peak double-precision floating-point operations Multi-context
flop_hp_efficiency Ratio of achieved to peak half-precision floating-point operations. This is available for compute capability 5.3. Multi-context*
flop_sp_efficiency Ratio of achieved to peak single-precision floating-point operations Multi-context
gld_efficiency Ratio of requested global memory load throughput to required global memory load throughput expressed as percentage. Multi-context*
gld_requested_throughput Requested global memory load throughput Multi-context
gld_throughput Global memory load throughput Multi-context*
gld_transactions Number of global memory load transactions Multi-context*
gld_transactions_per_request Average number of global memory load transactions performed for each global memory load. Multi-context*
global_atomic_requests Total number of global atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
global_hit_rate Hit rate for global loads in unified L1/TEX cache. Metric value may be wrong if malloc is used in the kernel. Multi-context*
global_load_requests Total number of global load requests from Multiprocessor Multi-context
global_reduction_requests Total number of global reduction requests from Multiprocessor Multi-context
global_store_requests Total number of global store requests from Multiprocessor. This does not include atomic requests. Multi-context
gst_efficiency Ratio of requested global memory store throughput to required global memory store throughput expressed as percentage. Multi-context*
gst_requested_throughput Requested global memory store throughput Multi-context
gst_throughput Global memory store throughput Multi-context*
gst_transactions Number of global memory store transactions Multi-context*
gst_transactions_per_request Average number of global memory store transactions performed for each global memory store Multi-context*
half_precision_fu_utilization The utilization level of the multiprocessor function units that execute 16-bit floating-point instructions and integer instructions on a scale of 0 to 10. This is available for compute capability 5.3. Multi-context*
inst_bit_convert Number of bit-conversion instructions executed by non-predicated threads Multi-context
inst_compute_ld_st Number of compute load/store instructions executed by non-predicated threads Multi-context
inst_control Number of control-flow instructions executed by non-predicated threads (jump, branch, etc.) Multi-context
inst_executed The number of instructions executed Multi-context
inst_executed_global_atomics Warp level instructions for global atom and atom CAS Multi-context
inst_executed_global_loads Warp level instructions for global loads Multi-context
inst_executed_global_reductions Warp level instructions for global reductions Multi-context
inst_executed_global_stores Warp level instructions for global stores Multi-context
inst_executed_local_loads Warp level instructions for local loads Multi-context
inst_executed_local_stores Warp level instructions for local stores Multi-context
inst_executed_shared_atomics Warp level shared instructions for atom and atom CAS Multi-context
inst_executed_shared_loads Warp level instructions for shared loads Multi-context
inst_executed_shared_stores Warp level instructions for shared stores Multi-context
inst_executed_surface_atomics Warp level instructions for surface atom and atom CAS Multi-context
inst_executed_surface_loads Warp level instructions for surface loads Multi-context
inst_executed_surface_reductions Warp level instructions for surface reductions Multi-context
inst_executed_surface_stores Warp level instructions for surface stores Multi-context
inst_executed_tex_ops Warp level instructions for texture Multi-context
inst_fp_16 Number of half-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) This is available for compute capability 5.3. Multi-context*
inst_fp_32 Number of single-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_64 Number of double-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_integer Number of integer instructions executed by non-predicated threads Multi-context
inst_inter_thread_communication Number of inter-thread communication instructions executed by non-predicated threads Multi-context
inst_issued The number of instructions issued Multi-context
inst_misc Number of miscellaneous instructions executed by non-predicated threads Multi-context
inst_per_warp Average number of instructions executed by each warp Multi-context
inst_replay_overhead Average number of replays for each instruction executed Multi-context
ipc Instructions executed per cycle Multi-context
issue_slot_utilization Percentage of issue slots that issued at least one instruction, averaged across all cycles Multi-context
issue_slots The number of issue slots used Multi-context
issued_ipc Instructions issued per cycle Multi-context
l2_atomic_throughput Memory read throughput seen at L2 cache for atomic and reduction requests Multi-context
l2_atomic_transactions Memory read transactions seen at L2 cache for atomic and reduction requests Multi-context*
l2_global_atomic_store_bytes Bytes written to L2 from Unified cache for global atomics (ATOM and ATOM CAS) Multi-context*
l2_global_load_bytes Bytes read from L2 for misses in Unified Cache for global loads Multi-context*
l2_global_reduction_bytes Bytes written to L2 from Unified cache for global reductions Multi-context*
l2_local_global_store_bytes Bytes written to L2 from Unified Cache for local and global stores. This does not include global atomics. Multi-context*
l2_local_load_bytes Bytes read from L2 for misses in Unified Cache for local loads Multi-context*
l2_read_throughput Memory read throughput seen at L2 cache for all read requests Multi-context*
l2_read_transactions Memory read transactions seen at L2 cache for all read requests Multi-context*
l2_surface_atomic_store_bytes Bytes transferred between Unified Cache and L2 for surface atomics (ATOM and ATOM CAS) Multi-context*
l2_surface_load_bytes Bytes read from L2 for misses in Unified Cache for surface loads Multi-context*
l2_surface_reduction_bytes Bytes written to L2 from Unified Cache for surface reductions Multi-context*
l2_surface_store_bytes Bytes written to L2 from Unified Cache for surface stores. This does not include surface atomics. Multi-context*
l2_tex_hit_rate Hit rate at L2 cache for all requests from texture cache Multi-context*
l2_tex_read_hit_rate Hit rate at L2 cache for all read requests from texture cache. This is available for compute capability 5.0 and 5.2. Multi-context*
l2_tex_read_throughput Memory read throughput seen at L2 cache for read requests from the texture cache Multi-context*
l2_tex_read_transactions Memory read transactions seen at L2 cache for read requests from the texture cache Multi-context*
l2_tex_write_hit_rate Hit rate at L2 cache for all write requests from texture cache. This is available for compute capability 5.0 and 5.2. Multi-context*
l2_tex_write_throughput Memory write throughput seen at L2 cache for write requests from the texture cache Multi-context*
l2_tex_write_transactions Memory write transactions seen at L2 cache for write requests from the texture cache Multi-context*
l2_utilization The utilization level of the L2 cache relative to the peak utilization on a scale of 0 to 10 Multi-context*
l2_write_throughput Memory write throughput seen at L2 cache for all write requests Multi-context*
l2_write_transactions Memory write transactions seen at L2 cache for all write requests Multi-context*
ldst_executed Number of executed local, global, shared and texture memory load and store instructions Multi-context
ldst_fu_utilization The utilization level of the multiprocessor function units that execute shared load, shared store and constant load instructions on a scale of 0 to 10 Multi-context
ldst_issued Number of issued local, global, shared and texture memory load and store instructions Multi-context
local_hit_rate Hit rate for local loads and stores Multi-context*
local_load_requests Total number of local load requests from Multiprocessor Multi-context*
local_load_throughput Local memory load throughput Multi-context*
local_load_transactions Number of local memory load transactions Multi-context*
local_load_transactions_per_request Average number of local memory load transactions performed for each local memory load Multi-context*
local_memory_overhead Ratio of local memory traffic to total memory traffic between the L1 and L2 caches expressed as percentage Multi-context*
local_store_requests Total number of local store requests from Multiprocessor Multi-context*
local_store_throughput Local memory store throughput Multi-context*
local_store_transactions Number of local memory store transactions Multi-context*
local_store_transactions_per_request Average number of local memory store transactions performed for each local memory store Multi-context*
pcie_total_data_received Total data bytes received through PCIe Device
pcie_total_data_transmitted Total data bytes transmitted through PCIe Device
shared_efficiency Ratio of requested shared memory throughput to required shared memory throughput expressed as percentage Multi-context*
shared_load_throughput Shared memory load throughput Multi-context*
shared_load_transactions Number of shared memory load transactions Multi-context*
shared_load_transactions_per_request Average number of shared memory load transactions performed for each shared memory load Multi-context*
shared_store_throughput Shared memory store throughput Multi-context*
shared_store_transactions Number of shared memory store transactions Multi-context*
shared_store_transactions_per_request Average number of shared memory store transactions performed for each shared memory store Multi-context*
shared_utilization The utilization level of the shared memory relative to peak utilization on a scale of 0 to 10 Multi-context*
single_precision_fu_utilization The utilization level of the multiprocessor function units that execute single-precision floating-point instructions and integer instructions on a scale of 0 to 10 Multi-context
sm_efficiency The percentage of time at least one warp is active on a specific multiprocessor Multi-context*
special_fu_utilization The utilization level of the multiprocessor function units that execute sin, cos, ex2, popc, flo, and similar instructions on a scale of 0 to 10 Multi-context
stall_constant_memory_dependency Percentage of stalls occurring because of immediate constant cache miss Multi-context
stall_exec_dependency Percentage of stalls occurring because an input required by the instruction is not yet available Multi-context
stall_inst_fetch Percentage of stalls occurring because the next assembly instruction has not yet been fetched Multi-context
stall_memory_dependency Percentage of stalls occurring because a memory operation cannot be performed due to the required resources not being available or fully utilized, or because too many requests of a given type are outstanding Multi-context
stall_memory_throttle Percentage of stalls occurring because of memory throttle Multi-context
stall_not_selected Percentage of stalls occurring because warp was not selected Multi-context
stall_other Percentage of stalls occurring due to miscellaneous reasons Multi-context
stall_pipe_busy Percentage of stalls occurring because a compute operation cannot be performed because the compute pipeline is busy Multi-context
stall_sync Percentage of stalls occurring because the warp is blocked at a __syncthreads() call Multi-context
stall_texture Percentage of stalls occurring because the texture sub-system is fully utilized or has too many outstanding requests Multi-context
surface_atomic_requests Total number of surface atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
surface_load_requests Total number of surface load requests from Multiprocessor Multi-context
surface_reduction_requests Total number of surface reduction requests from Multiprocessor Multi-context
surface_store_requests Total number of surface store requests from Multiprocessor Multi-context
sysmem_read_bytes Number of bytes read from system memory Multi-context*
sysmem_read_throughput System memory read throughput Multi-context*
sysmem_read_transactions Number of system memory read transactions Multi-context*
sysmem_read_utilization The read utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 5.0 and 5.2. Multi-context
sysmem_utilization The utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 5.0 and 5.2. Multi-context*
sysmem_write_bytes Number of bytes written to system memory Multi-context*
sysmem_write_throughput System memory write throughput Multi-context*
sysmem_write_transactions Number of system memory write transactions Multi-context*
sysmem_write_utilization The write utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 5.0 and 5.2. Multi-context*
tex_cache_hit_rate Unified cache hit rate Multi-context*
tex_cache_throughput Unified cache throughput Multi-context*
tex_cache_transactions Unified cache read transactions Multi-context*
tex_fu_utilization The utilization level of the multiprocessor function units that execute global, local and texture memory instructions on a scale of 0 to 10 Multi-context
tex_utilization The utilization level of the unified cache relative to the peak utilization on a scale of 0 to 10 Multi-context*
texture_load_requests Total number of texture load requests from Multiprocessor Multi-context
warp_execution_efficiency Ratio of the average active threads per warp to the maximum number of threads per warp supported on a multiprocessor Multi-context
warp_nonpred_execution_efficiency Ratio of the average active threads per warp executing non-predicated instructions to the maximum number of threads per warp supported on a multiprocessor Multi-context

* The "Multi-context" scope for this metric is supported only for devices with compute capability 5.0 and 5.2.

1.6.1.3. Metrics for Capability 6.x

Devices with compute capability 6.x implement the metrics shown in the following table.

Table 3. Capability 6.x Metrics
Metric Name Description Scope
achieved_occupancy Ratio of the average active warps per active cycle to the maximum number of warps supported on a multiprocessor Multi-context
atomic_transactions Global memory atomic and reduction transactions Multi-context
atomic_transactions_per_request Average number of global memory atomic and reduction transactions performed for each atomic and reduction instruction Multi-context
branch_efficiency Ratio of non-divergent branches to total branches expressed as percentage Multi-context
cf_executed Number of executed control-flow instructions Multi-context
cf_fu_utilization The utilization level of the multiprocessor function units that execute control-flow instructions on a scale of 0 to 10 Multi-context
cf_issued Number of issued control-flow instructions Multi-context
double_precision_fu_utilization The utilization level of the multiprocessor function units that execute double-precision floating-point instructions on a scale of 0 to 10 Multi-context
dram_read_bytes Total bytes read from DRAM to L2 cache Multi-context
dram_read_throughput Device memory read throughput. This is available for compute capability 6.0 and 6.1. Multi-context
dram_read_transactions Device memory read transactions. This is available for compute capability 6.0 and 6.1. Multi-context
dram_utilization The utilization level of the device memory relative to the peak utilization on a scale of 0 to 10 Multi-context
dram_write_bytes Total bytes written from L2 cache to DRAM Multi-context
dram_write_throughput Device memory write throughput. This is available for compute capability 6.0 and 6.1. Multi-context
dram_write_transactions Device memory write transactions. This is available for compute capability 6.0 and 6.1. Multi-context
ecc_throughput ECC throughput from L2 to DRAM. This is available for compute capability 6.1. Multi-context
ecc_transactions Number of ECC transactions between L2 and DRAM. This is available for compute capability 6.1. Multi-context
eligible_warps_per_cycle Average number of warps that are eligible to issue per active cycle Multi-context
flop_count_dp Number of double-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. Multi-context
flop_count_dp_add Number of double-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_dp_fma Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_dp_mul Number of double-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_hp Number of half-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. Multi-context
flop_count_hp_add Number of half-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_hp_fma Number of half-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_hp_mul Number of half-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_sp Number of single-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. The count does not include special operations. Multi-context
flop_count_sp_add Number of single-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_sp_fma Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_sp_mul Number of single-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_sp_special Number of single-precision floating-point special operations executed by non-predicated threads. Multi-context
flop_dp_efficiency Ratio of achieved to peak double-precision floating-point operations Multi-context
flop_hp_efficiency Ratio of achieved to peak half-precision floating-point operations Multi-context
flop_sp_efficiency Ratio of achieved to peak single-precision floating-point operations Multi-context
gld_efficiency Ratio of requested global memory load throughput to required global memory load throughput expressed as percentage. Multi-context
gld_requested_throughput Requested global memory load throughput Multi-context
gld_throughput Global memory load throughput Multi-context
gld_transactions Number of global memory load transactions Multi-context
gld_transactions_per_request Average number of global memory load transactions performed for each global memory load. Multi-context
global_atomic_requests Total number of global atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
global_hit_rate Hit rate for global loads in unified L1/TEX cache. Metric value may be wrong if malloc is used in the kernel. Multi-context
global_load_requests Total number of global load requests from Multiprocessor Multi-context
global_reduction_requests Total number of global reduction requests from Multiprocessor Multi-context
global_store_requests Total number of global store requests from Multiprocessor. This does not include atomic requests. Multi-context
gst_efficiency Ratio of requested global memory store throughput to required global memory store throughput expressed as percentage. Multi-context
gst_requested_throughput Requested global memory store throughput Multi-context
gst_throughput Global memory store throughput Multi-context
gst_transactions Number of global memory store transactions Multi-context
gst_transactions_per_request Average number of global memory store transactions performed for each global memory store Multi-context
half_precision_fu_utilization The utilization level of the multiprocessor function units that execute 16-bit floating-point instructions on a scale of 0 to 10 Multi-context
inst_bit_convert Number of bit-conversion instructions executed by non-predicated threads Multi-context
inst_compute_ld_st Number of compute load/store instructions executed by non-predicated threads Multi-context
inst_control Number of control-flow instructions executed by non-predicated threads (jump, branch, etc.) Multi-context
inst_executed The number of instructions executed Multi-context
inst_executed_global_atomics Warp level instructions for global atom and atom CAS Multi-context
inst_executed_global_loads Warp level instructions for global loads Multi-context
inst_executed_global_reductions Warp level instructions for global reductions Multi-context
inst_executed_global_stores Warp level instructions for global stores Multi-context
inst_executed_local_loads Warp level instructions for local loads Multi-context
inst_executed_local_stores Warp level instructions for local stores Multi-context
inst_executed_shared_atomics Warp level shared instructions for atom and atom CAS Multi-context
inst_executed_shared_loads Warp level instructions for shared loads Multi-context
inst_executed_shared_stores Warp level instructions for shared stores Multi-context
inst_executed_surface_atomics Warp level instructions for surface atom and atom CAS Multi-context
inst_executed_surface_loads Warp level instructions for surface loads Multi-context
inst_executed_surface_reductions Warp level instructions for surface reductions Multi-context
inst_executed_surface_stores Warp level instructions for surface stores Multi-context
inst_executed_tex_ops Warp level instructions for texture Multi-context
inst_fp_16 Number of half-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_32 Number of single-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_64 Number of double-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_integer Number of integer instructions executed by non-predicated threads Multi-context
inst_inter_thread_communication Number of inter-thread communication instructions executed by non-predicated threads Multi-context
inst_issued The number of instructions issued Multi-context
inst_misc Number of miscellaneous instructions executed by non-predicated threads Multi-context
inst_per_warp Average number of instructions executed by each warp Multi-context
inst_replay_overhead Average number of replays for each instruction executed Multi-context
ipc Instructions executed per cycle Multi-context
issue_slot_utilization Percentage of issue slots that issued at least one instruction, averaged across all cycles Multi-context
issue_slots The number of issue slots used Multi-context
issued_ipc Instructions issued per cycle Multi-context
l2_atomic_throughput Memory read throughput seen at L2 cache for atomic and reduction requests Multi-context
l2_atomic_transactions Memory read transactions seen at L2 cache for atomic and reduction requests Multi-context
l2_global_atomic_store_bytes Bytes written to L2 from Unified cache for global atomics (ATOM and ATOM CAS) Multi-context
l2_global_load_bytes Bytes read from L2 for misses in Unified Cache for global loads Multi-context
l2_global_reduction_bytes Bytes written to L2 from Unified cache for global reductions Multi-context
l2_local_global_store_bytes Bytes written to L2 from Unified Cache for local and global stores. This does not include global atomics. Multi-context
l2_local_load_bytes Bytes read from L2 for misses in Unified Cache for local loads Multi-context
l2_read_throughput Memory read throughput seen at L2 cache for all read requests Multi-context
l2_read_transactions Memory read transactions seen at L2 cache for all read requests Multi-context
l2_surface_atomic_store_bytes Bytes transferred between Unified Cache and L2 for surface atomics (ATOM and ATOM CAS) Multi-context
l2_surface_load_bytes Bytes read from L2 for misses in Unified Cache for surface loads Multi-context
l2_surface_reduction_bytes Bytes written to L2 from Unified Cache for surface reductions Multi-context
l2_surface_store_bytes Bytes written to L2 from Unified Cache for surface stores. This does not include surface atomics. Multi-context
l2_tex_hit_rate Hit rate at L2 cache for all requests from texture cache Multi-context
l2_tex_read_hit_rate Hit rate at L2 cache for all read requests from texture cache. This is available for compute capability 6.0 and 6.1. Multi-context
l2_tex_read_throughput Memory read throughput seen at L2 cache for read requests from the texture cache Multi-context
l2_tex_read_transactions Memory read transactions seen at L2 cache for read requests from the texture cache Multi-context
l2_tex_write_hit_rate Hit rate at L2 cache for all write requests from texture cache. This is available for compute capability 6.0 and 6.1. Multi-context
l2_tex_write_throughput Memory write throughput seen at L2 cache for write requests from the texture cache Multi-context
l2_tex_write_transactions Memory write transactions seen at L2 cache for write requests from the texture cache Multi-context
l2_utilization The utilization level of the L2 cache relative to the peak utilization on a scale of 0 to 10 Multi-context
l2_write_throughput Memory write throughput seen at L2 cache for all write requests Multi-context
l2_write_transactions Memory write transactions seen at L2 cache for all write requests Multi-context
ldst_executed Number of executed local, global, shared and texture memory load and store instructions Multi-context
ldst_fu_utilization The utilization level of the multiprocessor function units that execute shared load, shared store and constant load instructions on a scale of 0 to 10 Multi-context
ldst_issued Number of issued local, global, shared and texture memory load and store instructions Multi-context
local_hit_rate Hit rate for local loads and stores Multi-context
local_load_requests Total number of local load requests from Multiprocessor Multi-context
local_load_throughput Local memory load throughput Multi-context
local_load_transactions Number of local memory load transactions Multi-context
local_load_transactions_per_request Average number of local memory load transactions performed for each local memory load Multi-context
local_memory_overhead Ratio of local memory traffic to total memory traffic between the L1 and L2 caches expressed as percentage Multi-context
local_store_requests Total number of local store requests from Multiprocessor Multi-context
local_store_throughput Local memory store throughput Multi-context
local_store_transactions Number of local memory store transactions Multi-context
local_store_transactions_per_request Average number of local memory store transactions performed for each local memory store Multi-context
nvlink_overhead_data_received Ratio of overhead data to the total data received through NVLink. This is available for compute capability 6.0. Device
nvlink_overhead_data_transmitted Ratio of overhead data to the total data transmitted through NVLink. This is available for compute capability 6.0. Device
nvlink_receive_throughput Number of bytes received per second through NVLinks. This is available for compute capability 6.0. Device
nvlink_total_data_received Total data bytes received through NVLinks including headers. This is available for compute capability 6.0. Device
nvlink_total_data_transmitted Total data bytes transmitted through NVLinks including headers. This is available for compute capability 6.0. Device
nvlink_total_nratom_data_transmitted Total non-reduction atomic data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
nvlink_total_ratom_data_transmitted Total reduction atomic data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
nvlink_total_response_data_received Total response data bytes received through NVLink, response data includes data for read requests and result of non-reduction atomic requests. This is available for compute capability 6.0. Device
nvlink_total_write_data_transmitted Total write data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
nvlink_transmit_throughput Number of bytes transmitted per second through NVLinks. This is available for compute capability 6.0. Device
nvlink_user_data_received User data bytes received through NVLinks; headers are not included. This is available for compute capability 6.0. Device
nvlink_user_data_transmitted User data bytes transmitted through NVLinks; headers are not included. This is available for compute capability 6.0. Device
nvlink_user_nratom_data_transmitted Total non-reduction atomic user data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
nvlink_user_ratom_data_transmitted Total reduction atomic user data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
nvlink_user_response_data_received Total user response data bytes received through NVLink, response data includes data for read requests and result of non-reduction atomic requests. This is available for compute capability 6.0. Device
nvlink_user_write_data_transmitted User write data bytes transmitted through NVLinks. This is available for compute capability 6.0. Device
pcie_total_data_received Total data bytes received through PCIe Device
pcie_total_data_transmitted Total data bytes transmitted through PCIe Device
shared_efficiency Ratio of requested shared memory throughput to required shared memory throughput expressed as percentage Multi-context
shared_load_throughput Shared memory load throughput Multi-context
shared_load_transactions Number of shared memory load transactions Multi-context
shared_load_transactions_per_request Average number of shared memory load transactions performed for each shared memory load Multi-context
shared_store_throughput Shared memory store throughput Multi-context
shared_store_transactions Number of shared memory store transactions Multi-context
shared_store_transactions_per_request Average number of shared memory store transactions performed for each shared memory store Multi-context
shared_utilization The utilization level of the shared memory relative to peak utilization on a scale of 0 to 10 Multi-context
single_precision_fu_utilization The utilization level of the multiprocessor function units that execute single-precision floating-point instructions and integer instructions on a scale of 0 to 10 Multi-context
sm_efficiency The percentage of time at least one warp is active on a specific multiprocessor Multi-context
special_fu_utilization The utilization level of the multiprocessor function units that execute sin, cos, ex2, popc, flo, and similar instructions on a scale of 0 to 10 Multi-context
stall_constant_memory_dependency Percentage of stalls occurring because of immediate constant cache miss Multi-context
stall_exec_dependency Percentage of stalls occurring because an input required by the instruction is not yet available Multi-context
stall_inst_fetch Percentage of stalls occurring because the next assembly instruction has not yet been fetched Multi-context
stall_memory_dependency Percentage of stalls occurring because a memory operation cannot be performed due to the required resources not being available or fully utilized, or because too many requests of a given type are outstanding Multi-context
stall_memory_throttle Percentage of stalls occurring because of memory throttle Multi-context
stall_not_selected Percentage of stalls occurring because warp was not selected Multi-context
stall_other Percentage of stalls occurring due to miscellaneous reasons Multi-context
stall_pipe_busy Percentage of stalls occurring because a compute operation cannot be performed because the compute pipeline is busy Multi-context
stall_sync Percentage of stalls occurring because the warp is blocked at a __syncthreads() call Multi-context
stall_texture Percentage of stalls occurring because the texture sub-system is fully utilized or has too many outstanding requests Multi-context
surface_atomic_requests Total number of surface atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
surface_load_requests Total number of surface load requests from Multiprocessor Multi-context
surface_reduction_requests Total number of surface reduction requests from Multiprocessor Multi-context
surface_store_requests Total number of surface store requests from Multiprocessor Multi-context
sysmem_read_bytes Number of bytes read from system memory Multi-context
sysmem_read_throughput System memory read throughput Multi-context
sysmem_read_transactions Number of system memory read transactions Multi-context
sysmem_read_utilization The read utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 6.0 and 6.1. Multi-context
sysmem_utilization The utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 6.0 and 6.1. Multi-context
sysmem_write_bytes Number of bytes written to system memory Multi-context
sysmem_write_throughput System memory write throughput Multi-context
sysmem_write_transactions Number of system memory write transactions Multi-context
sysmem_write_utilization The write utilization level of the system memory relative to the peak utilization on a scale of 0 to 10. This is available for compute capability 6.0 and 6.1. Multi-context
tex_cache_hit_rate Unified cache hit rate Multi-context
tex_cache_throughput Unified cache throughput Multi-context
tex_cache_transactions Unified cache read transactions Multi-context
tex_fu_utilization The utilization level of the multiprocessor function units that execute global, local and texture memory instructions on a scale of 0 to 10 Multi-context
tex_utilization The utilization level of the unified cache relative to the peak utilization on a scale of 0 to 10 Multi-context
texture_load_requests Total number of texture load requests from Multiprocessor Multi-context
unique_warps_launched Number of warps launched. Value is unaffected by compute preemption. Multi-context
warp_execution_efficiency Ratio of the average active threads per warp to the maximum number of threads per warp supported on a multiprocessor Multi-context
warp_nonpred_execution_efficiency Ratio of the average active threads per warp executing non-predicated instructions to the maximum number of threads per warp supported on a multiprocessor Multi-context
1.6.1.4. Metrics for Capability 7.x

Devices with compute capability 7.x (here, 7.0 and 7.2) implement the metrics shown in the following table.

Table 4. Capability 7.x (7.0 and 7.2) Metrics
Metric Name Description Scope
achieved_occupancy Ratio of the average active warps per active cycle to the maximum number of warps supported on a multiprocessor Multi-context
atomic_transactions Global memory atomic and reduction transactions Multi-context
atomic_transactions_per_request Average number of global memory atomic and reduction transactions performed for each atomic and reduction instruction Multi-context
branch_efficiency Ratio of branch instructions to the sum of branch and divergent branch instructions Multi-context
cf_executed Number of executed control-flow instructions Multi-context
cf_fu_utilization The utilization level of the multiprocessor function units that execute control-flow instructions on a scale of 0 to 10 Multi-context
cf_issued Number of issued control-flow instructions Multi-context
double_precision_fu_utilization The utilization level of the multiprocessor function units that execute double-precision floating-point instructions on a scale of 0 to 10 Multi-context
dram_read_bytes Total bytes read from DRAM to L2 cache Multi-context
dram_read_throughput Device memory read throughput Multi-context
dram_read_transactions Device memory read transactions Multi-context
dram_utilization The utilization level of the device memory relative to the peak utilization on a scale of 0 to 10 Multi-context
dram_write_bytes Total bytes written from L2 cache to DRAM Multi-context
dram_write_throughput Device memory write throughput Multi-context
dram_write_transactions Device memory write transactions Multi-context
eligible_warps_per_cycle Average number of warps that are eligible to issue per active cycle Multi-context
flop_count_dp Number of double-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. Multi-context
flop_count_dp_add Number of double-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_dp_fma Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_dp_mul Number of double-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_hp Number of half-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate contributes 2 or 4 to the count based on the number of inputs. Multi-context
flop_count_hp_add Number of half-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_hp_fma Number of half-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate contributes 2 or 4 to the count based on the number of inputs. Multi-context
flop_count_hp_mul Number of half-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_sp Number of single-precision floating-point operations executed by non-predicated threads (add, multiply, and multiply-accumulate). Each multiply-accumulate operation contributes 2 to the count. The count does not include special operations. Multi-context
flop_count_sp_add Number of single-precision floating-point add operations executed by non-predicated threads. Multi-context
flop_count_sp_fma Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads. Each multiply-accumulate operation contributes 1 to the count. Multi-context
flop_count_sp_mul Number of single-precision floating-point multiply operations executed by non-predicated threads. Multi-context
flop_count_sp_special Number of single-precision floating-point special operations executed by non-predicated threads. Multi-context
flop_dp_efficiency Ratio of achieved to peak double-precision floating-point operations Multi-context
flop_hp_efficiency Ratio of achieved to peak half-precision floating-point operations Multi-context
flop_sp_efficiency Ratio of achieved to peak single-precision floating-point operations Multi-context
gld_efficiency Ratio of requested global memory load throughput to required global memory load throughput expressed as percentage. Multi-context
gld_requested_throughput Requested global memory load throughput Multi-context
gld_throughput Global memory load throughput Multi-context
gld_transactions Number of global memory load transactions Multi-context
gld_transactions_per_request Average number of global memory load transactions performed for each global memory load. Multi-context
global_atomic_requests Total number of global atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
global_hit_rate Hit rate for global loads and stores in unified L1/TEX cache Multi-context
global_load_requests Total number of global load requests from Multiprocessor Multi-context
global_reduction_requests Total number of global reduction requests from Multiprocessor Multi-context
global_store_requests Total number of global store requests from Multiprocessor. This does not include atomic requests. Multi-context
gst_efficiency Ratio of requested global memory store throughput to required global memory store throughput expressed as percentage. Multi-context
gst_requested_throughput Requested global memory store throughput Multi-context
gst_throughput Global memory store throughput Multi-context
gst_transactions Number of global memory store transactions Multi-context
gst_transactions_per_request Average number of global memory store transactions performed for each global memory store Multi-context
half_precision_fu_utilization The utilization level of the multiprocessor function units that execute 16-bit floating-point instructions on a scale of 0 to 10. Note that this does not include the utilization of the tensor core unit Multi-context
inst_bit_convert Number of bit-conversion instructions executed by non-predicated threads Multi-context
inst_compute_ld_st Number of compute load/store instructions executed by non-predicated threads Multi-context
inst_control Number of control-flow instructions executed by non-predicated threads (jump, branch, etc.) Multi-context
inst_executed The number of instructions executed Multi-context
inst_executed_global_atomics Warp level instructions for global atom and atom CAS Multi-context
inst_executed_global_loads Warp level instructions for global loads Multi-context
inst_executed_global_reductions Warp level instructions for global reductions Multi-context
inst_executed_global_stores Warp level instructions for global stores Multi-context
inst_executed_local_loads Warp level instructions for local loads Multi-context
inst_executed_local_stores Warp level instructions for local stores Multi-context
inst_executed_shared_atomics Warp level shared instructions for atom and atom CAS Multi-context
inst_executed_shared_loads Warp level instructions for shared loads Multi-context
inst_executed_shared_stores Warp level instructions for shared stores Multi-context
inst_executed_surface_atomics Warp level instructions for surface atom and atom CAS Multi-context
inst_executed_surface_loads Warp level instructions for surface loads Multi-context
inst_executed_surface_reductions Warp level instructions for surface reductions Multi-context
inst_executed_surface_stores Warp level instructions for surface stores Multi-context
inst_executed_tex_ops Warp level instructions for texture Multi-context
inst_fp_16 Number of half-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_32 Number of single-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_fp_64 Number of double-precision floating-point instructions executed by non-predicated threads (arithmetic, compare, etc.) Multi-context
inst_integer Number of integer instructions executed by non-predicated threads Multi-context
inst_inter_thread_communication Number of inter-thread communication instructions executed by non-predicated threads Multi-context
inst_issued The number of instructions issued Multi-context
inst_misc Number of miscellaneous instructions executed by non-predicated threads Multi-context
inst_per_warp Average number of instructions executed by each warp Multi-context
inst_replay_overhead Average number of replays for each instruction executed Multi-context
ipc Instructions executed per cycle Multi-context
issue_slot_utilization Percentage of issue slots that issued at least one instruction, averaged across all cycles Multi-context
issue_slots The number of issue slots used Multi-context
issued_ipc Instructions issued per cycle Multi-context
l2_atomic_throughput Memory read throughput seen at L2 cache for atomic and reduction requests Multi-context
l2_atomic_transactions Memory read transactions seen at L2 cache for atomic and reduction requests Multi-context
l2_global_atomic_store_bytes Bytes written to L2 from L1 for global atomics (ATOM and ATOM CAS) Multi-context
l2_global_load_bytes Bytes read from L2 for misses in L1 for global loads Multi-context
l2_local_global_store_bytes Bytes written to L2 from L1 for local and global stores. This does not include global atomics. Multi-context
l2_local_load_bytes Bytes read from L2 for misses in L1 for local loads Multi-context
l2_read_throughput Memory read throughput seen at L2 cache for all read requests Multi-context
l2_read_transactions Memory read transactions seen at L2 cache for all read requests Multi-context
l2_surface_load_bytes Bytes read from L2 for misses in L1 for surface loads Multi-context
l2_surface_store_bytes Bytes written to L2 from L1 for surface stores. This does not include surface atomics. Multi-context
l2_tex_hit_rate Hit rate at L2 cache for all requests from texture cache Multi-context
l2_tex_read_hit_rate Hit rate at L2 cache for all read requests from texture cache Multi-context
l2_tex_read_throughput Memory read throughput seen at L2 cache for read requests from the texture cache Multi-context
l2_tex_read_transactions Memory read transactions seen at L2 cache for read requests from the texture cache Multi-context
l2_tex_write_hit_rate Hit rate at L2 cache for all write requests from texture cache Multi-context
l2_tex_write_throughput Memory write throughput seen at L2 cache for write requests from the texture cache Multi-context
l2_tex_write_transactions Memory write transactions seen at L2 cache for write requests from the texture cache Multi-context
l2_utilization The utilization level of the L2 cache relative to the peak utilization on a scale of 0 to 10 Multi-context
l2_write_throughput Memory write throughput seen at L2 cache for all write requests Multi-context
l2_write_transactions Memory write transactions seen at L2 cache for all write requests Multi-context
ldst_executed Number of executed local, global, shared and texture memory load and store instructions Multi-context
ldst_fu_utilization The utilization level of the multiprocessor function units that execute shared load, shared store and constant load instructions on a scale of 0 to 10 Multi-context
ldst_issued Number of issued local, global, shared and texture memory load and store instructions Multi-context
local_hit_rate Hit rate for local loads and stores Multi-context
local_load_requests Total number of local load requests from Multiprocessor Multi-context
local_load_throughput Local memory load throughput Multi-context
local_load_transactions Number of local memory load transactions Multi-context
local_load_transactions_per_request Average number of local memory load transactions performed for each local memory load Multi-context
local_memory_overhead Ratio of local memory traffic to total memory traffic between the L1 and L2 caches expressed as percentage Multi-context
local_store_requests Total number of local store requests from Multiprocessor Multi-context
local_store_throughput Local memory store throughput Multi-context
local_store_transactions Number of local memory store transactions Multi-context
local_store_transactions_per_request Average number of local memory store transactions performed for each local memory store Multi-context
nvlink_overhead_data_received Ratio of overhead data to the total data received through NVLink. Device
nvlink_overhead_data_transmitted Ratio of overhead data to the total data transmitted through NVLink. Device
nvlink_receive_throughput Number of bytes received per second through NVLinks. Device
nvlink_total_data_received Total data bytes received through NVLinks including headers. Device
nvlink_total_data_transmitted Total data bytes transmitted through NVLinks including headers. Device
nvlink_total_nratom_data_transmitted Total non-reduction atomic data bytes transmitted through NVLinks. Device
nvlink_total_ratom_data_transmitted Total reduction atomic data bytes transmitted through NVLinks. Device
nvlink_total_response_data_received Total response data bytes received through NVLink, response data includes data for read requests and result of non-reduction atomic requests. Device
nvlink_total_write_data_transmitted Total write data bytes transmitted through NVLinks. Device
nvlink_transmit_throughput Number of bytes transmitted per second through NVLinks. Device
nvlink_user_data_received User data bytes received through NVLinks; headers are not included. Device
nvlink_user_data_transmitted User data bytes transmitted through NVLinks; headers are not included. Device
nvlink_user_nratom_data_transmitted Total non-reduction atomic user data bytes transmitted through NVLinks. Device
nvlink_user_ratom_data_transmitted Total reduction atomic user data bytes transmitted through NVLinks. Device
nvlink_user_response_data_received Total user response data bytes received through NVLink, response data includes data for read requests and result of non-reduction atomic requests. Device
nvlink_user_write_data_transmitted User write data bytes transmitted through NVLinks. Device
pcie_total_data_received Total data bytes received through PCIe Device
pcie_total_data_transmitted Total data bytes transmitted through PCIe Device
shared_efficiency Ratio of requested shared memory throughput to required shared memory throughput expressed as percentage Multi-context
shared_load_throughput Shared memory load throughput Multi-context
shared_load_transactions Number of shared memory load transactions Multi-context
shared_load_transactions_per_request Average number of shared memory load transactions performed for each shared memory load Multi-context
shared_store_throughput Shared memory store throughput Multi-context
shared_store_transactions Number of shared memory store transactions Multi-context
shared_store_transactions_per_request Average number of shared memory store transactions performed for each shared memory store Multi-context
shared_utilization The utilization level of the shared memory relative to peak utilization on a scale of 0 to 10 Multi-context
single_precision_fu_utilization The utilization level of the multiprocessor function units that execute single-precision floating-point instructions on a scale of 0 to 10 Multi-context
sm_efficiency The percentage of time at least one warp is active on a specific multiprocessor Multi-context
special_fu_utilization The utilization level of the multiprocessor function units that execute sin, cos, ex2, popc, flo, and similar instructions on a scale of 0 to 10 Multi-context
stall_constant_memory_dependency Percentage of stalls occurring because of immediate constant cache miss Multi-context
stall_exec_dependency Percentage of stalls occurring because an input required by the instruction is not yet available Multi-context
stall_inst_fetch Percentage of stalls occurring because the next assembly instruction has not yet been fetched Multi-context
stall_memory_dependency Percentage of stalls occurring because a memory operation cannot be performed due to the required resources not being available or fully utilized, or because too many requests of a given type are outstanding Multi-context
stall_memory_throttle Percentage of stalls occurring because of memory throttle Multi-context
stall_not_selected Percentage of stalls occurring because warp was not selected Multi-context
stall_other Percentage of stalls occurring due to miscellaneous reasons Multi-context
stall_pipe_busy Percentage of stalls occurring because a compute operation cannot be performed because the compute pipeline is busy Multi-context
stall_sleeping Percentage of stalls occurring because warp was sleeping Multi-context
stall_sync Percentage of stalls occurring because the warp is blocked at a __syncthreads() call Multi-context
stall_texture Percentage of stalls occurring because the texture sub-system is fully utilized or has too many outstanding requests Multi-context
surface_atomic_requests Total number of surface atomic (Atom and Atom CAS) requests from Multiprocessor Multi-context
surface_load_requests Total number of surface load requests from Multiprocessor Multi-context
surface_reduction_requests Total number of surface reduction requests from Multiprocessor Multi-context
surface_store_requests Total number of surface store requests from Multiprocessor Multi-context
sysmem_read_bytes Number of bytes read from system memory Multi-context
sysmem_read_throughput System memory read throughput Multi-context
sysmem_read_transactions Number of system memory read transactions Multi-context
sysmem_read_utilization The read utilization level of the system memory relative to the peak utilization on a scale of 0 to 10 Multi-context
sysmem_utilization The utilization level of the system memory relative to the peak utilization on a scale of 0 to 10 Multi-context
sysmem_write_bytes Number of bytes written to system memory Multi-context
sysmem_write_throughput System memory write throughput Multi-context
sysmem_write_transactions Number of system memory write transactions Multi-context
sysmem_write_utilization The write utilization level of the system memory relative to the peak utilization on a scale of 0 to 10 Multi-context
tensor_precision_fu_utilization The utilization level of the multiprocessor function units that execute tensor core instructions on a scale of 0 to 10 Multi-context
tensor_int_fu_utilization The utilization level of the multiprocessor function units that execute tensor core int8 instructions on a scale of 0 to 10. This metric is only available for devices with compute capability 7.2. Multi-context
tex_cache_hit_rate Unified cache hit rate Multi-context
tex_cache_throughput Unified cache to Multiprocessor read throughput Multi-context
tex_cache_transactions Unified cache to Multiprocessor read transactions Multi-context
tex_fu_utilization The utilization level of the multiprocessor function units that execute global, local and texture memory instructions on a scale of 0 to 10 Multi-context
tex_utilization The utilization level of the unified cache relative to the peak utilization on a scale of 0 to 10 Multi-context
texture_load_requests Total number of texture load requests from Multiprocessor Multi-context
warp_execution_efficiency Ratio of the average active threads per warp to the maximum number of threads per warp supported on a multiprocessor Multi-context
warp_nonpred_execution_efficiency Ratio of the average active threads per warp executing non-predicated instructions to the maximum number of threads per warp supported on a multiprocessor Multi-context
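To see exactly which of the metrics above a particular device exposes, the Metric API can enumerate them directly. Below is a minimal sketch using device 0, with error checking omitted:

    #include <cuda.h>
    #include <cupti.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        CUdevice device;
        cuInit(0);
        cuDeviceGet(&device, 0);

        uint32_t numMetrics = 0;
        cuptiDeviceGetNumMetrics(device, &numMetrics);

        size_t arraySizeBytes = numMetrics * sizeof(CUpti_MetricID);
        CUpti_MetricID *metrics = (CUpti_MetricID *)malloc(arraySizeBytes);
        cuptiDeviceEnumMetrics(device, &arraySizeBytes, metrics);

        // Print the name of every metric exposed on this device.
        for (uint32_t i = 0; i < numMetrics; i++) {
            char name[128];
            size_t bytes = sizeof(name);
            cuptiMetricGetAttribute(metrics[i], CUPTI_METRIC_ATTR_NAME, &bytes, name);
            printf("%s\n", name);
        }
        free(metrics);
        return 0;
    }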

1.7. Samples

The CUPTI installation includes several samples that demonstrate the use of the CUPTI APIs. The samples are:

activity_trace_async
This sample shows how to collect a trace of CPU and GPU activity using the new asynchronous activity buffer APIs; a minimal sketch of the same pattern appears after this list.
callback_event
This sample shows how to use both the callback and event APIs to record the events that occur during the execution of a simple kernel. The sample shows the required ordering for synchronization, and for event group enabling, disabling and reading.
callback_metric
This sample shows how to use both the callback and metric APIs to record the events needed for a metric during the execution of a simple kernel, and then use those event values to calculate the metric value.
callback_timestamp
This sample shows how to use the callback API to record a trace of API start and stop times.
cupti_query
This sample shows how to query CUDA-enabled devices for their event domains, events, and metrics.
event_sampling
This sample shows how to use the event APIs to sample events using a separate host thread.
event_multi_gpu
This sample shows how to use the CUPTI event and CUDA APIs to sample events on a setup with multiple GPUs. The sample shows the required ordering for synchronization, and for event group enabling, disabling and reading.
sass_source_map
This sample shows how to generate CUpti_ActivityInstructionExecution records and how to map SASS assembly instructions to CUDA C source.
unified_memory
This sample shows how to collect information about page transfers for unified memory.
pc_sampling
This sample shows how to collect PC Sampling profiling information for a kernel.
nvlink_bandwidth
This sample shows how to collect NVLink topology and NVLink throughput metrics in continuous mode.
openacc_trace
This sample shows how to use CUPTI APIs for OpenACC data collection.

2. Modules

2.1. CUPTI Version

Function and macro to determine the CUPTI version.

Defines

#define CUPTI_API_VERSION 12
The API version for this implementation of CUPTI.

Functions

CUptiResult cuptiGetVersion ( uint32_t* version )
Get the CUPTI API version.

Defines

#define CUPTI_API_VERSION 12

The API version for this implementation of CUPTI. This define along with cuptiGetVersion can be used to dynamically detect if the version of CUPTI compiled against matches the version of the loaded CUPTI library.

v1 : CUDA Tools SDK 4.0
v2 : CUDA Tools SDK 4.1
v3 : CUDA Toolkit 5.0
v4 : CUDA Toolkit 5.5
v5 : CUDA Toolkit 6.0
v6 : CUDA Toolkit 6.5
v7 : CUDA Toolkit 6.5 (with sm_52 support)
v8 : CUDA Toolkit 7.0
v9 : CUDA Toolkit 8.0
v10 : CUDA Toolkit 9.0
v11 : CUDA Toolkit 9.1
v12 : CUDA Toolkit 10.0
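For example, a minimal sketch (not one of the toolkit samples) that verifies at startup that the loaded CUPTI library matches the version the tool was compiled against:

    #include <stdio.h>
    #include <stdint.h>
    #include <cupti.h>

    /* Sketch: compare the compile-time CUPTI_API_VERSION against the
     * version reported by the loaded CUPTI library. */
    int checkCuptiVersion(void)
    {
        uint32_t version = 0;
        if (cuptiGetVersion(&version) != CUPTI_SUCCESS) {
            fprintf(stderr, "cuptiGetVersion failed\n");
            return -1;
        }
        if (version != CUPTI_API_VERSION) {
            fprintf(stderr, "CUPTI version mismatch: compiled %d, loaded %u\n",
                    CUPTI_API_VERSION, (unsigned)version);
            return -1;
        }
        return 0;
    }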

Functions

CUptiResult cuptiGetVersion ( uint32_t* version )
Get the CUPTI API version.
Parameters
version
Returns the version
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_INVALID_PARAMETER

    if version is NULL

Description

Return the API version in *version.

See also:

CUPTI_API_VERSION

2.2. CUPTI Result Codes

Error and result codes returned by CUPTI functions.

Enumerations

enum CUptiResult
CUPTI result codes.

Functions

CUptiResult cuptiGetResultString ( CUptiResult result, const char** str )
Get the descriptive string for a CUptiResult.

Enumerations

enum CUptiResult

Error and result codes returned by CUPTI functions.

Values
CUPTI_SUCCESS = 0
No error.
CUPTI_ERROR_INVALID_PARAMETER = 1
One or more of the parameters is invalid.
CUPTI_ERROR_INVALID_DEVICE = 2
The device does not correspond to a valid CUDA device.
CUPTI_ERROR_INVALID_CONTEXT = 3
The context is NULL or not valid.
CUPTI_ERROR_INVALID_EVENT_DOMAIN_ID = 4
The event domain id is invalid.
CUPTI_ERROR_INVALID_EVENT_ID = 5
The event id is invalid.
CUPTI_ERROR_INVALID_EVENT_NAME = 6
The event name is invalid.
CUPTI_ERROR_INVALID_OPERATION = 7
The current operation cannot be performed due to dependency on other factors.
CUPTI_ERROR_OUT_OF_MEMORY = 8
Unable to allocate enough memory to perform the requested operation.
CUPTI_ERROR_HARDWARE = 9
An error occurred on the performance monitoring hardware.
CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT = 10
The output buffer size is not sufficient to return all requested data.
CUPTI_ERROR_API_NOT_IMPLEMENTED = 11
API is not implemented.
CUPTI_ERROR_MAX_LIMIT_REACHED = 12
The maximum limit is reached.
CUPTI_ERROR_NOT_READY = 13
The object is not yet ready to perform the requested operation.
CUPTI_ERROR_NOT_COMPATIBLE = 14
The current operation is not compatible with the current state of the object.
CUPTI_ERROR_NOT_INITIALIZED = 15
CUPTI is unable to initialize its connection to the CUDA driver.
CUPTI_ERROR_INVALID_METRIC_ID = 16
The metric id is invalid.
CUPTI_ERROR_INVALID_METRIC_NAME = 17
The metric name is invalid.
CUPTI_ERROR_QUEUE_EMPTY = 18
The queue is empty.
CUPTI_ERROR_INVALID_HANDLE = 19
Invalid handle.
CUPTI_ERROR_INVALID_STREAM = 20
Invalid stream.
CUPTI_ERROR_INVALID_KIND = 21
Invalid kind.
CUPTI_ERROR_INVALID_EVENT_VALUE = 22
Invalid event value.
CUPTI_ERROR_DISABLED = 23
CUPTI is disabled due to conflicts with other enabled profilers.
CUPTI_ERROR_INVALID_MODULE = 24
Invalid module.
CUPTI_ERROR_INVALID_METRIC_VALUE = 25
Invalid metric value.
CUPTI_ERROR_HARDWARE_BUSY = 26
The performance monitoring hardware is in use by another client.
CUPTI_ERROR_NOT_SUPPORTED = 27
The attempted operation is not supported on the current system or device.
CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED = 28
Unified memory profiling is not supported on the system. A potential reason is an unsupported OS or architecture.
CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_DEVICE = 29
Unified memory profiling is not supported on the device.
CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_NON_P2P_DEVICES = 30
Unified memory profiling is not supported on a multi-GPU configuration without P2P support between any pair of devices.
CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_WITH_MPS = 31
Unified memory profiling is not supported under the Multi-Process Service (MPS) environment. CUDA 7.5 removes this restriction.
CUPTI_ERROR_CDP_TRACING_NOT_SUPPORTED = 32
In CUDA 9.0, devices with compute capability 7.0 don't support CDP tracing.
CUPTI_ERROR_VIRTUALIZED_DEVICE_NOT_SUPPORTED = 33
Profiling on virtualized GPU is not supported.
CUPTI_ERROR_CUDA_COMPILER_NOT_COMPATIBLE = 34
Profiling results might be incorrect for CUDA applications compiled with an nvcc version older than 9.0 for devices with compute capability 6.0 and 6.1. The profiling session will continue and CUPTI will notify the user via this error code. Users are advised to recompile the application code with nvcc version 9.0 or later. Ignore this warning if the code is already compiled with the recommended nvcc version.
CUPTI_ERROR_INSUFFICIENT_PRIVILEGES = 35
The user doesn't have sufficient privileges to start the profiling session.
CUPTI_ERROR_UNKNOWN = 999
An unknown internal error has occurred.
CUPTI_ERROR_FORCE_INT = 0x7fffffff

Functions

CUptiResult cuptiGetResultString ( CUptiResult result, const char** str )
Get the descriptive string for a CUptiResult.
Parameters
result
The result to get the string for
str
Returns the string
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_INVALID_PARAMETER

    if str is NULL or result is not a valid CUptiResult

Description

Return the descriptive string for a CUptiResult in *str.

Note:

Thread-safety: this function is thread safe.
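A common pattern, shown here as a sketch rather than an official sample, is to wrap every CUPTI call in a macro that prints the descriptive string and aborts on failure:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cupti.h>

    /* Sketch of an error-checking macro built on cuptiGetResultString. */
    #define CUPTI_CALL(call)                                            \
        do {                                                            \
            CUptiResult _status = (call);                               \
            if (_status != CUPTI_SUCCESS) {                             \
                const char *errstr;                                     \
                cuptiGetResultString(_status, &errstr);                 \
                fprintf(stderr, "%s:%d: %s failed: %s\n",               \
                        __FILE__, __LINE__, #call, errstr);             \
                exit(EXIT_FAILURE);                                     \
            }                                                           \
        } while (0)

The shipped samples use essentially this pattern; later sketches in this document assume this CUPTI_CALL macro.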

2.3. CUPTI Activity API

Functions, types, and enums that implement the CUPTI Activity API.

Classes

struct CUpti_Activity
The base activity record.
struct CUpti_ActivityAPI
The activity record for a driver or runtime API invocation.
struct CUpti_ActivityAutoBoostState
Device auto boost state structure.
struct CUpti_ActivityBranch
The activity record for source level result branch. (deprecated).
struct CUpti_ActivityBranch2
The activity record for source level result branch.
struct CUpti_ActivityCdpKernel
The activity record for CDP (CUDA Dynamic Parallelism) kernel.
struct CUpti_ActivityContext
The activity record for a context.
struct CUpti_ActivityCudaEvent
The activity record for CUDA event.
struct CUpti_ActivityDevice
The activity record for a device. (deprecated).
struct CUpti_ActivityDevice2
The activity record for a device. (CUDA 7.0 onwards).
struct CUpti_ActivityDeviceAttribute
The activity record for a device attribute.
struct CUpti_ActivityEnvironment
The activity record for CUPTI environmental data.
struct CUpti_ActivityEvent
The activity record for a CUPTI event.
struct CUpti_ActivityEventInstance
The activity record for a CUPTI event with instance information.
struct CUpti_ActivityExternalCorrelation
The activity record for correlation with external records.
struct CUpti_ActivityFunction
The activity record for global/device functions.
struct CUpti_ActivityGlobalAccess
The activity record for source-level global access. (deprecated).
struct CUpti_ActivityGlobalAccess2
The activity record for source-level global access. (deprecated in CUDA 9.0).
struct CUpti_ActivityGlobalAccess3
The activity record for source-level global access.
struct CUpti_ActivityInstantaneousEvent
The activity record for an instantaneous CUPTI event.
struct CUpti_ActivityInstantaneousEventInstance
The activity record for an instantaneous CUPTI event with event domain instance information.
struct CUpti_ActivityInstantaneousMetric
The activity record for an instantaneous CUPTI metric.
struct CUpti_ActivityInstantaneousMetricInstance
The instantaneous activity record for a CUPTI metric with instance information.
struct CUpti_ActivityInstructionCorrelation
The activity record for source-level sass/source line-by-line correlation.
struct CUpti_ActivityInstructionExecution
The activity record for source-level instruction execution.
struct CUpti_ActivityKernel
The activity record for kernel. (deprecated).
struct CUpti_ActivityKernel2
The activity record for kernel. (deprecated).
struct CUpti_ActivityKernel3
The activity record for a kernel (CUDA 6.5 (with sm_52 support) onwards). (deprecated in CUDA 9.0).
struct CUpti_ActivityKernel4
The activity record for a kernel.
struct CUpti_ActivityMarker
The activity record providing a marker which is an instantaneous point in time. (deprecated in CUDA 8.0).
struct CUpti_ActivityMarker2
The activity record providing a marker which is an instantaneous point in time.
struct CUpti_ActivityMarkerData
The activity record providing detailed information for a marker.
struct CUpti_ActivityMemcpy
The activity record for memory copies.
struct CUpti_ActivityMemcpy2
The activity record for peer-to-peer memory copies.
struct CUpti_ActivityMemory
The activity record for memory.
struct CUpti_ActivityMemset
The activity record for memset.
struct CUpti_ActivityMetric
The activity record for a CUPTI metric.
struct CUpti_ActivityMetricInstance
The activity record for a CUPTI metric with instance information.
struct CUpti_ActivityModule
The activity record for a CUDA module.
struct CUpti_ActivityName
The activity record providing a name.
struct CUpti_ActivityNvLink
NVLink information. (deprecated in CUDA 9.0).
struct CUpti_ActivityNvLink2
NVLink information. (deprecated in CUDA 10.0).
struct CUpti_ActivityNvLink3
NVLink information.
union CUpti_ActivityObjectKindId
Identifiers for object kinds as specified by CUpti_ActivityObjectKind.
struct CUpti_ActivityOpenAcc
The base activity record for OpenAcc records.
struct CUpti_ActivityOpenAccData
The activity record for OpenACC data.
struct CUpti_ActivityOpenAccLaunch
The activity record for OpenACC launch.
struct CUpti_ActivityOpenAccOther
The activity record for OpenACC other.
struct CUpti_ActivityOpenMp
The base activity record for OpenMp records.
struct CUpti_ActivityOverhead
The activity record for CUPTI and driver overheads.
struct CUpti_ActivityPCSampling
The activity record for PC sampling. (deprecated in CUDA 8.0).
struct CUpti_ActivityPCSampling2
The activity record for PC sampling. (deprecated in CUDA 9.0).
struct CUpti_ActivityPCSampling3
The activity record for PC sampling.
struct CUpti_ActivityPCSamplingConfig
PC sampling configuration structure.
struct CUpti_ActivityPCSamplingRecordInfo
The activity record for record status for PC sampling.
struct CUpti_ActivityPcie
PCI devices information required to construct topology.
struct CUpti_ActivityPreemption
The activity record for a preemption of a CDP kernel.
struct CUpti_ActivitySharedAccess
The activity record for source-level shared access.
struct CUpti_ActivitySourceLocator
The activity record for source locator.
struct CUpti_ActivityStream
The activity record for CUDA stream.
struct CUpti_ActivitySynchronization
The activity record for synchronization management.
struct CUpti_ActivityUnifiedMemoryCounter
The activity record for Unified Memory counters (deprecated in CUDA 7.0).
struct CUpti_ActivityUnifiedMemoryCounter2
The activity record for Unified Memory counters (CUDA 7.0 and beyond).
struct CUpti_ActivityUnifiedMemoryCounterConfig
Unified Memory counters configuration structure.

Defines

#define CUPTI_AUTO_BOOST_INVALID_CLIENT_PID 0
#define CUPTI_CORRELATION_ID_UNKNOWN 0
#define CUPTI_GRID_ID_UNKNOWN 0LL
#define CUPTI_MAX_NVLINK_PORTS 16
#define CUPTI_NVLINK_INVALID_PORT -1
#define CUPTI_SOURCE_LOCATOR_ID_UNKNOWN 0
#define CUPTI_SYNCHRONIZATION_INVALID_VALUE -1
#define CUPTI_TIMESTAMP_UNKNOWN 0LL

Typedefs

typedef void  ( *CUpti_BuffersCallbackCompleteFunc )( CUcontext context,  uint32_t streamId, uint8_t*  buffer,  size_t size,  size_t validSize )
Function type for callback used by CUPTI to return a buffer of activity records.
typedef void  ( *CUpti_BuffersCallbackRequestFunc )( uint8_t*  *buffer, size_t*  size, size_t*  maxNumRecords )
Function type for callback used by CUPTI to request an empty buffer for storing activity records.

Enumerations

enum CUpti_ActivityAttribute
Activity attributes.
enum CUpti_ActivityComputeApiKind
The kind of a compute API.
enum CUpti_ActivityEnvironmentKind
The kind of environment data. Used to indicate what type of data is being reported by an environment activity record.
enum CUpti_ActivityFlag
Flags associated with activity records.
enum CUpti_ActivityInstructionClass
SASS instruction classification.
enum CUpti_ActivityKind
The kinds of activity records.
enum CUpti_ActivityLaunchType
The type of the CUDA kernel launch.
enum CUpti_ActivityMemcpyKind
The kind of a memory copy, indicating the source and destination targets of the copy.
enum CUpti_ActivityMemoryKind
The kinds of memory accessed by a memory operation/copy.
enum CUpti_ActivityObjectKind
The kinds of activity objects.
enum CUpti_ActivityOverheadKind
The kinds of activity overhead.
enum CUpti_ActivityPCSamplingPeriod
Sampling period for the PC sampling method. The sampling period can be set using cuptiActivityConfigurePCSampling.
enum CUpti_ActivityPCSamplingStallReason
The stall reason for PC sampling activity.
enum CUpti_ActivityPartitionedGlobalCacheConfig
Partitioned global caching option.
enum CUpti_ActivityPreemptionKind
The kind of a preemption activity.
enum CUpti_ActivityStreamFlag
Stream type.
enum CUpti_ActivitySynchronizationType
Synchronization type.
enum CUpti_ActivityThreadIdType
Thread-Id types.
enum CUpti_ActivityUnifiedMemoryAccessType
Memory access type for unified memory page faults.
enum CUpti_ActivityUnifiedMemoryCounterKind
Kind of the Unified Memory counter.
enum CUpti_ActivityUnifiedMemoryCounterScope
Scope of the unified memory counter (deprecated in CUDA 7.0).
enum CUpti_ActivityUnifiedMemoryMigrationCause
Migration cause of the Unified Memory counter.
enum CUpti_DevType
The device type for device connected to NVLink.
enum CUpti_DeviceSupport
Device support.
enum CUpti_EnvironmentClocksThrottleReason
Reasons for clock throttling.
enum CUpti_ExternalCorrelationKind
The kind of external APIs supported for correlation.
enum CUpti_LinkFlag
Link flags.
enum CUpti_OpenAccConstructKind
The OpenAcc parent construct kind for OpenAcc activity records.
enum CUpti_OpenAccEventKind
The OpenAcc event kind for OpenAcc activity records.
enum CUpti_PcieDeviceType

Functions

CUptiResult cuptiActivityConfigurePCSampling ( CUcontext ctx, CUpti_ActivityPCSamplingConfig* config )
Set PC sampling configuration.
CUptiResult cuptiActivityConfigureUnifiedMemoryCounter ( CUpti_ActivityUnifiedMemoryCounterConfig* config, uint32_t count )
Set Unified Memory Counter configuration.
CUptiResult cuptiActivityDisable ( CUpti_ActivityKind kind )
Disable collection of a specific kind of activity record.
CUptiResult cuptiActivityDisableContext ( CUcontext context, CUpti_ActivityKind kind )
Disable collection of a specific kind of activity record for a context.
CUptiResult cuptiActivityEnable ( CUpti_ActivityKind kind )
Enable collection of a specific kind of activity record.
CUptiResult cuptiActivityEnableContext ( CUcontext context, CUpti_ActivityKind kind )
Enable collection of a specific kind of activity record for a context.
CUptiResult cuptiActivityEnableLatencyTimestamps ( uint8_t enable )
Controls the collection of queued and submitted timestamps for kernels.
CUptiResult cuptiActivityFlush ( CUcontext context, uint32_t streamId, uint32_t flag )
Wait for all activity records to be delivered via the completion callback.
CUptiResult cuptiActivityFlushAll ( uint32_t flag )
Wait for all activity records to be delivered via the completion callback.
CUptiResult cuptiActivityGetAttribute ( CUpti_ActivityAttribute attr, size_t* valueSize, void* value )
Read an activity API attribute.
CUptiResult cuptiActivityGetNextRecord ( uint8_t* buffer, size_t validBufferSizeBytes, CUpti_Activity** record )
Iterate over the activity records in a buffer.
CUptiResult cuptiActivityGetNumDroppedRecords ( CUcontext context, uint32_t streamId, size_t* dropped )
Get the number of activity records that were dropped because of insufficient buffer space.
CUptiResult cuptiActivityPopExternalCorrelationId ( CUpti_ExternalCorrelationKind kind, uint64_t* lastId )
Pop an external correlation id for the calling thread.
CUptiResult cuptiActivityPushExternalCorrelationId ( CUpti_ExternalCorrelationKind kind, uint64_t id )
Push an external correlation id for the calling thread.
CUptiResult cuptiActivityRegisterCallbacks ( CUpti_BuffersCallbackRequestFunc funcBufferRequested, CUpti_BuffersCallbackCompleteFunc funcBufferCompleted )
Registers callback functions with CUPTI for activity buffer handling.
CUptiResult cuptiActivitySetAttribute ( CUpti_ActivityAttribute attr, size_t* valueSize, void* value )
Write an activity API attribute.
CUptiResult cuptiComputeCapabilitySupported ( int  major, int  minor, int* support )
Check support for a compute capability.
CUptiResult cuptiDeviceSupported ( CUdevice dev, int* support )
Check support for a compute device.
CUptiResult cuptiFinalize ( void )
Clean up CUPTI.
CUptiResult cuptiGetAutoBoostState ( CUcontext context, CUpti_ActivityAutoBoostState* state )
Get auto boost state.
CUptiResult cuptiGetContextId ( CUcontext context, uint32_t* contextId )
Get the ID of a context.
CUptiResult cuptiGetDeviceId ( CUcontext context, uint32_t* deviceId )
Get the ID of a device.
CUptiResult cuptiGetLastError ( void )
Returns the last error from a CUPTI call or callback.
CUptiResult cuptiGetStreamId ( CUcontext context, CUstream stream, uint32_t* streamId )
Get the ID of a stream.
CUptiResult cuptiGetStreamIdEx ( CUcontext context, CUstream stream, uint8_t perThreadStream, uint32_t* streamId )
Get the ID of a stream.
CUptiResult cuptiGetThreadIdType ( CUpti_ActivityThreadIdType* type )
Get the thread-id type.
CUptiResult cuptiGetTimestamp ( uint64_t* timestamp )
Get the CUPTI timestamp.
CUptiResult cuptiSetThreadIdType ( CUpti_ActivityThreadIdType type )
Set the thread-id type.

Defines

#define CUPTI_AUTO_BOOST_INVALID_CLIENT_PID 0

An invalid/unknown process id.

#define CUPTI_CORRELATION_ID_UNKNOWN 0

An invalid/unknown correlation ID. A correlation ID of this value indicates that there is no correlation for the activity record.

#define CUPTI_GRID_ID_UNKNOWN 0LL

An invalid/unknown grid ID.

#define CUPTI_MAX_NVLINK_PORTS 16

Maximum NVLink port numbers.

#define CUPTI_NVLINK_INVALID_PORT -1

Invalid/unknown NVLink port number.

#define CUPTI_SOURCE_LOCATOR_ID_UNKNOWN 0

The source-locator ID that indicates an unknown source location. There is not an actual CUpti_ActivitySourceLocator object corresponding to this value.

#define CUPTI_SYNCHRONIZATION_INVALID_VALUE -1

An invalid/unknown value.

#define CUPTI_TIMESTAMP_UNKNOWN 0LL

An invalid/unknown timestamp for a start, end, queued, submitted, or completed time.

Typedefs

void ( *CUpti_BuffersCallbackCompleteFunc )( CUcontext context,  uint32_t streamId, uint8_t*  buffer,  size_t size,  size_t validSize )

Function type for callback used by CUPTI to return a buffer of activity records. This callback function returns to the CUPTI client a buffer containing activity records. The buffer contains validSize bytes of activity records which should be read using cuptiActivityGetNextRecord. The number of dropped records can be read using cuptiActivityGetNumDroppedRecords. After this call CUPTI relinquishes ownership of the buffer and will not use it again. The client may return the buffer to CUPTI via a subsequent CUpti_BuffersCallbackRequestFunc callback. Note: from CUDA 6.0 onwards, all buffers returned by this callback are global buffers, i.e. there is no context/stream specific buffer. The client needs to parse the global buffer to extract the context/stream specific activity records.

Parameters
context
The context this buffer is associated with. If NULL, the buffer is associated with the global activities. This field is deprecated as of CUDA 6.0 and will always be NULL.
streamId
The ID of the stream this buffer is associated with. This field is deprecated as of CUDA 6.0 and will always be 0.
buffer
The activity record buffer.
size
The size of the buffer, in bytes.
validSize
The number of bytes in the buffer that contain activity records.
void ( *CUpti_BuffersCallbackRequestFunc )( uint8_t*  *buffer, size_t*  size, size_t*  maxNumRecords )

Function type for callback used by CUPTI to request an empty buffer for storing activity records. This callback function signals the CUPTI client that an activity buffer is needed by CUPTI. The activity buffer is used by CUPTI to store activity records. The callback function can decline the request by setting *buffer to NULL. In this case CUPTI may drop activity records.

Parameters
buffer
Returns the new buffer. If set to NULL then no buffer is returned, and CUPTI may drop activity records.
size
Returns the size of the returned buffer.
maxNumRecords
Returns the maximum number of records that should be placed in the buffer. If 0 then the buffer is filled with as many records as possible. If > 0 the buffer is filled with at most that many records before it is returned.
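Putting the two callback types together, a minimal sketch of buffer handling (malloc-based; the buffer alignment handling done in the shipped samples is omitted here for brevity):

    #include <stdio.h>
    #include <stdlib.h>
    #include <cupti.h>

    #define BUF_SIZE (8 * 1024 * 1024)

    /* Sketch: hand CUPTI a heap buffer when it asks for one. */
    static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size,
                                         size_t *maxNumRecords)
    {
        *size = BUF_SIZE;
        *buffer = (uint8_t *)malloc(BUF_SIZE);
        *maxNumRecords = 0;  /* 0: fill with as many records as possible */
    }

    /* Sketch: drain a completed buffer with cuptiActivityGetNextRecord. */
    static void CUPTIAPI bufferCompleted(CUcontext ctx, uint32_t streamId,
                                         uint8_t *buffer, size_t size,
                                         size_t validSize)
    {
        CUpti_Activity *record = NULL;
        while (cuptiActivityGetNextRecord(buffer, validSize, &record) ==
               CUPTI_SUCCESS) {
            printf("activity kind %d\n", (int)record->kind);
        }
        free(buffer);
    }

The pair is then registered once, before any activity is collected, with cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted).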

Enumerations

enum CUpti_ActivityAttribute

These attributes are used to control the behavior of the activity API.

Values
CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE = 0
The device memory size (in bytes) reserved for storing profiling data for non-CDP operations, especially for concurrent kernel tracing, for each buffer on a context. The value is a size_t. A larger buffer size means fewer flush operations but consumes more device memory. A smaller buffer size increases the risk of dropping timestamps for kernel records if too many kernels are launched/replayed at one time. This value only applies to new buffer allocations. Set this value before initializing CUDA or before creating a context to ensure it is considered for the following allocations. The default value is 8388608 (8MB). Note: The actual amount of device memory per buffer reserved by CUPTI might be larger.
CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE_CDP = 1
The device memory size (in bytes) reserved for storing profiling data for CDP operations for each buffer on a context. The value is a size_t. A larger buffer size means fewer flush operations but consumes more device memory. This value only applies to new allocations. Set this value before initializing CUDA or before creating a context to ensure it is considered for the following allocations. The default value is 8388608 (8MB). Note: The actual amount of device memory per context reserved by CUPTI might be larger.
CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_POOL_LIMIT = 2
The maximum number of memory buffers per context. The value is a size_t. Buffers can be reused by the context. Increasing this value reduces the number of times CUPTI needs to flush the buffers. Setting this value will not modify the number of memory buffers currently stored. Set this value before initializing CUDA to ensure the limit is not exceeded. The default value is 100.
CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_SIZE = 3
The profiling semaphore pool size reserved for storing profiling data for serialized kernels and memory operations for each context. The value is a size_t. A larger pool size means fewer semaphore query operations but consumes more device resources. A smaller pool size increases the risk of dropping timestamps for kernel and memcpy records if too many kernels or memcpys are launched/replayed at one time. This value only applies to new pool allocations. Set this value before initializing CUDA or before creating a context to ensure it is considered for the following allocations. The default value is 65536.
CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_LIMIT = 4
The maximum number of profiling semaphore pools per context. The value is a size_t. Profiling semaphore pools can be reused by the context. Increasing this value reduces the number of times CUPTI needs to query semaphores in the pool. Setting this value will not modify the number of semaphore pools currently stored. Set this value before initializing CUDA to ensure the limit is not exceeded. The default value is 100.
CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_FORCE_INT = 0x7fffffff
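As a sketch of how these attributes are used (assuming the CUPTI_CALL macro from the result-codes section), a tool can raise the per-buffer device memory size before CUDA is initialized:

    size_t attrValue = 16 * 1024 * 1024;  /* 16 MB instead of the 8 MB default */
    size_t attrSize  = sizeof(size_t);
    CUPTI_CALL(cuptiActivitySetAttribute(CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE,
                                         &attrSize, &attrValue));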
enum CUpti_ActivityComputeApiKind

Values
CUPTI_ACTIVITY_COMPUTE_API_UNKNOWN = 0
The compute API is not known.
CUPTI_ACTIVITY_COMPUTE_API_CUDA = 1
The compute APIs are for CUDA.
CUPTI_ACTIVITY_COMPUTE_API_CUDA_MPS = 2
The compute APIs are for CUDA running in MPS (Multi-Process Service) environment.
CUPTI_ACTIVITY_COMPUTE_API_FORCE_INT = 0x7fffffff
enum CUpti_ActivityEnvironmentKind

Values
CUPTI_ACTIVITY_ENVIRONMENT_UNKNOWN = 0
Unknown data.
CUPTI_ACTIVITY_ENVIRONMENT_SPEED = 1
The environment data is related to speed.
CUPTI_ACTIVITY_ENVIRONMENT_TEMPERATURE = 2
The environment data is related to temperature.
CUPTI_ACTIVITY_ENVIRONMENT_POWER = 3
The environment data is related to power.
CUPTI_ACTIVITY_ENVIRONMENT_COOLING = 4
The environment data is related to cooling.
CUPTI_ACTIVITY_ENVIRONMENT_COUNT
CUPTI_ACTIVITY_ENVIRONMENT_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityFlag

Activity record flags. Flags can be combined by bitwise OR to associate multiple flags with an activity record. Each flag is specific to a certain activity kind, as noted below.

Values
CUPTI_ACTIVITY_FLAG_NONE = 0
Indicates the activity record has no flags.
CUPTI_ACTIVITY_FLAG_DEVICE_CONCURRENT_KERNELS = 1<<0
Indicates the activity represents a device that supports concurrent kernel execution. Valid for CUPTI_ACTIVITY_KIND_DEVICE.
CUPTI_ACTIVITY_FLAG_DEVICE_ATTRIBUTE_CUDEVICE = 1<<0
Indicates if the activity represents a CUdevice_attribute value or a CUpti_DeviceAttribute value. Valid for CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE.
CUPTI_ACTIVITY_FLAG_MEMCPY_ASYNC = 1<<0
Indicates the activity represents an asynchronous memcpy operation. Valid for CUPTI_ACTIVITY_KIND_MEMCPY.
CUPTI_ACTIVITY_FLAG_MARKER_INSTANTANEOUS = 1<<0
Indicates the activity represents an instantaneous marker. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_START = 1<<1
Indicates the activity represents a region start marker. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_END = 1<<2
Indicates the activity represents a region end marker. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE = 1<<3
Indicates the activity represents an attempt to acquire a user defined synchronization object. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE_SUCCESS = 1<<4
Indicates the activity represents success in acquiring the user defined synchronization object. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE_FAILED = 1<<5
Indicates the activity represents failure in acquiring the user defined synchronization object. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_SYNC_RELEASE = 1<<6
Indicates the activity represents releasing a reservation on user defined synchronization object. Valid for CUPTI_ACTIVITY_KIND_MARKER.
CUPTI_ACTIVITY_FLAG_MARKER_COLOR_NONE = 1<<0
Indicates the activity represents a marker that does not specify a color. Valid for CUPTI_ACTIVITY_KIND_MARKER_DATA.
CUPTI_ACTIVITY_FLAG_MARKER_COLOR_ARGB = 1<<1
Indicates the activity represents a marker that specifies a color in alpha-red-green-blue format. Valid for CUPTI_ACTIVITY_KIND_MARKER_DATA.
CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_SIZE_MASK = 0xFF<<0
The number of bytes requested by each thread. Valid for CUpti_ActivityGlobalAccess3.
CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_LOAD = 1<<8
If this bit is set, the access was a load; otherwise it was a store. Valid for CUpti_ActivityGlobalAccess3.
CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_CACHED = 1<<9
If this bit is set, the load access was cached; otherwise it was uncached. Valid for CUpti_ActivityGlobalAccess3.
CUPTI_ACTIVITY_FLAG_METRIC_OVERFLOWED = 1<<0
If this bit is set, the metric value overflowed. Valid for CUpti_ActivityMetric and CUpti_ActivityMetricInstance.
CUPTI_ACTIVITY_FLAG_METRIC_VALUE_INVALID = 1<<1
If this bit is set, the metric value couldn't be calculated. This occurs when a value required to calculate the metric is missing. Valid for CUpti_ActivityMetric and CUpti_ActivityMetricInstance.
CUPTI_ACTIVITY_FLAG_INSTRUCTION_VALUE_INVALID = 1<<0
If this bit is set, the source level metric value couldn't be calculated. This occurs when a value required to calculate the source level metric cannot be evaluated. Valid for CUpti_ActivityInstructionExecution.
CUPTI_ACTIVITY_FLAG_INSTRUCTION_CLASS_MASK = 0xFF<<1
The mask for the instruction class, CUpti_ActivityInstructionClass. Valid for CUpti_ActivityInstructionExecution and CUpti_ActivityInstructionCorrelation.
CUPTI_ACTIVITY_FLAG_FLUSH_FORCED = 1<<0
When calling cuptiActivityFlushAll, this flag can be set to force CUPTI to flush all records in the buffer, whether finished or not.
CUPTI_ACTIVITY_FLAG_SHARED_ACCESS_KIND_SIZE_MASK = 0xFF<<0
The number of bytes requested by each thread. Valid for CUpti_ActivitySharedAccess.
CUPTI_ACTIVITY_FLAG_SHARED_ACCESS_KIND_LOAD = 1<<8
If this bit is set, the access was a load; otherwise it was a store. Valid for CUpti_ActivitySharedAccess.
CUPTI_ACTIVITY_FLAG_MEMSET_ASYNC = 1<<0
Indicates the activity represents an asynchronous memset operation. Valid for CUPTI_ACTIVITY_KIND_MEMSET.
CUPTI_ACTIVITY_FLAG_THRASHING_IN_CPU = 1<<0
Indicates the activity represents thrashing in the CPU. Valid for counters of kind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING in CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER.
CUPTI_ACTIVITY_FLAG_THROTTLING_IN_CPU = 1<<0
Indicates the activity represents page throttling in the CPU. Valid for counters of kind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING in CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER.
CUPTI_ACTIVITY_FLAG_FORCE_INT = 0x7fffffff
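As a sketch of how the access-kind masks are intended to be decoded, a tool processing a CUpti_ActivityGlobalAccess3 record can extract the per-thread size and the load/cached bits from the record's flags field:

    #include <stdio.h>
    #include <cupti.h>

    /* Sketch: decode the flags field of a CUpti_ActivityGlobalAccess3 record. */
    static void decodeGlobalAccessFlags(const CUpti_ActivityGlobalAccess3 *ga)
    {
        uint32_t bytesPerThread =
            ga->flags & CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_SIZE_MASK;
        int isLoad =
            (ga->flags & CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_LOAD) != 0;
        int isCached =
            (ga->flags & CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_CACHED) != 0;
        printf("%s, %u bytes/thread, %s\n",
               isLoad ? "load" : "store", (unsigned)bytesPerThread,
               isCached ? "cached" : "uncached");
    }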
enum CUpti_ActivityInstructionClass

SASS instructions are broadly divided into different classes. Each enum value represents a classification.

Values
CUPTI_ACTIVITY_INSTRUCTION_CLASS_UNKNOWN = 0
The instruction class is not known.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_32 = 1
Represents a 32 bit floating point operation.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_64 = 2
Represents a 64 bit floating point operation.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_INTEGER = 3
Represents an integer operation.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_BIT_CONVERSION = 4
Represents a bit conversion operation.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_CONTROL_FLOW = 5
Represents a control flow instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_GLOBAL = 6
Represents a global load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_SHARED = 7
Represents a shared load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_LOCAL = 8
Represents a local load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_GENERIC = 9
Represents a generic load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_SURFACE = 10
Represents a surface load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_CONSTANT = 11
Represents a constant load instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_TEXTURE = 12
Represents a texture load-store instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_GLOBAL_ATOMIC = 13
Represents a global atomic instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_SHARED_ATOMIC = 14
Represents a shared atomic instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_SURFACE_ATOMIC = 15
Represents a surface atomic instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_INTER_THREAD_COMMUNICATION = 16
Represents an inter-thread communication instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_BARRIER = 17
Represents a barrier instruction.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_MISCELLANEOUS = 18
Represents some miscellaneous instructions which do not fit in the above classification.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_16 = 19
Represents a 16 bit floating point operation.
CUPTI_ACTIVITY_INSTRUCTION_CLASS_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityKind

Each activity record kind represents information about a GPU or an activity occurring on a CPU or GPU. Each kind is associated with an activity record structure that holds the information associated with the kind.

See also:

CUpti_Activity

CUpti_ActivityAPI

CUpti_ActivityContext

CUpti_ActivityDevice

CUpti_ActivityDevice2

CUpti_ActivityDeviceAttribute

CUpti_ActivityEvent

CUpti_ActivityEventInstance

CUpti_ActivityKernel

CUpti_ActivityKernel2

CUpti_ActivityKernel3

CUpti_ActivityKernel4

CUpti_ActivityCdpKernel

CUpti_ActivityPreemption

CUpti_ActivityMemcpy

CUpti_ActivityMemcpy2

CUpti_ActivityMemset

CUpti_ActivityMetric

CUpti_ActivityMetricInstance

CUpti_ActivityName

CUpti_ActivityMarker

CUpti_ActivityMarker2

CUpti_ActivityMarkerData

CUpti_ActivitySourceLocator

CUpti_ActivityGlobalAccess

CUpti_ActivityGlobalAccess2

CUpti_ActivityGlobalAccess3

CUpti_ActivityBranch

CUpti_ActivityBranch2

CUpti_ActivityOverhead

CUpti_ActivityEnvironment

CUpti_ActivityInstructionExecution

CUpti_ActivityUnifiedMemoryCounter

CUpti_ActivityFunction

CUpti_ActivityModule

CUpti_ActivitySharedAccess

CUpti_ActivityPCSampling

CUpti_ActivityPCSampling2

CUpti_ActivityPCSampling3

CUpti_ActivityPCSamplingRecordInfo

CUpti_ActivityCudaEvent

CUpti_ActivityStream

CUpti_ActivitySynchronization

CUpti_ActivityInstructionCorrelation

CUpti_ActivityExternalCorrelation

CUpti_ActivityUnifiedMemoryCounter2

CUpti_ActivityOpenAccData

CUpti_ActivityOpenAccLaunch

CUpti_ActivityOpenAccOther

CUpti_ActivityOpenMp

CUpti_ActivityNvLink

CUpti_ActivityNvLink2

CUpti_ActivityNvLink3

CUpti_ActivityMemory

CUpti_ActivityPcie

Values
CUPTI_ACTIVITY_KIND_INVALID = 0
The activity record is invalid.
CUPTI_ACTIVITY_KIND_MEMCPY = 1
A host<->host, host<->device, or device<->device memory copy. The corresponding activity record structure is CUpti_ActivityMemcpy.
CUPTI_ACTIVITY_KIND_MEMSET = 2
A memory set executing on the GPU. The corresponding activity record structure is CUpti_ActivityMemset.
CUPTI_ACTIVITY_KIND_KERNEL = 3
A kernel executing on the GPU. The corresponding activity record structure is CUpti_ActivityKernel4.
CUPTI_ACTIVITY_KIND_DRIVER = 4
A CUDA driver API function execution. The corresponding activity record structure is CUpti_ActivityAPI.
CUPTI_ACTIVITY_KIND_RUNTIME = 5
A CUDA runtime API function execution. The corresponding activity record structure is CUpti_ActivityAPI.
CUPTI_ACTIVITY_KIND_EVENT = 6
An event value. The corresponding activity record structure is CUpti_ActivityEvent.
CUPTI_ACTIVITY_KIND_METRIC = 7
A metric value. The corresponding activity record structure is CUpti_ActivityMetric.
CUPTI_ACTIVITY_KIND_DEVICE = 8
Information about a device. The corresponding activity record structure is CUpti_ActivityDevice2.
CUPTI_ACTIVITY_KIND_CONTEXT = 9
Information about a context. The corresponding activity record structure is CUpti_ActivityContext.
CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL = 10
A (potentially concurrent) kernel executing on the GPU. The corresponding activity record structure is CUpti_ActivityKernel4.
CUPTI_ACTIVITY_KIND_NAME = 11
Thread, device, context, etc. name. The corresponding activity record structure is CUpti_ActivityName.
CUPTI_ACTIVITY_KIND_MARKER = 12
Instantaneous, start, or end marker. The corresponding activity record structure is CUpti_ActivityMarker2.
CUPTI_ACTIVITY_KIND_MARKER_DATA = 13
Extended, optional, data about a marker. The corresponding activity record structure is CUpti_ActivityMarkerData.
CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR = 14
Source information about source level result. The corresponding activity record structure is CUpti_ActivitySourceLocator.
CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS = 15
Results for source-level global access. The corresponding activity record structure is CUpti_ActivityGlobalAccess3.
CUPTI_ACTIVITY_KIND_BRANCH = 16
Results for source-level branch. The corresponding activity record structure is CUpti_ActivityBranch2.
CUPTI_ACTIVITY_KIND_OVERHEAD = 17
Overhead activity records. The corresponding activity record structure is CUpti_ActivityOverhead.
CUPTI_ACTIVITY_KIND_CDP_KERNEL = 18
A CDP (CUDA Dynamic Parallelism) kernel executing on the GPU. The corresponding activity record structure is CUpti_ActivityCdpKernel. This activity cannot be directly enabled or disabled. It is enabled and disabled through the concurrent kernel activity kind, i.e. CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL.
CUPTI_ACTIVITY_KIND_PREEMPTION = 19
Preemption activity record indicating a preemption of a CDP (CUDA Dynamic Parallelism) kernel executing on the GPU. The corresponding activity record structure is CUpti_ActivityPreemption.
CUPTI_ACTIVITY_KIND_ENVIRONMENT = 20
Environment activity records indicating power, clock, thermal, etc. levels of the GPU. The corresponding activity record structure is CUpti_ActivityEnvironment.
CUPTI_ACTIVITY_KIND_EVENT_INSTANCE = 21
An event value associated with a specific event domain instance. The corresponding activity record structure is CUpti_ActivityEventInstance.
CUPTI_ACTIVITY_KIND_MEMCPY2 = 22
A peer to peer memory copy. The corresponding activity record structure is CUpti_ActivityMemcpy2.
CUPTI_ACTIVITY_KIND_METRIC_INSTANCE = 23
A metric value associated with a specific metric domain instance. The corresponding activity record structure is CUpti_ActivityMetricInstance.
CUPTI_ACTIVITY_KIND_INSTRUCTION_EXECUTION = 24
Results for source-level instruction execution. The corresponding activity record structure is CUpti_ActivityInstructionExecution.
CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER = 25
Unified Memory counter record. The corresponding activity record structure is CUpti_ActivityUnifiedMemoryCounter2.
CUPTI_ACTIVITY_KIND_FUNCTION = 26
Device global/function record. The corresponding activity record structure is CUpti_ActivityFunction.
CUPTI_ACTIVITY_KIND_MODULE = 27
CUDA Module record. The corresponding activity record structure is CUpti_ActivityModule.
CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE = 28
A device attribute value. The corresponding activity record structure is CUpti_ActivityDeviceAttribute.
CUPTI_ACTIVITY_KIND_SHARED_ACCESS = 29
Results for source-level shared access. The corresponding activity record structure is CUpti_ActivitySharedAccess.
CUPTI_ACTIVITY_KIND_PC_SAMPLING = 30
Enable PC sampling for kernels. This will serialize kernels. The corresponding activity record structure is CUpti_ActivityPCSampling3.
CUPTI_ACTIVITY_KIND_PC_SAMPLING_RECORD_INFO = 31
Summary information about PC sampling records. The corresponding activity record structure is CUpti_ActivityPCSamplingRecordInfo.
CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION = 32
SASS/Source line-by-line correlation record. This will generate sass/source correlation for functions that have source level analysis or pc sampling results. The records will be generated only when either of source level analysis or pc sampling activity is enabled. The corresponding activity record structure is CUpti_ActivityInstructionCorrelation.
CUPTI_ACTIVITY_KIND_OPENACC_DATA = 33
OpenACC data events. The corresponding activity record structure is CUpti_ActivityOpenAccData.
CUPTI_ACTIVITY_KIND_OPENACC_LAUNCH = 34
OpenACC launch events. The corresponding activity record structure is CUpti_ActivityOpenAccLaunch.
CUPTI_ACTIVITY_KIND_OPENACC_OTHER = 35
OpenACC other events. The corresponding activity record structure is CUpti_ActivityOpenAccOther.
CUPTI_ACTIVITY_KIND_CUDA_EVENT = 36
Information about a CUDA event. The corresponding activity record structure is CUpti_ActivityCudaEvent.
CUPTI_ACTIVITY_KIND_STREAM = 37
Information about a CUDA stream. The corresponding activity record structure is CUpti_ActivityStream.
CUPTI_ACTIVITY_KIND_SYNCHRONIZATION = 38
Records for synchronization management. The corresponding activity record structure is CUpti_ActivitySynchronization.
CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION = 39
Records for correlation of different programming APIs. The corresponding activity record structure is CUpti_ActivityExternalCorrelation.
CUPTI_ACTIVITY_KIND_NVLINK = 40
NVLink information. The corresponding activity record structure is CUpti_ActivityNvLink3.
CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT = 41
Instantaneous Event information. The corresponding activity record structure is CUpti_ActivityInstantaneousEvent.
CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT_INSTANCE = 42
Instantaneous Event information for a specific event domain instance. The corresponding activity record structure is CUpti_ActivityInstantaneousEventInstance
CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC = 43
Instantaneous metric information. The corresponding activity record structure is CUpti_ActivityInstantaneousMetric.
CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC_INSTANCE = 44
Instantaneous Metric information for a specific metric domain instance. The corresponding activity record structure is CUpti_ActivityInstantaneousMetricInstance.
CUPTI_ACTIVITY_KIND_MEMORY = 45
Memory activity tracking allocation and freeing of memory. The corresponding activity record structure is CUpti_ActivityMemory.
CUPTI_ACTIVITY_KIND_PCIE = 46
PCIe device and topology information. The corresponding activity record structure is CUpti_ActivityPcie.
CUPTI_ACTIVITY_KIND_OPENMP = 47
OpenMP parallel events. The corresponding activity record structure is CUpti_ActivityOpenMp.
CUPTI_ACTIVITY_KIND_COUNT = 48
CUPTI_ACTIVITY_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityLaunchType

Values
CUPTI_ACTIVITY_LAUNCH_TYPE_REGULAR = 0
The kernel was launched via a regular kernel call
CUPTI_ACTIVITY_LAUNCH_TYPE_COOPERATIVE_SINGLE_DEVICE = 1
The kernel was launched via API cudaLaunchCooperativeKernel() or cuLaunchCooperativeKernel()
CUPTI_ACTIVITY_LAUNCH_TYPE_COOPERATIVE_MULTI_DEVICE = 2
The kernel was launched via API cudaLaunchCooperativeKernelMultiDevice() or cuLaunchCooperativeKernelMultiDevice()
enum CUpti_ActivityMemcpyKind

Each kind represents the source and destination targets of a memory copy. Targets are host, device, and array.

Values
CUPTI_ACTIVITY_MEMCPY_KIND_UNKNOWN = 0
The memory copy kind is not known.
CUPTI_ACTIVITY_MEMCPY_KIND_HTOD = 1
A host to device memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_DTOH = 2
A device to host memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_HTOA = 3
A host to device array memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_ATOH = 4
A device array to host memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_ATOA = 5
A device array to device array memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_ATOD = 6
A device array to device memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_DTOA = 7
A device to device array memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_DTOD = 8
A device to device memory copy on the same device.
CUPTI_ACTIVITY_MEMCPY_KIND_HTOH = 9
A host to host memory copy.
CUPTI_ACTIVITY_MEMCPY_KIND_PTOP = 10
A peer to peer memory copy across different devices.
CUPTI_ACTIVITY_MEMCPY_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityMemoryKind

Each kind represents the type of the memory accessed by a memory operation/copy.

Values
CUPTI_ACTIVITY_MEMORY_KIND_UNKNOWN = 0
The memory kind is unknown.
CUPTI_ACTIVITY_MEMORY_KIND_PAGEABLE = 1
The memory is pageable.
CUPTI_ACTIVITY_MEMORY_KIND_PINNED = 2
The memory is pinned.
CUPTI_ACTIVITY_MEMORY_KIND_DEVICE = 3
The memory is on the device.
CUPTI_ACTIVITY_MEMORY_KIND_ARRAY = 4
The memory is an array.
CUPTI_ACTIVITY_MEMORY_KIND_MANAGED = 5
The memory is managed
CUPTI_ACTIVITY_MEMORY_KIND_DEVICE_STATIC = 6
The memory is device static
CUPTI_ACTIVITY_MEMORY_KIND_MANAGED_STATIC = 7
The memory is managed static
CUPTI_ACTIVITY_MEMORY_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityObjectKind
Values
CUPTI_ACTIVITY_OBJECT_UNKNOWN = 0
The object kind is not known.
CUPTI_ACTIVITY_OBJECT_PROCESS = 1
A process.
CUPTI_ACTIVITY_OBJECT_THREAD = 2
A thread.
CUPTI_ACTIVITY_OBJECT_DEVICE = 3
A device.
CUPTI_ACTIVITY_OBJECT_CONTEXT = 4
A context.
CUPTI_ACTIVITY_OBJECT_STREAM = 5
A stream.
CUPTI_ACTIVITY_OBJECT_FORCE_INT = 0x7fffffff
enum CUpti_ActivityOverheadKind

Values
CUPTI_ACTIVITY_OVERHEAD_UNKNOWN = 0
The overhead kind is not known.
CUPTI_ACTIVITY_OVERHEAD_DRIVER_COMPILER = 1
Compiler(JIT) overhead.
CUPTI_ACTIVITY_OVERHEAD_CUPTI_BUFFER_FLUSH = 1<<16
Activity buffer flush overhead.
CUPTI_ACTIVITY_OVERHEAD_CUPTI_INSTRUMENTATION = 2<<16
CUPTI instrumentation overhead.
CUPTI_ACTIVITY_OVERHEAD_CUPTI_RESOURCE = 3<<16
CUPTI resource creation and destruction overhead.
CUPTI_ACTIVITY_OVERHEAD_FORCE_INT = 0x7fffffff
enum CUpti_ActivityPCSamplingPeriod

Values
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_INVALID = 0
The PC sampling period is not set.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MIN = 1
Minimum sampling period available on the device.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_LOW = 2
Sampling period in lower range.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MID = 3
Medium sampling period.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_HIGH = 4
Sampling period in higher range.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MAX = 5
Maximum sampling period available on the device.
CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_FORCE_INT = 0x7fffffff
enum CUpti_ActivityPCSamplingStallReason

Values
CUPTI_ACTIVITY_PC_SAMPLING_STALL_INVALID = 0
Invalid reason
CUPTI_ACTIVITY_PC_SAMPLING_STALL_NONE = 1
No stall, instruction is selected for issue
CUPTI_ACTIVITY_PC_SAMPLING_STALL_INST_FETCH = 2
Warp is blocked because the next instruction is not yet available, due to an instruction cache miss or branching effects.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_EXEC_DEPENDENCY = 3
Instruction is waiting on an arithmetic dependency.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_MEMORY_DEPENDENCY = 4
Warp is blocked because it is waiting for a memory access to complete.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_TEXTURE = 5
Texture sub-system is fully utilized or has too many outstanding requests.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_SYNC = 6
Warp is blocked as it is waiting at __syncthreads() or at a memory barrier.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_CONSTANT_MEMORY_DEPENDENCY = 7
Warp is blocked waiting for __constant__ memory and immediate memory accesses to complete.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_PIPE_BUSY = 8
Compute operation cannot be performed due to the required resources not being available.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_MEMORY_THROTTLE = 9
Warp is blocked because there are too many pending memory operations. On the Kepler architecture this often indicates a high number of memory replays.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_NOT_SELECTED = 10
Warp was ready to issue, but some other warp issued instead.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_OTHER = 11
Miscellaneous reasons
CUPTI_ACTIVITY_PC_SAMPLING_STALL_SLEEPING = 12
Sleeping.
CUPTI_ACTIVITY_PC_SAMPLING_STALL_FORCE_INT = 0x7fffffff
enum CUpti_ActivityPartitionedGlobalCacheConfig

Values
CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_UNKNOWN = 0
Partitioned global cache config unknown.
CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_NOT_SUPPORTED = 1
Partitioned global cache not supported.
CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_OFF = 2
Partitioned global cache config off.
CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_ON = 3
Partitioned global cache config on.
CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_FORCE_INT = 0x7fffffff
enum CUpti_ActivityPreemptionKind

Values
CUPTI_ACTIVITY_PREEMPTION_KIND_UNKNOWN = 0
The preemption kind is not known.
CUPTI_ACTIVITY_PREEMPTION_KIND_SAVE = 1
Preemption to save CDP block.
CUPTI_ACTIVITY_PREEMPTION_KIND_RESTORE = 2
Preemption to restore CDP block.
CUPTI_ACTIVITY_PREEMPTION_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityStreamFlag

The types of stream to be used with CUpti_ActivityStream.

Values
CUPTI_ACTIVITY_STREAM_CREATE_FLAG_UNKNOWN = 0
Unknown data.
CUPTI_ACTIVITY_STREAM_CREATE_FLAG_DEFAULT = 1
Default stream.
CUPTI_ACTIVITY_STREAM_CREATE_FLAG_NON_BLOCKING = 2
Non-blocking stream.
CUPTI_ACTIVITY_STREAM_CREATE_FLAG_NULL = 3
Null stream.
CUPTI_ACTIVITY_STREAM_CREATE_MASK = 0xFFFF
Stream creation mask.
CUPTI_ACTIVITY_STREAM_CREATE_FLAG_FORCE_INT = 0x7fffffff
enum CUpti_ActivitySynchronizationType

The types of synchronization to be used with CUpti_ActivitySynchronization.

Values
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_UNKNOWN = 0
Unknown data.
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_EVENT_SYNCHRONIZE = 1
Event synchronize API.
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_STREAM_WAIT_EVENT = 2
Stream wait event API.
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_STREAM_SYNCHRONIZE = 3
Stream synchronize API.
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_CONTEXT_SYNCHRONIZE = 4
Context synchronize API.
CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_FORCE_INT = 0x7fffffff
enum CUpti_ActivityThreadIdType

CUPTI uses different methods to obtain the thread-id depending on the support and the underlying platform. This enum documents these methods for each type. APIs cuptiSetThreadIdType and cuptiGetThreadIdType can be used to set and get the thread-id type.

Values
CUPTI_ACTIVITY_THREAD_ID_TYPE_DEFAULT = 0
Default type. Windows uses the API GetCurrentThreadId(); Linux/Mac/Android/QNX use the POSIX pthread API pthread_self().
CUPTI_ACTIVITY_THREAD_ID_TYPE_SYSTEM = 1
This type is based on the system API available on the underlying platform, and the thread-id obtained is expected to be unique for the process lifetime. Windows uses the API GetCurrentThreadId(); Linux uses the syscall SYS_gettid; Mac uses the syscall SYS_thread_selfid; Android/QNX use gettid().
CUPTI_ACTIVITY_THREAD_ID_TYPE_FORCE_INT = 0x7fffffff
enum CUpti_ActivityUnifiedMemoryAccessType
Values
CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_UNKNOWN = 0
The unified memory access type is not known
CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_READ = 1
The page fault was triggered by read memory instruction
CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_WRITE = 2
The page fault was triggered by write memory instruction
CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_ATOMIC = 3
The page fault was triggered by atomic memory instruction
CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_PREFETCH = 4
The page fault was triggered by memory prefetch operation
enum CUpti_ActivityUnifiedMemoryCounterKind

Many activities are associated with the Unified Memory mechanism; among them are transfers from host to device and from device to host, and page faults on the host side.

Values
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_UNKNOWN = 0
The unified memory counter kind is not known.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD = 1
Number of bytes transferred from host to device.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH = 2
Number of bytes transferred from device to host.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT = 3
Number of CPU page faults. This is only supported on 64-bit Linux and Mac platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT = 4
Number of GPU page faults. This is only supported on devices with compute capability 6.0 and higher and on 64-bit Linux platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING = 5
Thrashing occurs when data is frequently accessed by multiple processors and has to be constantly migrated around to achieve data locality. In this case the overhead of migration may exceed the benefits of locality. This is only supported on 64 bit Linux platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING = 6
Throttling is a prevention technique used by the driver to avoid further thrashing. Here, the driver doesn't service the fault for one of the contending processors for a specific period of time, so that the other processor can run at full-speed. This is only supported on 64 bit Linux platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP = 7
In case throttling does not help, the driver tries to pin the memory to a processor for a specific period of time. One of the contending processors will have slow access to the memory, while the other will have fast access. This is only supported on 64 bit Linux platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOD = 8
Number of bytes transferred from one device to another device. This is only supported on 64 bit Linux platforms.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_COUNT
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_FORCE_INT = 0x7fffffff
enum CUpti_ActivityUnifiedMemoryCounterScope

Values
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_UNKNOWN = 0
The unified memory counter scope is not known.
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_PROCESS_SINGLE_DEVICE = 1
Collect unified memory counter for single process on one device
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_PROCESS_ALL_DEVICES = 2
Collect unified memory counter for single process across all devices
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_COUNT
CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_FORCE_INT = 0x7fffffff
enum CUpti_ActivityUnifiedMemoryMigrationCause
Values
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_UNKNOWN = 0
The unified memory migration cause is not known
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_USER = 1
The unified memory migrated due to an explicit call from the user e.g. cudaMemPrefetchAsync
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_COHERENCE = 2
The unified memory migrated to guarantee data coherence e.g. CPU/GPU faults on Pascal+ and kernel launch on pre-Pascal GPUs
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_PREFETCH = 3
The unified memory was speculatively migrated by the UVM driver before being accessed by the destination processor to improve performance
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_EVICTION = 4
The unified memory migrated to the CPU because it was evicted to make room for another block of memory on the GPU
CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_ACCESS_COUNTERS = 5
The unified memory migrated to another processor because of access counter notifications
enum CUpti_DevType

Values
CUPTI_DEV_TYPE_INVALID = 0
CUPTI_DEV_TYPE_GPU = 1
The device type is GPU.
CUPTI_DEV_TYPE_NPU = 2
The device type is an NVLink processing unit (NPU) in a CPU.
CUPTI_DEV_TYPE_FORCE_INT = 0x7fffffff
enum CUpti_DeviceSupport

Describes device support returned by API cuptiDeviceSupported.

Values
CUPTI_DEVICE_UNSUPPORTED = 0
The device is not supported.
CUPTI_DEVICE_SUPPORTED = 1
The device is supported.
CUPTI_DEVICE_VIRTUAL = 2
The device is a virtual GPU.
enum CUpti_EnvironmentClocksThrottleReason

The possible reasons that a clock can be throttled. There can be more than one reason that a clock is being throttled so these types can be combined by bitwise OR. These are used in the clocksThrottleReason field in the Environment Activity Record.

Values
CUPTI_CLOCKS_THROTTLE_REASON_GPU_IDLE = 0x00000001
Nothing is running on the GPU and the clocks are dropping to idle state.
CUPTI_CLOCKS_THROTTLE_REASON_USER_DEFINED_CLOCKS = 0x00000002
The GPU clocks are limited by a user specified limit.
CUPTI_CLOCKS_THROTTLE_REASON_SW_POWER_CAP = 0x00000004
A software power scaling algorithm is reducing the clocks below requested clocks.
CUPTI_CLOCKS_THROTTLE_REASON_HW_SLOWDOWN = 0x00000008
Hardware slowdown to reduce the clock by a factor of two or more is engaged. This is an indicator of one of the following: 1) Temperature is too high, 2) External power brake assertion is being triggered (e.g. by the system power supply), 3) Change in power state.
CUPTI_CLOCKS_THROTTLE_REASON_UNKNOWN = 0x80000000
Some unspecified factor is reducing the clocks.
CUPTI_CLOCKS_THROTTLE_REASON_UNSUPPORTED = 0x40000000
Throttle reason is not supported for this GPU.
CUPTI_CLOCKS_THROTTLE_REASON_NONE = 0x00000000
No clock throttling.
CUPTI_CLOCKS_THROTTLE_REASON_FORCE_INT = 0x7fffffff
enum CUpti_ExternalCorrelationKind

Custom correlation kinds are reserved for usage in external tools.

See also:

CUpti_ActivityExternalCorrelation

Values
CUPTI_EXTERNAL_CORRELATION_KIND_INVALID = 0
CUPTI_EXTERNAL_CORRELATION_KIND_UNKNOWN = 1
CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC = 2
CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0 = 3
CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM1 = 4
CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM2 = 5
CUPTI_EXTERNAL_CORRELATION_KIND_SIZE
CUPTI_EXTERNAL_CORRELATION_KIND_FORCE_INT = 0x7fffffff
enum CUpti_LinkFlag

Describes link properties, to be used with CUpti_ActivityNvLink.

Values
CUPTI_LINK_FLAG_INVALID = 0
CUPTI_LINK_FLAG_PEER_ACCESS = (1<<1)
Peer-to-peer access is supported by this link.
CUPTI_LINK_FLAG_SYSMEM_ACCESS = (1<<2)
System memory access is supported by this link.
CUPTI_LINK_FLAG_PEER_ATOMICS = (1<<3)
Peer atomic access is supported by this link.
CUPTI_LINK_FLAG_SYSMEM_ATOMICS = (1<<4)
System memory atomic access is supported by this link.
CUPTI_LINK_FLAG_FORCE_INT = 0x7fffffff
enum CUpti_OpenAccConstructKind

Values
CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN = 0
CUPTI_OPENACC_CONSTRUCT_KIND_PARALLEL = 1
CUPTI_OPENACC_CONSTRUCT_KIND_KERNELS = 2
CUPTI_OPENACC_CONSTRUCT_KIND_LOOP = 3
CUPTI_OPENACC_CONSTRUCT_KIND_DATA = 4
CUPTI_OPENACC_CONSTRUCT_KIND_ENTER_DATA = 5
CUPTI_OPENACC_CONSTRUCT_KIND_EXIT_DATA = 6
CUPTI_OPENACC_CONSTRUCT_KIND_HOST_DATA = 7
CUPTI_OPENACC_CONSTRUCT_KIND_ATOMIC = 8
CUPTI_OPENACC_CONSTRUCT_KIND_DECLARE = 9
CUPTI_OPENACC_CONSTRUCT_KIND_INIT = 10
CUPTI_OPENACC_CONSTRUCT_KIND_SHUTDOWN = 11
CUPTI_OPENACC_CONSTRUCT_KIND_SET = 12
CUPTI_OPENACC_CONSTRUCT_KIND_UPDATE = 13
CUPTI_OPENACC_CONSTRUCT_KIND_ROUTINE = 14
CUPTI_OPENACC_CONSTRUCT_KIND_WAIT = 15
CUPTI_OPENACC_CONSTRUCT_KIND_RUNTIME_API = 16
CUPTI_OPENACC_CONSTRUCT_KIND_FORCE_INT = 0x7fffffff
enum CUpti_OpenAccEventKind

See also:

CUpti_ActivityKindOpenAcc

Values
CUPTI_OPENACC_EVENT_KIND_INVALID = 0
CUPTI_OPENACC_EVENT_KIND_DEVICE_INIT = 1
CUPTI_OPENACC_EVENT_KIND_DEVICE_SHUTDOWN = 2
CUPTI_OPENACC_EVENT_KIND_RUNTIME_SHUTDOWN = 3
CUPTI_OPENACC_EVENT_KIND_ENQUEUE_LAUNCH = 4
CUPTI_OPENACC_EVENT_KIND_ENQUEUE_UPLOAD = 5
CUPTI_OPENACC_EVENT_KIND_ENQUEUE_DOWNLOAD = 6
CUPTI_OPENACC_EVENT_KIND_WAIT = 7
CUPTI_OPENACC_EVENT_KIND_IMPLICIT_WAIT = 8
CUPTI_OPENACC_EVENT_KIND_COMPUTE_CONSTRUCT = 9
CUPTI_OPENACC_EVENT_KIND_UPDATE = 10
CUPTI_OPENACC_EVENT_KIND_ENTER_DATA = 11
CUPTI_OPENACC_EVENT_KIND_EXIT_DATA = 12
CUPTI_OPENACC_EVENT_KIND_CREATE = 13
CUPTI_OPENACC_EVENT_KIND_DELETE = 14
CUPTI_OPENACC_EVENT_KIND_ALLOC = 15
CUPTI_OPENACC_EVENT_KIND_FREE = 16
CUPTI_OPENACC_EVENT_KIND_FORCE_INT = 0x7fffffff
enum CUpti_PcieDeviceType

Field to differentiate whether a PCIe activity record is for a GPU or a PCI bridge.

Values
CUPTI_PCIE_DEVICE_TYPE_GPU = 0
PCIE GPU record
CUPTI_PCIE_DEVICE_TYPE_BRIDGE = 1
PCIE Bridge record
CUPTI_PCIE_DEVICE_TYPE_FORCE_INT = 0x7fffffff

Functions

CUptiResult cuptiActivityConfigurePCSampling ( CUcontext ctx, CUpti_ActivityPCSamplingConfig* config )
Set PC sampling configuration.
Parameters
ctx
The context
config
A pointer to CUpti_ActivityPCSamplingConfig structure containing PC sampling configuration.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_OPERATION

if this API is called while a valid event collection method is set.

  • CUPTI_ERROR_INVALID_PARAMETER

    if config is NULL or any parameter in the config structures is not a valid value

  • CUPTI_ERROR_NOT_SUPPORTED

Indicates that the system/device does not support PC sampling

Description
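
For illustration, a minimal sketch of configuring PC sampling for a context. The field names follow the CUpti_ActivityPCSamplingConfig structure; the samplingPeriod2 semantics and the helper name are assumptions, and error checking is omitted:

    #include <cupti.h>

    // Sketch: lower the PC sampling frequency for a context, then enable
    // the PC sampling activity kind.
    static void setupPcSampling(CUcontext ctx)
    {
        CUpti_ActivityPCSamplingConfig config;
        config.size = sizeof(config);
        config.samplingPeriod = CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_LOW;
        config.samplingPeriod2 = 0;  // alternate cycle-based period, unused here (assumption)
        cuptiActivityConfigurePCSampling(ctx, &config);
        cuptiActivityEnable(CUPTI_ACTIVITY_KIND_PC_SAMPLING);
    }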

CUptiResult cuptiActivityConfigureUnifiedMemoryCounter ( CUpti_ActivityUnifiedMemoryCounterConfig* config, uint32_t count )
Set Unified Memory Counter configuration.
Parameters
config
A pointer to CUpti_ActivityUnifiedMemoryCounterConfig structures containing Unified Memory counter configuration.
count
Number of Unified Memory counter configuration structures
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_PARAMETER

    if config is NULL or any parameter in the config structures is not a valid value

  • CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED

    One potential reason is that platform (OS/arch) does not support the unified memory counters

  • CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_DEVICE

    Indicates that the device does not support the unified memory counters

  • CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_NON_P2P_DEVICES

    Indicates that multi-GPU configuration without P2P support between any pair of devices does not support the unified memory counters

Description
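
For illustration, a minimal sketch of enabling one unified memory counter, following the pattern used in CUPTI's unified_memory sample; the helper name and the chosen field values are illustrative:

    #include <string.h>
    #include <cupti.h>

    // Sketch: count host-to-device unified memory transfers on device 0.
    static void setupUmCounters(void)
    {
        CUpti_ActivityUnifiedMemoryCounterConfig config;
        memset(&config, 0, sizeof(config));
        config.scope = CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_PROCESS_SINGLE_DEVICE;
        config.kind = CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD;
        config.deviceId = 0;
        config.enable = 1;
        cuptiActivityConfigureUnifiedMemoryCounter(&config, 1);
        cuptiActivityEnable(CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER);
    }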

CUptiResult cuptiActivityDisable ( CUpti_ActivityKind kind )
Disable collection of a specific kind of activity record.
Parameters
kind
The kind of activity record to stop collecting
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_KIND

    if the activity kind is not supported

Description

Disable collection of a specific kind of activity record. Multiple kinds can be disabled by calling this function multiple times. By default all activity kinds are disabled for collection.

CUptiResult cuptiActivityDisableContext ( CUcontext context, CUpti_ActivityKind kind )
Disable collection of a specific kind of activity record for a context.
Parameters
context
The context for which activity is to be disabled
kind
The kind of activity record to stop collecting
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_KIND

    if the activity kind is not supported

Description

Disable collection of a specific kind of activity record for a context. The setting made by this API supersedes the global setting for activity records. Multiple kinds can be disabled by calling this function multiple times.

CUptiResult cuptiActivityEnable ( CUpti_ActivityKind kind )
Enable collection of a specific kind of activity record.
Parameters
kind
The kind of activity record to collect
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_NOT_COMPATIBLE

    if the activity kind cannot be enabled

  • CUPTI_ERROR_INVALID_KIND

    if the activity kind is not supported

Description

Enable collection of a specific kind of activity record. Multiple kinds can be enabled by calling this function multiple times. By default all activity kinds are disabled for collection.
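
For illustration, a minimal sketch of enabling several activity kinds at startup; the helper name and the chosen kinds are illustrative:

    #include <stdio.h>
    #include <cupti.h>

    // Sketch: each kind is enabled individually; a failure for one kind
    // does not affect the others.
    static void enableTracing(void)
    {
        CUpti_ActivityKind kinds[] = { CUPTI_ACTIVITY_KIND_KERNEL,
                                       CUPTI_ACTIVITY_KIND_MEMCPY,
                                       CUPTI_ACTIVITY_KIND_MEMSET };
        for (size_t i = 0; i < sizeof(kinds) / sizeof(kinds[0]); ++i) {
            if (cuptiActivityEnable(kinds[i]) != CUPTI_SUCCESS)
                fprintf(stderr, "failed to enable activity kind %d\n", (int)kinds[i]);
        }
    }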

CUptiResult cuptiActivityEnableContext ( CUcontext context, CUpti_ActivityKind kind )
Enable collection of a specific kind of activity record for a context.
Parameters
context
The context for which activity is to be enabled
kind
The kind of activity record to collect
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_NOT_COMPATIBLE

    if the activity kind cannot be enabled

  • CUPTI_ERROR_INVALID_KIND

    if the activity kind is not supported

Description

Enable collection of a specific kind of activity record for a context. The setting made by this API supersedes the global setting for activity records enabled by cuptiActivityEnable. Multiple kinds can be enabled by calling this function multiple times.

CUptiResult cuptiActivityEnableLatencyTimestamps ( uint8_t enable )
Controls the collection of queued and submitted timestamps for kernels.
Parameters
enable
is a boolean, denoting whether these timestamps should be collected
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

Description

This API is used to control the collection of queued and submitted timestamps for kernels whose records are provided through the struct CUpti_ActivityKernel4. The default value is 0, i.e. these timestamps are not collected. This API must be called before CUDA is initialized, and the setting should not be changed during the profiling session.
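
For illustration, a minimal sketch of the required ordering, assuming the tool controls main; the rest of the program is elided:

    #include <cupti.h>

    int main(void)
    {
        // Must precede CUDA initialization, per the description above.
        cuptiActivityEnableLatencyTimestamps(1);
        cuptiActivityEnable(CUPTI_ACTIVITY_KIND_KERNEL);
        // ... run CUDA work; CUpti_ActivityKernel4 records now carry
        // queued and submitted timestamps ...
        return 0;
    }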

CUptiResult cuptiActivityFlush ( CUcontext context, uint32_t streamId, uint32_t flag )
Wait for all activity records to be delivered via the completion callback.
Parameters
context
A valid CUcontext or NULL.
streamId
The stream ID.
flag
The flag can be set to indicate a forced flush. See CUpti_ActivityFlag
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_OPERATION

    if not preceded by a successful call to cuptiActivityRegisterCallbacks

  • CUPTI_ERROR_UNKNOWN

    an internal error occurred

Description

This function does not return until all activity records associated with the specified context/stream are returned to the CUPTI client using the callback registered in cuptiActivityRegisterCallbacks. To ensure that all activity records are complete, the requested stream(s), if any, are synchronized.

If context is NULL, the global activity records (i.e. those not associated with a particular stream) are flushed (in this case no streams are synchronized). If context is a valid CUcontext and streamId is 0, the buffers of all streams of this context are flushed. Otherwise, the buffer of the specified stream in this context is flushed.

Before calling this function, the buffer handling callback API must be activated by calling cuptiActivityRegisterCallbacks.

**DEPRECATED** This method is deprecated; the context and streamId parameters will be ignored. Use cuptiActivityFlushAll to flush all data.

CUptiResult cuptiActivityFlushAll ( uint32_t flag )
Wait for all activity records to be delivered via the completion callback.
Parameters
flag
The flag can be set to indicate a forced flush. See CUpti_ActivityFlag
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_OPERATION

    if not preceded by a successful call to cuptiActivityRegisterCallbacks

  • CUPTI_ERROR_UNKNOWN

    an internal error occurred

Description

This function does not return until all activity records associated with all contexts/streams (and the global buffers not associated with any stream) are returned to the CUPTI client using the callback registered in cuptiActivityRegisterCallbacks. To ensure that all activity records are complete, the requested stream(s), if any, are synchronized.

Before calling this function, the buffer handling callback API must be activated by calling cuptiActivityRegisterCallbacks.
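
For illustration, a minimal sketch of a flush at program shutdown; the handler name is illustrative:

    #include <cupti.h>

    // Sketch: CUPTI_ACTIVITY_FLAG_FLUSH_FORCED also returns buffers that
    // are only partially filled; pass 0 for a normal flush.
    static void shutdownTracing(void)
    {
        cuptiActivityFlushAll(CUPTI_ACTIVITY_FLAG_FLUSH_FORCED);
    }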

CUptiResult cuptiActivityGetAttribute ( CUpti_ActivityAttribute attr, size_t* valueSize, void* value )
Read an activity API attribute.
Parameters
attr
The attribute to read
valueSize
Size of buffer pointed by the value, and returns the number of bytes written to value
value
Returns the value of the attribute
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_PARAMETER

    if valueSize or value is NULL, or if attr is not an activity attribute

  • CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT

    Indicates that the value buffer is too small to hold the attribute value.

Description

Read an activity API attribute and return it in *value.

CUptiResult cuptiActivityGetNextRecord ( uint8_t* buffer, size_t validBufferSizeBytes, CUpti_Activity** record )
Iterate over the activity records in a buffer.
Parameters
buffer
The buffer containing activity records
validBufferSizeBytes
The number of valid bytes in the buffer.
record
Inputs the previous record returned by cuptiActivityGetNextRecord and returns the next activity record from the buffer. If input value is NULL, returns the first activity record in the buffer. Records of kind CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL may contain invalid (0) timestamps, indicating that no timing information could be collected for lack of device memory.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_MAX_LIMIT_REACHED

    if no more records in the buffer

  • CUPTI_ERROR_INVALID_PARAMETER

    if buffer is NULL.

Description

This is a helper function to iterate over the activity records in a buffer. A buffer of activity records is typically obtained by receiving a CUpti_BuffersCallbackCompleteFunc callback.

An example of typical usage:

    CUpti_Activity *record = NULL;
    CUptiResult status = CUPTI_SUCCESS;
    do {
        status = cuptiActivityGetNextRecord(buffer, validSize, &record);
        if (status == CUPTI_SUCCESS) {
            // Use record here...
        }
        else if (status == CUPTI_ERROR_MAX_LIMIT_REACHED) {
            break;
        }
        else {
            goto Error;
        }
    } while (1);

CUptiResult cuptiActivityGetNumDroppedRecords ( CUcontext context, uint32_t streamId, size_t* dropped )
Get the number of activity records that were dropped because of insufficient buffer space.
Parameters
context
The context, or NULL to get dropped count from global queue
streamId
The stream ID
dropped
The number of records that were dropped since the last call to this function.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_PARAMETER

    if dropped is NULL

Description

Get the number of records that were dropped because of insufficient buffer space. The dropped count includes records that could not be recorded because CUPTI did not have activity buffer space available for the record (because the CUpti_BuffersCallbackRequestFunc callback did not return an empty buffer of sufficient size) and also CDP records that could not be recorded because the device-side buffer was full (its size is controlled by the CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE_CDP attribute). The dropped count maintained for the queue is reset to zero when this function is called.
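
For illustration, a minimal sketch of checking the dropped count after draining a completed buffer; the helper name is illustrative:

    #include <stdio.h>
    #include <cupti.h>

    // Sketch: typically called from the buffer-completed callback.
    static void reportDropped(CUcontext ctx, uint32_t streamId)
    {
        size_t dropped = 0;
        if (cuptiActivityGetNumDroppedRecords(ctx, streamId, &dropped) == CUPTI_SUCCESS &&
            dropped != 0)
            fprintf(stderr, "dropped %zu activity records\n", dropped);
    }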

CUptiResult cuptiActivityPopExternalCorrelationId ( CUpti_ExternalCorrelationKind kind, uint64_t* lastId )
Pop an external correlation id for the calling thread.
Parameters
kind
The kind of external API activities should be correlated with.
lastId
If the function returns successfully, contains the last external correlation ID for this kind; can be NULL.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    The external API kind is invalid.

  • CUPTI_ERROR_QUEUE_EMPTY

    No external id is currently associated with kind.

Description

This function notifies CUPTI that the calling thread is leaving an external API region.

CUptiResult cuptiActivityPushExternalCorrelationId ( CUpti_ExternalCorrelationKind kind, uint64_t id )
Push an external correlation id for the calling thread.
Parameters
kind
The kind of external API activities should be correlated with.
id
External correlation id.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    The external API kind is invalid

Description

This function notifies CUPTI that the calling thread is entering an external API region. When a CUPTI activity API record is created while within an external API region and CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION is enabled, the activity API record will be preceded by a CUpti_ActivityExternalCorrelation record for each CUpti_ExternalCorrelationKind.
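
For illustration, a minimal sketch of bracketing an external API region so that CUDA activity triggered inside it carries the tool's own correlation ID; traceExternalOp and opId are hypothetical names, and CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION is assumed to be enabled:

    #include <cupti.h>

    static void traceExternalOp(uint64_t opId)
    {
        cuptiActivityPushExternalCorrelationId(
            CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, opId);
        // ... perform CUDA work on behalf of the external API ...
        uint64_t lastId = 0;
        cuptiActivityPopExternalCorrelationId(
            CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, &lastId);
    }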

CUptiResult cuptiActivityRegisterCallbacks ( CUpti_BuffersCallbackRequestFunc funcBufferRequested, CUpti_BuffersCallbackCompleteFunc funcBufferCompleted )
Registers callback functions with CUPTI for activity buffer handling.
Parameters
funcBufferRequested
callback which is invoked when an empty buffer is requested by CUPTI
funcBufferCompleted
callback which is invoked when a buffer containing activity records is available from CUPTI
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if either funcBufferRequested or funcBufferCompleted is NULL

Description

This function registers two callback functions to be used in asynchronous buffer handling. If registered, activity record buffers are handled using asynchronous requested/completed callbacks from CUPTI.

Registering these callbacks prevents the client from using CUPTI's blocking enqueue/dequeue functions.
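
For illustration, a minimal sketch of the two callbacks, modeled on the activity_trace_async sample; buffer alignment and record iteration are simplified:

    #include <stdlib.h>
    #include <cupti.h>

    static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size,
                                         size_t *maxNumRecords)
    {
        *size = 64 * 1024;
        *buffer = (uint8_t *)malloc(*size);
        *maxNumRecords = 0;  // 0: fill the buffer with as many records as fit
    }

    static void CUPTIAPI bufferCompleted(CUcontext ctx, uint32_t streamId,
                                         uint8_t *buffer, size_t size,
                                         size_t validSize)
    {
        // Iterate records with cuptiActivityGetNextRecord, then release.
        free(buffer);
    }

    // Once at startup:
    //   cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted);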

CUptiResult cuptiActivitySetAttribute ( CUpti_ActivityAttribute attr, size_t* valueSize, void* value )
Write an activity API attribute.
Parameters
attr
The attribute to write
valueSize
The size, in bytes, of the value
value
The attribute value to write
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_PARAMETER

    if valueSize or value is NULL, or if attr is not an activity attribute

  • CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT

    Indicates that the value buffer is too small to hold the attribute value.

Description

Write an activity API attribute.
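
For illustration, a minimal sketch of raising the device buffer size before any context is created; the helper name and the 8 MB value are illustrative:

    #include <cupti.h>

    static void growDeviceBuffer(void)
    {
        size_t attrValue = 8 * 1024 * 1024;
        size_t attrSize = sizeof(attrValue);
        cuptiActivitySetAttribute(CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE,
                                  &attrSize, &attrValue);
    }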

CUptiResult cuptiComputeCapabilitySupported ( int  major, int  minor, int* support )
Check support for a compute capability.
Parameters
major
The major revision number of the compute capability
minor
The minor revision number of the compute capability
support
Pointer to an integer to return the support status
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if support is NULL

Description

This function is used to check the support for a device based on its compute capability. It sets the support when the compute capability is supported by the current version of CUPTI, and clears it otherwise. This version of CUPTI might not support all GPUs sharing the same compute capability. It is suggested to use the API cuptiDeviceSupported, which provides correct information.

See also:

cuptiDeviceSupported

CUptiResult cuptiDeviceSupported ( CUdevice dev, int* support )
Check support for a compute device.
Parameters
dev
The device handle returned by CUDA Driver API cuDeviceGet
support
Pointer to an integer to return the support status
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if support is NULL

  • CUPTI_ERROR_INVALID_DEVICE

    if dev is not a valid device

Description

This function is used to check the support for a compute device. It sets the support when the device is supported by the current version of CUPTI.

See also:

CUpti_DeviceSupport

See also:

cuptiComputeCapabilitySupported

CUptiResult cuptiFinalize ( void )
Clean up CUPTI.
Description

Explicitly destroys and cleans up all resources associated with CUPTI in the current process. Any subsequent CUPTI API call will reinitialize CUPTI. The CUPTI client needs to make sure that the required CUDA synchronization and CUPTI activity buffer flush are done before calling cuptiFinalize.

CUptiResult cuptiGetAutoBoostState ( CUcontext context, CUpti_ActivityAutoBoostState* state )
Get auto boost state.
Parameters
context
A valid CUcontext.
state
A pointer to CUpti_ActivityAutoBoostState structure which contains the current state and the id of the process that has requested the current state
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if CUcontext or state is NULL

  • CUPTI_ERROR_NOT_SUPPORTED

    Indicates that the device does not support auto boost

  • CUPTI_ERROR_UNKNOWN

    an internal error occurred

Description

The profiling results can be inconsistent when auto boost is enabled. CUPTI tries to disable auto boost while profiling. It can fail to disable it in cases where the user does not have the required permissions or the CUDA_AUTO_BOOST environment variable is set. This function can be used to query whether auto boost is enabled.
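
For illustration, a minimal sketch of warning when auto boost remains enabled; the helper name is illustrative, and the enabled and pid fields are assumed from the CUpti_ActivityAutoBoostState structure:

    #include <stdio.h>
    #include <cupti.h>

    static void warnIfAutoBoost(CUcontext ctx)
    {
        CUpti_ActivityAutoBoostState state;
        if (cuptiGetAutoBoostState(ctx, &state) == CUPTI_SUCCESS && state.enabled)
            fprintf(stderr, "auto boost enabled by pid %u; timings may vary\n",
                    state.pid);
    }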

CUptiResult cuptiGetContextId ( CUcontext context, uint32_t* contextId )
Get the ID of a context.
Parameters
context
The context
contextId
Returns a process-unique ID for the context
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_CONTEXT

    The context is NULL or not valid.

  • CUPTI_ERROR_INVALID_PARAMETER

    if contextId is NULL

Description

Get the ID of a context.

CUptiResult cuptiGetDeviceId ( CUcontext context, uint32_t* deviceId )
Get the ID of a device.
Parameters
context
The context, or NULL to indicate the current context.
deviceId
Returns the ID of the device that is current for the calling thread.
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_DEVICE

    if unable to get device ID

  • CUPTI_ERROR_INVALID_PARAMETER

    if deviceId is NULL

Description

If context is NULL, returns the ID of the device that contains the currently active context. If context is non-NULL, returns the ID of the device which contains that context. Operates in a similar manner to cudaGetDevice() or cuCtxGetDevice() but may be called from within callback functions.

CUptiResult cuptiGetLastError ( void )
Returns the last error from a CUPTI call or callback.
Description

Returns the last error that has been produced by any of the CUPTI API calls or callbacks in the same host thread, and resets it to CUPTI_SUCCESS.

CUptiResult cuptiGetStreamId ( CUcontext context, CUstream stream, uint32_t* streamId )
Get the ID of a stream.
Parameters
context
If non-NULL then the stream is checked to ensure that it belongs to this context. Typically this parameter should be NULL.
stream
The stream
streamId
Returns a context-unique ID for the stream
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_STREAM

    if unable to get stream ID, or if context is non-NULL and stream does not belong to the context

  • CUPTI_ERROR_INVALID_PARAMETER

    if streamId is NULL

Description

Get the ID of a stream. The stream ID is unique within a context (i.e. all streams within a context will have unique stream IDs).

**DEPRECATED** This method is deprecated as of CUDA 8.0. Use method cuptiGetStreamIdEx instead.

CUptiResult cuptiGetStreamIdEx ( CUcontext context, CUstream stream, uint8_t perThreadStream, uint32_t* streamId )
Get the ID of a stream.
Parameters
context
If non-NULL then the stream is checked to ensure that it belongs to this context. Typically this parameter should be NULL.
stream
The stream
perThreadStream
Flag to indicate if program is compiled for per-thread streams
streamId
Returns a context-unique ID for the stream
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_INITIALIZED

  • CUPTI_ERROR_INVALID_STREAM

    if unable to get stream ID, or if context is non-NULL and stream does not belong to the context

  • CUPTI_ERROR_INVALID_PARAMETER

    if streamId is NULL

Description

Get the ID of a stream. The stream ID is unique within a context (i.e. all streams within a context will have unique stream IDs).

CUptiResult cuptiGetThreadIdType ( CUpti_ActivityThreadIdType* type )
Get the thread-id type.
Parameters
type
Returns the thread-id type used in CUPTI
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if type is NULL

Description

Returns the thread-id type used in CUPTI

CUptiResult cuptiGetTimestamp ( uint64_t* timestamp )
Get the CUPTI timestamp.
Parameters
timestamp
Returns the CUPTI timestamp
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_INVALID_PARAMETER

    if timestamp is NULL

Description

Returns a timestamp normalized to correspond with the start and end timestamps reported in the CUPTI activity records. The timestamp is reported in nanoseconds.
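
For illustration, a minimal sketch of capturing host-side timestamps in the same timebase as the activity records; the helper name is illustrative:

    #include <cupti.h>

    // Sketch: returned values are in nanoseconds and are directly
    // comparable to the start/end fields of activity records.
    static uint64_t nowNs(void)
    {
        uint64_t ts = 0;
        cuptiGetTimestamp(&ts);
        return ts;
    }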

CUptiResult cuptiSetThreadIdType ( CUpti_ActivityThreadIdType type )
Set the thread-id type.
Parameters
type
The thread-id type to set
Returns

  • CUPTI_SUCCESS

  • CUPTI_ERROR_NOT_SUPPORTED

    if type is not supported on the platform

Description

CUPTI uses the method corresponding to the set type to generate the thread-id. See enum CUpti_ActivityThreadIdType for the list of methods. All activity records that have a thread-id field contain the same type of value. The thread-id type must not be changed during the profiling session, to avoid thread-id value mismatches across activity records.

2.4. CUPTI Callback API

Functions, types, and enums that implement the CUPTI Callback API.

Classes

struct CUpti_CallbackData
Data passed into a runtime or driver API callback function.
struct CUpti_GraphData
CUDA graphs data passed into a resource callback function.
struct CUpti_ModuleResourceData
Module data passed into a resource callback function.
struct CUpti_NvtxData
Data passed into a NVTX callback function.
struct CUpti_ResourceData
Data passed into a resource callback function.
struct CUpti_SynchronizeData
Data passed into a synchronize callback function.

Typedefs

typedef void  ( *CUpti_CallbackFunc )( void*  userdata,  CUpti_CallbackDomain domain,  CUpti_CallbackId cbid, const void*  cbdata )
Function type for a callback.
typedef uint32_t  CUpti_CallbackId
An ID for a driver API, runtime API, resource or synchronization callback.
typedef CUpti_CallbackDomain* CUpti_DomainTable
Pointer to an array of callback domains.
typedef CUpti_Subscriber_st *  CUpti_SubscriberHandle
A callback subscriber.

Enumerations

enum CUpti_ApiCallbackSite
Specifies the point in an API call that a callback is issued.
enum CUpti_CallbackDomain
Callback domains.
enum CUpti_CallbackIdResource
Callback IDs for resource domain.
enum CUpti_CallbackIdSync
Callback IDs for synchronization domain.

Functions

CUptiResult cuptiEnableAllDomains ( uint32_t enable, CUpti_SubscriberHandle subscriber )
Enable or disable all callbacks in all domains.
CUptiResult cuptiEnableCallback ( uint32_t enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain, CUpti_CallbackId cbid )
Enable or disable callbacks for a specific domain and callback ID.
CUptiResult cuptiEnableDomain ( uint32_t enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain )
Enable or disable all callbacks for a specific domain.
CUptiResult cuptiGetCallbackName ( CUpti_CallbackDomain domain, uint32_t cbid, const char** name )
Get the name of a callback for a specific domain and callback ID.
CUptiResult cuptiGetCallbackState ( uint32_t* enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain, CUpti_CallbackId cbid )
Get the current enabled/disabled state of a callback for a specific domain and function ID.
CUptiResult cuptiSubscribe ( CUpti_SubscriberHandle* subscriber, CUpti_CallbackFunc callback, void* userdata )
Initialize a callback subscriber with a callback function and user data.
CUptiResult cuptiSupportedDomains ( size_t* domainCount, CUpti_DomainTable* domainTable )
Get the available callback domains.
CUptiResult cuptiUnsubscribe ( CUpti_SubscriberHandle subscriber )
Unregister a callback subscriber.

Typedefs

void ( *CUpti_CallbackFunc )( void*  userdata,  CUpti_CallbackDomain domain,  CUpti_CallbackId cbid, const void*  cbdata )

Function type for a callback. The type of the data passed to the callback in cbdata depends on the domain. If domain is CUPTI_CB_DOMAIN_DRIVER_API or CUPTI_CB_DOMAIN_RUNTIME_API the type of cbdata will be CUpti_CallbackData. If domain is CUPTI_CB_DOMAIN_RESOURCE the type of cbdata will be CUpti_ResourceData. If domain is CUPTI_CB_DOMAIN_SYNCHRONIZE the type of cbdata will be CUpti_SynchronizeData. If domain is CUPTI_CB_DOMAIN_NVTX the type of cbdata will be CUpti_NvtxData.

Parameters
userdata
User data supplied at subscription of the callback
domain
The domain of the callback
cbid
The ID of the callback
cbdata
Data passed to the callback.
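
For illustration, a minimal sketch of a callback that casts cbdata according to the domain, as described above; only the driver/runtime case is shown, and the callback name is illustrative:

    #include <stdio.h>
    #include <cupti.h>

    static void CUPTIAPI myCallback(void *userdata, CUpti_CallbackDomain domain,
                                    CUpti_CallbackId cbid, const void *cbdata)
    {
        if (domain == CUPTI_CB_DOMAIN_DRIVER_API ||
            domain == CUPTI_CB_DOMAIN_RUNTIME_API) {
            const CUpti_CallbackData *data = (const CUpti_CallbackData *)cbdata;
            if (data->callbackSite == CUPTI_API_ENTER)
                printf("entering %s\n", data->functionName);
        }
    }
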
typedef uint32_t CUpti_CallbackId

An ID for a driver API, runtime API, resource or synchronization callback. Within a driver API callback this should be interpreted as a CUpti_driver_api_trace_cbid value (these values are defined in cupti_driver_cbid.h). Within a runtime API callback this should be interpreted as a CUpti_runtime_api_trace_cbid value (these values are defined in cupti_runtime_cbid.h). Within a resource API callback this should be interpreted as a CUpti_CallbackIdResource value. Within a synchronize API callback this should be interpreted as a CUpti_CallbackIdSync value.

typedef CUpti_CallbackDomain* CUpti_DomainTable

Pointer to an array of callback domains.

typedef CUpti_Subscriber_st * CUpti_SubscriberHandle

A callback subscriber.

Enumerations

enum CUpti_ApiCallbackSite

Specifies the point in an API call that a callback is issued. This value is communicated to the callback function via CUpti_CallbackData::callbackSite.

Values
CUPTI_API_ENTER = 0
The callback is at the entry of the API call.
CUPTI_API_EXIT = 1
The callback is at the exit of the API call.
CUPTI_API_CBSITE_FORCE_INT = 0x7fffffff
enum CUpti_CallbackDomain

Callback domains. Each domain represents callback points for a group of related API functions or CUDA driver activity.

Values
CUPTI_CB_DOMAIN_INVALID = 0
Invalid domain.
CUPTI_CB_DOMAIN_DRIVER_API = 1
Domain containing callback points for all driver API functions.
CUPTI_CB_DOMAIN_RUNTIME_API = 2
Domain containing callback points for all runtime API functions.
CUPTI_CB_DOMAIN_RESOURCE = 3
Domain containing callback points for CUDA resource tracking.
CUPTI_CB_DOMAIN_SYNCHRONIZE = 4
Domain containing callback points for CUDA synchronization.
CUPTI_CB_DOMAIN_NVTX = 5
Domain containing callback points for NVTX API functions.
CUPTI_CB_DOMAIN_SIZE = 6
CUPTI_CB_DOMAIN_FORCE_INT = 0x7fffffff
enum CUpti_CallbackIdResource

Callback IDs for resource domain, CUPTI_CB_DOMAIN_RESOURCE. This value is communicated to the callback function via the cbid parameter.

Values
CUPTI_CBID_RESOURCE_INVALID = 0
Invalid resource callback ID.
CUPTI_CBID_RESOURCE_CONTEXT_CREATED = 1
A new context has been created.
CUPTI_CBID_RESOURCE_CONTEXT_DESTROY_STARTING = 2
A context is about to be destroyed.
CUPTI_CBID_RESOURCE_STREAM_CREATED = 3
A new stream has been created.
CUPTI_CBID_RESOURCE_STREAM_DESTROY_STARTING = 4
A stream is about to be destroyed.
CUPTI_CBID_RESOURCE_CU_INIT_FINISHED = 5
The driver has finished initializing.
CUPTI_CBID_RESOURCE_MODULE_LOADED = 6
A module has been loaded.
CUPTI_CBID_RESOURCE_MODULE_UNLOAD_STARTING = 7
A module is about to be unloaded.
CUPTI_CBID_RESOURCE_MODULE_PROFILED = 8
The current module which is being profiled.
CUPTI_CBID_RESOURCE_GRAPH_CREATED = 9
CUDA graph has been created.
CUPTI_CBID_RESOURCE_GRAPH_DESTROY_STARTING = 10
CUDA graph is about to be destroyed.
CUPTI_CBID_RESOURCE_GRAPH_CLONED = 11
CUDA graph is cloned.
CUPTI_CBID_RESOURCE_GRAPHNODE_CREATE_STARTING = 12
CUDA graph node is about to be created
CUPTI_CBID_RESOURCE_GRAPHNODE_CREATED = 13
CUDA graph node is created.
CUPTI_CBID_RESOURCE_GRAPHNODE_DESTROY_STARTING = 14
CUDA graph node is about to be destroyed.
CUPTI_CBID_RESOURCE_GRAPHNODE_DEPENDENCY_CREATED = 15
Dependency on a CUDA graph node is created.
CUPTI_CBID_RESOURCE_GRAPHNODE_DEPENDENCY_DESTROY_STARTING = 16
Dependency on a CUDA graph node is destroyed.
CUPTI_CBID_RESOURCE_GRAPHEXEC_CREATE_STARTING = 17
An executable CUDA graph is about to be created.
CUPTI_CBID_RESOURCE_GRAPHEXEC_CREATED = 18
An executable CUDA graph is created.
CUPTI_CBID_RESOURCE_GRAPHEXEC_DESTROY_STARTING = 19
An executable CUDA graph is about to be destroyed.
CUPTI_CBID_RESOURCE_SIZE
CUPTI_CBID_RESOURCE_FORCE_INT = 0x7fffffff
enum CUpti_CallbackIdSync

Callback IDs for synchronization domain, CUPTI_CB_DOMAIN_SYNCHRONIZE. This value is communicated to the callback function via the cbid parameter.

Values
CUPTI_CBID_SYNCHRONIZE_INVALID = 0
Invalid synchronize callback ID.
CUPTI_CBID_SYNCHRONIZE_STREAM_SYNCHRONIZED = 1
Stream synchronization has completed for the stream.
CUPTI_CBID_SYNCHRONIZE_CONTEXT_SYNCHRONIZED = 2
Context synchronization has completed for the context.
CUPTI_CBID_SYNCHRONIZE_SIZE
CUPTI_CBID_SYNCHRONIZE_FORCE_INT = 0x7fffffff

Functions

CUptiResult cuptiEnableAllDomains ( uint32_t enable, CUpti_SubscriberHandle subscriber )
Enable or disable all callbacks in all domains.
Parameters
enable
New enable state for all callbacks in all domains. Zero disables all callbacks, non-zero enables all callbacks.
subscriber
Handle to the callback subscription
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if subscriber is invalid

Description

Enable or disable all callbacks in all domains.

Note:

Thread-safety: a subscriber must serialize access to cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub, d, *) and cuptiEnableAllDomains(sub) are called concurrently, the results are undefined.

CUptiResult cuptiEnableCallback ( uint32_t enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain, CUpti_CallbackId cbid )
Enable or disable callbacks for a specific domain and callback ID.
Parameters
enable
New enable state for the callback. Zero disables the callback, non-zero enables the callback.
subscriber
Handle to the callback subscription
domain
The domain of the callback
cbid
The ID of the callback
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if subscriber, domain or cbid is invalid.

Description

Enable or disable callbacks for a subscriber for a specific domain and callback ID.

Note:

Thread-safety: a subscriber must serialize access to cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub, d, c) and cuptiEnableCallback(sub, d, c) are called concurrently, the results are undefined.

CUptiResult cuptiEnableDomain ( uint32_t enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain )
Enable or disable all callbacks for a specific domain.
Parameters
enable
New enable state for all callbacks in the domain. Zero disables all callbacks, non-zero enables all callbacks.
subscriber
Handle to the callback subscription
domain
The domain of the callback
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if subscriber or domain is invalid

Description

Enable or disable all callbacks for a specific domain.

Note:

Thread-safety: a subscriber must serialize access to cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub, d, *) and cuptiEnableDomain(sub, d) are called concurrently, the results are undefined.

CUptiResult cuptiGetCallbackName ( CUpti_CallbackDomain domain, uint32_t cbid, const char** name )
Get the name of a callback for a specific domain and callback ID.
Parameters
domain
The domain of the callback
cbid
The ID of the callback
name
Returns pointer to the name string on success, NULL otherwise
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_INVALID_PARAMETER

    if name is NULL, or if domain or cbid is invalid.

Description

Returns a pointer to the name string in *name.

Note:

Names are available only for the DRIVER and RUNTIME domains.

CUptiResult cuptiGetCallbackState ( uint32_t* enable, CUpti_SubscriberHandle subscriber, CUpti_CallbackDomain domain, CUpti_CallbackId cbid )
Get the current enabled/disabled state of a callback for a specific domain and function ID.
Parameters
enable
Returns non-zero if callback enabled, zero if not enabled
subscriber
Handle to the initialized subscriber
domain
The domain of the callback
cbid
The ID of the callback
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if enable is NULL, or if subscriber, domain or cbid is invalid.

Description

Returns non-zero in *enable if the callback for a domain and callback ID is enabled, and zero if not enabled.

Note:

Thread-safety: a subscriber must serialize access to cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub, d, c) and cuptiEnableCallback(sub, d, c) are called concurrently, the results are undefined.

CUptiResult cuptiSubscribe ( CUpti_SubscriberHandle* subscriber, CUpti_CallbackFunc callback, void* userdata )
Initialize a callback subscriber with a callback function and user data.
Parameters
subscriber
Returns a handle to the initialized subscriber
callback
The callback function
userdata
A pointer to user data. This data will be passed to the callback function via the userdata parameter.
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_MAX_LIMIT_REACHED

    if there is already a CUPTI subscriber

  • CUPTI_ERROR_INVALID_PARAMETER

    if subscriber is NULL

Description

Initializes a callback subscriber with a callback function and (optionally) a pointer to user data. The returned subscriber handle can be used to enable and disable the callback for specific domains and callback IDs.

Note:
  • Only a single subscriber can be registered at a time.

  • This function does not enable any callbacks.

  • Thread-safety: this function is thread safe.
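
For illustration, a minimal sketch of the subscribe/enable flow; myCallback is assumed to match CUpti_CallbackFunc (see the callback sketch earlier in this section), and the function names are illustrative:

    #include <stddef.h>
    #include <cupti.h>

    static void CUPTIAPI myCallback(void *userdata, CUpti_CallbackDomain domain,
                                    CUpti_CallbackId cbid, const void *cbdata) { }

    static void startCallbacks(void)
    {
        CUpti_SubscriberHandle subscriber;
        cuptiSubscribe(&subscriber, myCallback, NULL);  // enables no callbacks yet
        cuptiEnableDomain(1, subscriber, CUPTI_CB_DOMAIN_RUNTIME_API);
        // Later: cuptiUnsubscribe(subscriber);
    }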

CUptiResult cuptiSupportedDomains ( size_t* domainCount, CUpti_DomainTable* domainTable )
Get the available callback domains.
Parameters
domainCount
Returns number of callback domains
domainTable
Returns pointer to array of available callback domains
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if domainCount or domainTable are NULL

Description

Returns in *domainTable an array of size *domainCount of all the available callback domains.

Note:

Thread-safety: this function is thread safe.

CUptiResult cuptiUnsubscribe ( CUpti_SubscriberHandle subscriber )
Unregister a callback subscriber.
Parameters
subscriber
Handle to the initialized subscriber
Returns

  • CUPTI_SUCCESS

    on success

  • CUPTI_ERROR_NOT_INITIALIZED

    if unable to initialize CUPTI

  • CUPTI_ERROR_INVALID_PARAMETER

    if subscriber is NULL or not initialized

Description

Removes a callback subscriber so that no future callbacks will be issued to that subscriber.

Note:

Thread-safety: this function is thread safe.

2.5. CUPTI Event API

Functions, types, and enums that implement the CUPTI Event API.

Classes

struct CUpti_EventGroupSet
A set of event groups.
struct CUpti_EventGroupSets
A set of event group sets.

Defines

#define CUPTI_EVENT_INVALID
The value that indicates the event value is invalid.
#define CUPTI_EVENT_OVERFLOW
The overflow value for a CUPTI event.

Typedefs

typedef uint32_t  CUpti_EventDomainID
ID for an event domain.
typedef void *  CUpti_EventGroup
A group of events.
typedef uint32_t  CUpti_EventID
ID for an event.
typedef void  ( *CUpti_KernelReplayUpdateFunc )( const char*  kernelName,  int numReplaysDone, void*  customData )
Function type for getting updates on kernel replay.

Enumerations

enum CUpti_DeviceAttribute
Device attributes.
enum CUpti_DeviceAttributeDeviceClass
Device class.
enum CUpti_EventAttribute
Event attributes.
enum CUpti_EventCategory
An event category.
enum CUpti_EventCollectionMethod
The collection method used for an event.
enum CUpti_EventCollectionMode
Event collection modes.
enum CUpti_EventDomainAttribute
Event domain attributes.
enum CUpti_EventGroupAttribute
Event group attributes.
enum CUpti_EventProfilingScope
Profiling scope for event.
enum CUpti_ReadEventFlags
Flags for cuptiEventGroupReadEvent and cuptiEventGroupReadAllEvents.

Functions

CUptiResult cuptiDeviceEnumEventDomains ( CUdevice device, size_t* arraySizeBytes, CUpti_EventDomainID* domainArray )
Get the event domains for a device.
CUptiResult cuptiDeviceGetAttribute ( CUdevice device, CUpti_DeviceAttribute attrib, size_t* valueSize, void* value )
Read a device attribute.
CUptiResult cuptiDeviceGetEventDomainAttribute ( CUdevice device, CUpti_EventDomainID eventDomain, CUpti_EventDomainAttribute attrib, size_t* valueSize, void* value )
Read an event domain attribute.
CUptiResult cuptiDeviceGetNumEventDomains ( CUdevice device, uint32_t* numDomains )
Get the number of domains for a device.
CUptiResult cuptiDeviceGetTimestamp ( CUcontext context, uint64_t* timestamp )
Read a device timestamp.
CUptiResult cuptiDisableKernelReplayMode ( CUcontext context )
Disable kernel replay mode.
CUptiResult cuptiEnableKernelReplayMode ( CUcontext context )
Enable kernel replay mode.
CUptiResult cuptiEnumEventDomains ( size_t* arraySizeBytes, CUpti_EventDomainID* domainArray )
Get the event domains available on any device.
CUptiResult cuptiEventDomainEnumEvents ( CUpti_EventDomainID eventDomain, size_t* arraySizeBytes, CUpti_EventID* eventArray )
Get the events in a domain.
CUptiResult cuptiEventDomainGetAttribute ( CUpti_EventDomainID eventDomain, CUpti_EventDomainAttribute attrib, size_t* valueSize, void* value )
Read an event domain attribute.
CUptiResult cuptiEventDomainGetNumEvents ( CUpti_EventDomainID eventDomain, uint32_t* numEvents )
Get number of events in a domain.
CUptiResult cuptiEventGetAttribute ( CUpti_EventID event, CUpti_EventAttribute attrib, size_t* valueSize, void* value )
Get an event attribute.
CUptiResult cuptiEventGetIdFromName ( CUdevice device, const char* eventName, CUpti_EventID* event )
Find an event by name.
CUptiResult cuptiEventGroupAddEvent ( CUpti_EventGroup eventGroup, CUpti_EventID event )
Add an event to an event group.
CUptiResult cuptiEventGroupCreate ( CUcontext context, CUpti_EventGroup* eventGroup, uint32_t flags )
Create a new event group for a context.
CUptiResult cuptiEventGroupDestroy ( CUpti_EventGroup eventGroup )
Destroy an event group.
CUptiResult cuptiEventGroupDisable ( CUpti_EventGroup eventGroup )
Disable an event group.
CUptiResult cuptiEventGroupEnable ( CUpti_EventGroup eventGroup )
Enable an event group.
CUptiResult cuptiEventGroupGetAttribute ( CUpti_EventGroup eventGroup, CUpti_EventGroupAttribute attrib, size_t* valueSize, void* value )
Read an event group attribute.
CUptiResult cuptiEventGroupReadAllEvents ( CUpti_EventGroup eventGroup, CUpti_ReadEventFlags flags, size_t* eventValueBufferSizeBytes, uint64_t* eventValueBuffer, size_t* eventIdArraySizeBytes, CUpti_EventID* eventIdArray, size_t* numEventIdsRead )
Read the values for all the events in an event group.
CUptiResult cuptiEventGroupReadEvent ( CUpti_EventGroup eventGroup, CUpti_ReadEventFlags flags, CUpti_EventID event, size_t* eventValueBufferSizeBytes, uint64_t* eventValueBuffer )
Read the value for an event in an event group.
CUptiResult cuptiEventGroupRemoveAllEvents ( CUpti_EventGroup eventGroup )
Remove all events from an event group.
CUptiResult cuptiEventGroupRemoveEvent ( CUpti_EventGroup eventGroup, CUpti_EventID event )
Remove an event from an event group.
CUptiResult cuptiEventGroupResetAllEvents ( CUpti_EventGroup eventGroup )
Zero all the event counts in an event group.
CUptiResult cuptiEventGroupSetAttribute ( CUpti_EventGroup eventGroup, CUpti_EventGroupAttribute attrib, size_t valueSize, void* value )
Write an event group attribute.
CUptiResult cuptiEventGroupSetDisable ( CUpti_EventGroupSet* eventGroupSet )
Disable an event group set.
CUptiResult cuptiEventGroupSetEnable ( CUpti_EventGroupSet* eventGroupSet )
Enable an event group set.
CUptiResult cuptiEventGroupSetsCreate ( CUcontext context, size_t eventIdArraySizeBytes, CUpti_EventID* eventIdArray, CUpti_EventGroupSets** eventGroupPasses )
For a set of events, get the grouping that indicates the number of passes and the event groups necessary to collect the events.
CUptiResult cuptiEventGroupSetsDestroy ( CUpti_EventGroupSets* eventGroupSets )
Destroy a CUpti_EventGroupSets object.
CUptiResult cuptiGetNumEventDomains ( uint32_t* numDomains )
Get the number of event domains available on any device.
CUptiResult cuptiKernelReplaySubscribeUpdate ( CUpti_KernelReplayUpdateFunc updateFunc, void* customData )
Subscribe to kernel replay updates.
CUptiResult cuptiSetEventCollectionMode ( CUcontext context, CUpti_EventCollectionMode mode )
Set the event collection mode.

Defines

#define CUPTI_EVENT_INVALID

The CUPTI event value that indicates the event value is invalid.

Value

((uint64_t)0xFFFFFFFFFFFFFFFEULL)

#define CUPTI_EVENT_OVERFLOW

The CUPTI event value that indicates an overflow.

Value

((uint64_t)0xFFFFFFFFFFFFFFFFULL)

Typedefs

typedef uint32_t CUpti_EventDomainID

ID for an event domain. An event domain represents a group of related events. A device may have multiple instances of a domain, indicating that the device can simultaneously record multiple instances of each event within that domain.

typedef void * CUpti_EventGroup

A group of events. An event group is a collection of events that are managed together. All events in an event group must belong to the same domain.

typedef uint32_t CUpti_EventID

ID for an event. An event represents a countable activity, action, or occurrence on the device.
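
For illustration, a minimal sketch of collecting a single event around a kernel launch; "inst_executed" is an illustrative event name whose availability depends on the device, the helper name is illustrative, and error checking is omitted:

    #include <cuda.h>
    #include <cupti.h>

    static void readOneEvent(CUcontext ctx, CUdevice dev)
    {
        CUpti_EventID eventId;
        CUpti_EventGroup group;
        uint64_t value = 0;
        size_t bytes = sizeof(value);

        cuptiSetEventCollectionMode(ctx, CUPTI_EVENT_COLLECTION_MODE_KERNEL);
        cuptiEventGetIdFromName(dev, "inst_executed", &eventId);
        cuptiEventGroupCreate(ctx, &group, 0);
        cuptiEventGroupAddEvent(group, eventId);
        cuptiEventGroupEnable(group);
        // ... launch and synchronize the kernel under measurement ...
        cuptiEventGroupReadEvent(group, CUPTI_EVENT_READ_FLAG_NONE, eventId,
                                 &bytes, &value);
        cuptiEventGroupDisable(group);
        cuptiEventGroupDestroy(group);
    }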

void ( *CUpti_KernelReplayUpdateFunc )( const char*  kernelName,  int numReplaysDone, void*  customData )

Function type for getting updates on kernel replay.

Parameters
kernelName
The mangled kernel name
numReplaysDone
The number of kernel replays completed so far
customData
Pointer of any custom data passed in when subscribing

Enumerations

enum CUpti_DeviceAttribute

CUPTI device attributes. These attributes can be read using cuptiDeviceGetAttribute.

Values
CUPTI_DEVICE_ATTR_MAX_EVENT_ID = 1
Number of event IDs for a device. Value is a uint32_t.
CUPTI_DEVICE_ATTR_MAX_EVENT_DOMAIN_ID = 2
Number of event domain IDs for a device. Value is a uint32_t.
CUPTI_DEVICE_ATTR_GLOBAL_MEMORY_BANDWIDTH = 3
Get global memory bandwidth in Kbytes/sec. Value is a uint64_t.
CUPTI_DEVICE_ATTR_INSTRUCTION_PER_CYCLE = 4
Get theoretical maximum number of instructions per cycle. Value is a uint32_t.
CUPTI_DEVICE_ATTR_INSTRUCTION_THROUGHPUT_SINGLE_PRECISION = 5
Get theoretical maximum number of single precision instructions that can be executed per second. Value is a uint64_t.
CUPTI_DEVICE_ATTR_MAX_FRAME_BUFFERS = 6
Get number of frame buffers for device. Value is a uint64_t.
CUPTI_DEVICE_ATTR_PCIE_LINK_RATE = 7
Get PCIE link rate in Mega bits/sec for device. Return 0 if bus-type is non-PCIE. Value is a uint64_t.
CUPTI_DEVICE_ATTR_PCIE_LINK_WIDTH = 8
Get PCIE link width for device. Return 0 if bus-type is non-PCIE. Value is a uint64_t.
CUPTI_DEVICE_ATTR_PCIE_GEN = 9
Get PCIE generation for device. Return 0 if bus-type is non-PCIE. Value is a uint64_t.
CUPTI_DEVICE_ATTR_DEVICE_CLASS = 10
Get the class for the device. Value is a CUpti_DeviceAttributeDeviceClass.
CUPTI_DEVICE_ATTR_FLOP_SP_PER_CYCLE = 11
Get the peak single precision flop per cycle. Value is a uint64_t.
CUPTI_DEVICE_ATTR_FLOP_DP_PER_CYCLE = 12
Get the peak double precision flop per cycle. Value is a uint64_t.
CUPTI_DEVICE_ATTR_MAX_L2_UNITS = 13
Get number of L2 units. Value is a uint64_t.
CUPTI_DEVICE_ATTR_MAX_SHARED_MEMORY_CACHE_CONFIG_PREFER_SHARED = 14
Get the maximum shared memory for the CU_FUNC_CACHE_PREFER_SHARED preference. Value is a uint64_t.
CUPTI_DEVICE_ATTR_MAX_SHARED_MEMORY_CACHE_CONFIG_PREFER_L1 = 15
Get the maximum shared memory for the CU_FUNC_CACHE_PREFER_L1 preference. Value is a uint64_t.
CUPTI_DEVICE_ATTR_MAX_SHARED_MEMORY_CACHE_CONFIG_PREFER_EQUAL = 16
Get the maximum shared memory for the CU_FUNC_CACHE_PREFER_EQUAL preference. Value is a uint64_t.
CUPTI_DEVICE_ATTR_FLOP_HP_PER_CYCLE = 17
Get the peak half precision flop per cycle. Value is a uint64_t.
CUPTI_DEVICE_ATTR_NVLINK_PRESENT = 18
Check if NVLink is connected to the device. Returns 1 if at least one NVLink is connected to the device, 0 otherwise. Value is a uint32_t.
CUPTI_DEVICE_ATTR_GPU_CPU_NVLINK_BW = 19
Check if NVLink is present between the GPU and CPU. Returns the bandwidth in bytes/sec if NVLink is present, 0 otherwise. Value is a uint64_t.
CUPTI_DEVICE_ATTR_NVSWITCH_PRESENT = 20
Check if NVSwitch is present in the underlying topology. Returns 1 if present, 0 otherwise. Value is a uint32_t.
CUPTI_DEVICE_ATTR_FORCE_INT = 0x7fffffff
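
For illustration, a minimal sketch of reading one of the attributes above with cuptiDeviceGetAttribute; the helper name is illustrative, and the value type must match the attribute's documented type:

    #include <stdio.h>
    #include <cuda.h>
    #include <cupti.h>

    static void printMemBandwidth(CUdevice dev)
    {
        uint64_t bw = 0;  // documented above as a uint64_t, in Kbytes/sec
        size_t size = sizeof(bw);
        cuptiDeviceGetAttribute(dev, CUPTI_DEVICE_ATTR_GLOBAL_MEMORY_BANDWIDTH,
                                &size, &bw);
        printf("global memory bandwidth: %llu KB/s\n", (unsigned long long)bw);
    }
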
enum CUpti_DeviceAttributeDeviceClass

Enumeration of device classes for device attribute CUPTI_DEVICE_ATTR_DEVICE_CLASS.

Values
CUPTI_DEVICE_ATTR_DEVICE_CLASS_TESLA = 0
CUPTI_DEVICE_ATTR_DEVICE_CLASS_QUADRO = 1