6.2. Thread Management [DEPRECATED]
This section describes deprecated thread management functions of the CUDA runtime application programming interface.
Functions
- __host__ cudaError_t cudaThreadExit ( void )
- Exit and clean up from CUDA launches.
- __host__ cudaError_t cudaThreadGetCacheConfig ( cudaFuncCache* pCacheConfig )
- Returns the preferred cache configuration for the current device.
- __host__ cudaError_t cudaThreadGetLimit ( size_t* pValue, cudaLimit limit )
- Returns resource limits.
- __host__ cudaError_t cudaThreadSetCacheConfig ( cudaFuncCache cacheConfig )
- Sets the preferred cache configuration for the current device.
- __host__ cudaError_t cudaThreadSetLimit ( cudaLimit limit, size_t value )
- Set resource limits.
- __host__ cudaError_t cudaThreadSynchronize ( void )
- Wait for compute device to finish.
Functions
- __host__ cudaError_t cudaThreadExit ( void )
Exit and clean up from CUDA launches.
Returns
cudaSuccess
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function cudaDeviceReset(), which should be used instead.
Description
Explicitly destroys and cleans up all resources associated with the current device in the current process. Any subsequent API call to this device will reinitialize the device.
Note that this function will reset the device immediately. It is the caller's responsibility to ensure that the device is not being accessed by any other host threads from the process when this function is called.
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceReset
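The following is a minimal sketch (not part of the original reference) of this cleanup pattern, calling the non-deprecated cudaDeviceReset() as the deprecation note directs:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int* d_buf = nullptr;
    cudaMalloc(&d_buf, 256 * sizeof(int));  // create some per-device state
    cudaFree(d_buf);
    // cudaThreadExit() is deprecated; cudaDeviceReset() provides the same cleanup.
    // Any later CUDA call on this device would reinitialize it.
    cudaError_t err = cudaDeviceReset();
    printf("cudaDeviceReset: %s\n", cudaGetErrorString(err));
    return 0;
}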
- __host__ cudaError_t cudaThreadGetCacheConfig ( cudaFuncCache* pCacheConfig )
Returns the preferred cache configuration for the current device.
Parameters
- pCacheConfig: Returned cache configuration
Returns
cudaSuccess
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function cudaDeviceGetCacheConfig(), which should be used instead.
Description
On devices where the L1 cache and shared memory use the same hardware resources, this returns through pCacheConfig the preferred cache configuration for the current device. This is only a preference. The runtime will use the requested configuration if possible, but it is free to choose a different configuration if required to execute functions.
This will return a pCacheConfig of cudaFuncCachePreferNone on devices where the size of the L1 cache and shared memory are fixed.
The supported cache configurations are:
- cudaFuncCachePreferNone: no preference for shared memory or L1 (default)
- cudaFuncCachePreferShared: prefer larger shared memory and smaller L1 cache
- cudaFuncCachePreferL1: prefer larger L1 cache and smaller shared memory
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceGetCacheConfig
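A minimal sketch (not part of the original reference) of querying this preference, using the non-deprecated cudaDeviceGetCacheConfig() as the deprecation note directs:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaFuncCache cfg;
    // Equivalent to the deprecated cudaThreadGetCacheConfig(&cfg).
    if (cudaDeviceGetCacheConfig(&cfg) == cudaSuccess) {
        // Devices with fixed L1/shared memory sizes report cudaFuncCachePreferNone.
        printf("preferred cache config: %d\n", (int)cfg);
    }
    return 0;
}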
- __host__ cudaError_t cudaThreadGetLimit ( size_t* pValue, cudaLimit limit )
Returns resource limits.
Parameters
- pValue: Returned size in bytes of limit
- limit: Limit to query
Returns
cudaSuccess, cudaErrorUnsupportedLimit, cudaErrorInvalidValue
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function cudaDeviceGetLimit(), which should be used instead.
Description
Returns in *pValue the current size of limit. The supported cudaLimit values are:
- cudaLimitStackSize: stack size of each GPU thread
- cudaLimitPrintfFifoSize: size of the shared FIFO used by the printf() device system call
- cudaLimitMallocHeapSize: size of the heap used by the malloc() and free() device system calls
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceGetLimit
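A minimal sketch (not part of the original reference) of querying two of these limits, using the non-deprecated cudaDeviceGetLimit() as the deprecation note directs:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t stackSize = 0, heapSize = 0;
    // Equivalent to the deprecated cudaThreadGetLimit(); sizes are reported in bytes.
    cudaDeviceGetLimit(&stackSize, cudaLimitStackSize);
    cudaDeviceGetLimit(&heapSize, cudaLimitMallocHeapSize);
    printf("per-thread stack: %zu bytes, malloc heap: %zu bytes\n", stackSize, heapSize);
    return 0;
}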
- __host__ cudaError_t cudaThreadSetCacheConfig ( cudaFuncCache cacheConfig )
Sets the preferred cache configuration for the current device.
Parameters
- cacheConfig: Requested cache configuration
Returns
cudaSuccess
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function cudaDeviceSetCacheConfig(), which should be used instead.
Description
On devices where the L1 cache and shared memory use the same hardware resources, this sets through cacheConfig the preferred cache configuration for the current device. This is only a preference. The runtime will use the requested configuration if possible, but it is free to choose a different configuration if required to execute the function. Any function preference set via cudaFuncSetCacheConfig (C API) or cudaFuncSetCacheConfig (C++ API) will be preferred over this device-wide setting. Setting the device-wide cache configuration to cudaFuncCachePreferNone will cause subsequent kernel launches to prefer to not change the cache configuration unless required to launch the kernel.
This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.
Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.
The supported cache configurations are:
- cudaFuncCachePreferNone: no preference for shared memory or L1 (default)
- cudaFuncCachePreferShared: prefer larger shared memory and smaller L1 cache
- cudaFuncCachePreferL1: prefer larger L1 cache and smaller shared memory
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceSetCacheConfig
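A minimal sketch (not part of the original reference) of requesting a device-wide preference, using the non-deprecated cudaDeviceSetCacheConfig() as the deprecation note directs; the kernel name is hypothetical:

#include <cuda_runtime.h>

// Hypothetical kernel standing in for one that benefits from more shared memory.
__global__ void sharedHeavyKernel() { }

int main() {
    // Request a larger shared-memory carveout device-wide; this is a preference only,
    // and a per-function setting made with cudaFuncSetCacheConfig() takes priority.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);  // replaces cudaThreadSetCacheConfig()
    sharedHeavyKernel<<<1, 32>>>();
    cudaDeviceSynchronize();
    return 0;
}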
- __host__ cudaError_t cudaThreadSetLimit ( cudaLimit limit, size_t value )
Set resource limits.
Parameters
- limit: Limit to set
- value: Size in bytes of limit
Returns
cudaSuccess, cudaErrorUnsupportedLimit, cudaErrorInvalidValue
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function cudaDeviceSetLimit(), which should be used instead.
Description
Setting limit to value is a request by the application to update the current limit maintained by the device. The driver is free to modify the requested value to meet hardware requirements (this could be clamping to minimum or maximum values, rounding up to the nearest element size, etc.). The application can use cudaThreadGetLimit() to find out exactly what the limit has been set to.
Setting each cudaLimit has its own specific restrictions, so each is discussed here.
- cudaLimitStackSize controls the stack size of each GPU thread.
- cudaLimitPrintfFifoSize controls the size of the shared FIFO used by the printf() device system call. Setting cudaLimitPrintfFifoSize must be performed before launching any kernel that uses the printf() device system call, otherwise cudaErrorInvalidValue will be returned.
- cudaLimitMallocHeapSize controls the size of the heap used by the malloc() and free() device system calls. Setting cudaLimitMallocHeapSize must be performed before launching any kernel that uses the malloc() or free() device system calls, otherwise cudaErrorInvalidValue will be returned.
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceSetLimit
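A minimal sketch (not part of the original reference) of the printf() FIFO restriction described above, using the non-deprecated cudaDeviceSetLimit() as the deprecation note directs; the kernel name is hypothetical:

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel that uses the device-side printf() system call.
__global__ void chatty() { printf("hello from thread %d\n", threadIdx.x); }

int main() {
    // The FIFO size must be set before the first launch of any printf()-using kernel,
    // otherwise cudaErrorInvalidValue is returned; the driver may adjust the value.
    cudaDeviceSetLimit(cudaLimitPrintfFifoSize, 4 * 1024 * 1024);  // replaces cudaThreadSetLimit()
    size_t actual = 0;
    cudaDeviceGetLimit(&actual, cudaLimitPrintfFifoSize);  // read back the granted size
    printf("printf FIFO: %zu bytes\n", actual);
    chatty<<<1, 4>>>();
    cudaDeviceSynchronize();  // also flushes the device printf() output
    return 0;
}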
- __host__ cudaError_t cudaThreadSynchronize ( void )
Wait for compute device to finish.
Returns
cudaSuccess
Deprecated
Note that this function is deprecated because its name does not reflect its behavior. Its functionality is similar to the non-deprecated function cudaDeviceSynchronize(), which should be used instead.
Description
Blocks until the device has completed all preceding requested tasks. cudaThreadSynchronize() returns an error if one of the preceding tasks has failed. If the cudaDeviceScheduleBlockingSync flag was set for this device, the host thread will block until the device has finished its work.
Note:
- Note that this function may also return error codes from previous, asynchronous launches.
- Note that this function may also return cudaErrorInitializationError, cudaErrorInsufficientDriver or cudaErrorNoDevice if this call tries to initialize internal CUDA RT state.
- Note that, as specified by cudaStreamAddCallback, no CUDA function may be called from the callback. cudaErrorNotPermitted may, but is not guaranteed to, be returned as a diagnostic in such a case.
See also:
- cudaDeviceSynchronize
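A minimal sketch (not part of the original reference) of the blocking behavior, using the non-deprecated cudaDeviceSynchronize() as the deprecation note directs; the kernel is hypothetical:

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel used to give the device some work to wait on.
__global__ void square(int* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= v[i];
}

int main() {
    const int n = 8;
    int h[n] = {1, 2, 3, 4, 5, 6, 7, 8};
    int* d = nullptr;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    square<<<1, n>>>(d, n);
    // Replaces the deprecated cudaThreadSynchronize(): blocks until all preceding
    // tasks have completed and surfaces any error from those tasks.
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess) printf("kernel failed: %s\n", cudaGetErrorString(err));
    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return 0;
}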