Parallel Thread Execution ISA Version 7.7
The programming guide to using PTX (Parallel Thread Execution) and ISA (Instruction Set Architecture).
1. Introduction
This document describes PTX, a low-level parallel thread execution virtual machine and instruction set architecture (ISA). PTX exposes the GPU as a data-parallel computing device.
1.1. Scalable Data-Parallel Computing using GPUs
Driven by the insatiable market demand for real-time, high-definition 3D graphics, the programmable GPU has evolved into a highly parallel, multithreaded, many-core processor with tremendous computational horsepower and very high memory bandwidth. The GPU is especially well-suited to address problems that can be expressed as data-parallel computations (the same program is executed on many data elements in parallel) with high arithmetic intensity (the ratio of arithmetic operations to memory operations). Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.
Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets can use a data-parallel programming model to speed up the computations. In 3D rendering, large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.
PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at install time to the target hardware instruction set. The PTX-to-GPU translator and driver enable NVIDIA GPUs to be used as programmable parallel computers.
1.2. Goals of PTX
PTX provides a stable programming model and instruction set for general purpose parallel programming. It is designed to be efficient on NVIDIA GPUs supporting the computation features defined by the NVIDIA Tesla architecture. High-level language compilers for languages such as CUDA and C/C++ generate PTX instructions, which are optimized for and translated to native target-architecture instructions. The goals for PTX include the following:
- Provide a stable ISA that spans multiple GPU generations.
- Achieve performance in compiled applications comparable to native GPU performance.
- Provide a machine-independent ISA for C/C++ and other compilers to target.
- Provide a code distribution ISA for application and middleware developers.
- Provide a common source-level ISA for optimizing code generators and translators, which map PTX to specific target machines.
- Facilitate hand-coding of libraries, performance kernels, and architecture tests.
- Provide a scalable programming model that spans GPU sizes from a single unit to many parallel units.
1.3. PTX ISA Version 7.7
PTX ISA version 7.7 introduces the following new feature:
- Extends isspacep and cvta instructions to include the .param state space for kernel function parameters.
1.4. Document Structure
- Programming Model outlines the programming model.
- PTX Machine Model gives an overview of the PTX virtual machine model.
- Syntax describes the basic syntax of the PTX language.
- State Spaces, Types, and Variables describes state spaces, types, and variable declarations.
- Instruction Operands describes instruction operands.
- Abstracting the ABI describes the function and call syntax, calling convention, and PTX support for abstracting the Application Binary Interface (ABI).
- Instruction Set describes the instruction set.
- Special Registers lists special registers.
- Directives lists the assembly directives supported in PTX.
- Release Notes provides release notes for PTX ISA versions 2.x and beyond.
References

754-2008 IEEE Standard for Floating-Point Arithmetic. ISBN 978-0-7381-5752-8, 2008.

The OpenCL Specification, Version: 1.1, Document Revision: 44, June 1, 2011.

CUDA Dynamic Parallelism Programming Guide.
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cuda-dynamic-parallelism

PTX Writers Guide to Interoperability.
https://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability/index.html
2. Programming Model
2.1. A Highly Multithreaded Coprocessor
The GPU is a compute device capable of executing a very large number of threads in parallel. It operates as a coprocessor to the main CPU, or host: In other words, data-parallel, compute-intensive portions of applications running on the host are offloaded onto the device.
More precisely, a portion of an application that is executed many times, but independently on different data, can be isolated into a kernel function that is executed on the GPU as many different threads. To that effect, such a function is compiled to the PTX instruction set and the resulting kernel is translated at install time to the target GPU instruction set.
2.2. Thread Hierarchy
The batch of threads that executes a kernel is organized as a grid of cooperative thread arrays as described in this section and illustrated in Figure 1. Cooperative thread arrays (CTAs) implement CUDA thread blocks.
2.2.1. Cooperative Thread Arrays
The Parallel Thread Execution (PTX) programming model is explicitly parallel: a PTX program specifies the execution of a given thread of a parallel thread array. A cooperative thread array, or CTA, is an array of threads that execute a kernel concurrently or in parallel.
Threads within a CTA can communicate with each other. To coordinate the communication of the threads within the CTA, one can specify synchronization points where threads wait until all threads in the CTA have arrived.
Each thread has a unique thread identifier within the CTA. Programs use a data parallel decomposition to partition inputs, work, and results across the threads of the CTA. Each CTA thread uses its thread identifier to determine its assigned role, assign specific input and output positions, compute addresses, and select work to perform. The thread identifier is a three-element vector tid (with elements tid.x, tid.y, and tid.z) that specifies the thread's position within a 1D, 2D, or 3D CTA. Each thread identifier component ranges from zero up to the number of thread ids in that CTA dimension.
Each CTA has a 1D, 2D, or 3D shape specified by a three-element vector ntid (with elements ntid.x, ntid.y, and ntid.z). The vector ntid specifies the number of threads in each CTA dimension.
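As an illustrative sketch (register names here are hypothetical), a thread in a 2D CTA can combine its thread identifier with the CTA shape to compute a flat index:

    .reg .u32 %r<5>;
    mov.u32    %r1, %tid.x;         // thread's x position within the CTA
    mov.u32    %r2, %tid.y;         // thread's y position within the CTA
    mov.u32    %r3, %ntid.x;        // CTA width (number of threads in x)
    mad.lo.u32 %r4, %r2, %r3, %r1;  // flat index = tid.y * ntid.x + tid.x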
Threads within a CTA execute in SIMT (single-instruction, multiple-thread) fashion in groups called warps. A warp is a maximal subset of threads from a single CTA, such that the threads execute the same instructions at the same time. Threads within a warp are sequentially numbered. The warp size is a machine-dependent constant. Typically, a warp has 32 threads. Some applications may be able to maximize performance with knowledge of the warp size, so PTX includes a run-time immediate constant, WARP_SZ, which may be used in any instruction where an immediate operand is allowed.
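For example, a thread's warp index within its CTA might be computed as follows (a sketch with hypothetical register names; WARP_SZ is used as an immediate operand):

    .reg .u32 %r<3>;
    mov.u32 %r1, %tid.x;        // thread id within a 1D CTA
    div.u32 %r2, %r1, WARP_SZ;  // warp index within the CTA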
2.2.2. Grid of Cooperative Thread Arrays
There is a maximum number of threads that a CTA can contain. However, CTAs that execute the same kernel can be batched together into a grid of CTAs, so that the total number of threads that can be launched in a single kernel invocation is very large. This comes at the expense of reduced thread communication and synchronization, because threads in different CTAs cannot communicate and synchronize with each other.
Multiple CTAs may execute concurrently and in parallel, or sequentially, depending on the platform. Each CTA has a unique CTA identifier (ctaid) within a grid of CTAs. Each grid of CTAs has a 1D, 2D, or 3D shape specified by the parameter nctaid. Each grid also has a unique temporal grid identifier (gridid). Threads may read and use these values through predefined, read-only special registers %tid, %ntid, %ctaid, %nctaid, and %gridid.
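A common idiom, sketched here with hypothetical register names, combines these special registers to form a globally unique 1D index for each thread in the grid:

    .reg .u32 %r<5>;
    mov.u32    %r1, %ctaid.x;       // CTA index within the grid
    mov.u32    %r2, %ntid.x;        // threads per CTA in x
    mov.u32    %r3, %tid.x;         // thread index within the CTA
    mad.lo.u32 %r4, %r1, %r2, %r3;  // global index = ctaid.x * ntid.x + tid.x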
The host issues a succession of kernel invocations to the device. Each kernel is executed as a batch of threads organized as a grid of CTAs (Figure 1).
A cooperative thread array (CTA) is a set of concurrent threads that execute the same kernel program. A grid is a set of CTAs that execute independently.
2.3. Memory Hierarchy
PTX threads may access data from multiple memory spaces during their execution as illustrated by Figure 2. Each thread has a private local memory. Each thread block (CTA) has a shared memory visible to all threads of the block and with the same lifetime as the block. Finally, all threads have access to the same global memory.
There are additional memory spaces accessible by all threads: the constant, texture, and surface memory spaces. Constant and texture memory are read-only; surface memory is readable and writable. The global, constant, texture, and surface memory spaces are optimized for different memory usages. For example, texture memory offers different addressing modes as well as data filtering for specific data formats. Note that texture and surface memory are cached, and within the same kernel call, the cache is not kept coherent with respect to global memory writes and surface memory writes, so any texture fetch or surface read to an address that has been written to via a global or a surface write in the same kernel call returns undefined data. In other words, a thread can safely read some texture or surface memory location only if this memory location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread from the same kernel call.
The global, constant, and texture memory spaces are persistent across kernel launches by the same application.
Both the host and the device maintain their own local memory, referred to as host memory and device memory, respectively. The device memory may be mapped and read or written by the host, or, for more efficient transfer, copied from the host memory through optimized API calls that utilize the device's high-performance Direct Memory Access (DMA) engine.
3. PTX Machine Model
3.1. A Set of SIMT Multiprocessors
The NVIDIA GPU architecture is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). When a host program invokes a kernel grid, the blocks of the grid are enumerated and distributed to multiprocessors with available execution capacity. The threads of a thread block execute concurrently on one multiprocessor. As thread blocks terminate, new blocks are launched on the vacated multiprocessors.
A multiprocessor consists of multiple Scalar Processor (SP) cores, a multithreaded instruction unit, and on-chip shared memory. The multiprocessor creates, manages, and executes concurrent threads in hardware with zero scheduling overhead. It implements a single-instruction barrier synchronization. Fast barrier synchronization together with lightweight thread creation and zero-overhead thread scheduling efficiently support very fine-grained parallelism, allowing, for example, a low granularity decomposition of problems by assigning one thread to each data element (such as a pixel in an image, a voxel in a volume, a cell in a grid-based computation).
To manage hundreds of threads running several different programs, the multiprocessor employs an architecture we call SIMT (single-instruction, multiple-thread). The multiprocessor maps each thread to one scalar processor core, and each scalar thread executes independently with its own instruction address and register state. The multiprocessor SIMT unit creates, manages, schedules, and executes threads in groups of parallel threads called warps. (This term originates from weaving, the first parallel thread technology.) Individual threads composing a SIMT warp start together at the same program address but are otherwise free to branch and execute independently.
When a multiprocessor is given one or more thread blocks to execute, it splits them into warps that get scheduled by the SIMT unit. The way a block is split into warps is always the same; each warp contains threads of consecutive, increasing thread IDs with the first warp containing thread 0.
At every instruction issue time, the SIMT unit selects a warp that is ready to execute and issues the next instruction to the active threads of the warp. A warp executes one common instruction at a time, so full efficiency is realized when all threads of a warp agree on their execution path. If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads converge back to the same execution path. Branch divergence occurs only within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths.
SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector organizations in that a single instruction controls multiple processing elements. A key difference is that SIMD vector organizations expose the SIMD width to the software, whereas SIMT instructions specify the execution and branching behavior of a single thread. In contrast with SIMD vector machines, SIMT enables programmers to write thread-level parallel code for independent, scalar threads, as well as data-parallel code for coordinated threads. For the purposes of correctness, the programmer can essentially ignore the SIMT behavior; however, substantial performance improvements can be realized by taking care that the code seldom requires threads in a warp to diverge. In practice, this is analogous to the role of cache lines in traditional code: Cache line size can be safely ignored when designing for correctness but must be considered in the code structure when designing for peak performance. Vector architectures, on the other hand, require the software to coalesce loads into vectors and manage divergence manually.
How many blocks a multiprocessor can process at once depends on how many registers per thread and how much shared memory per block are required for a given kernel since the multiprocessor's registers and shared memory are split among all the threads of the batch of blocks. If there are not enough registers or shared memory available per multiprocessor to process at least one block, the kernel will fail to launch.
A set of SIMT multiprocessors with on-chip shared memory.
3.2. Independent Thread Scheduling
On architectures prior to Volta, warps used a single program counter shared amongst all 32 threads in the warp together with an active mask specifying the active threads of the warp. As a result, threads from the same warp in divergent regions or different states of execution cannot signal each other or exchange data, and algorithms requiring finegrained sharing of data guarded by locks or mutexes can easily lead to deadlock, depending on which warp the contending threads come from.
Starting with the Volta architecture, Independent Thread Scheduling allows full concurrency between threads, regardless of warp. With Independent Thread Scheduling, the GPU maintains execution state per thread, including a program counter and call stack, and can yield execution at a per-thread granularity, either to make better use of execution resources or to allow one thread to wait for data to be produced by another. A schedule optimizer determines how to group active threads from the same warp together into SIMT units. This retains the high throughput of SIMT execution as in prior NVIDIA GPUs, but with much more flexibility: threads can now diverge and reconverge at sub-warp granularity.
Independent Thread Scheduling can lead to a rather different set of threads participating in the executed code than intended if the developer made assumptions about warp-synchronicity of previous hardware architectures. In particular, any warp-synchronous code (such as synchronization-free, intra-warp reductions) should be revisited to ensure compatibility with Volta and beyond. See the section on Compute Capability 7.x in the CUDA Programming Guide for further details.
4. Syntax
PTX programs are a collection of text source modules (files). PTX source modules have an assembly-language style syntax with instruction operation codes and operands. Pseudo-operations specify symbol and addressing management. The ptxas optimizing backend compiler optimizes and assembles PTX source modules to produce corresponding binary object files.
4.1. Source Format
Source modules are ASCII text. Lines are separated by the newline character (\n).
All whitespace characters are equivalent; whitespace is ignored except for its use in separating tokens in the language.
The C preprocessor cpp may be used to process PTX source modules. Lines beginning with # are preprocessor directives. The following are common preprocessor directives:
#include, #define, #if, #ifdef, #else, #endif, #line, #file
C: A Reference Manual by Harbison and Steele provides a good description of the C preprocessor.
PTX is case sensitive and uses lowercase for keywords.
Each PTX module must begin with a .version directive specifying the PTX language version, followed by a .target directive specifying the target architecture assumed. See PTX Module Directives for more information on these directives.
4.2. Comments
Comments in PTX follow C/C++ syntax, using non-nested /* and */ for comments that may span multiple lines, and using // to begin a comment that extends up to the next newline character, which terminates the current line. Comments cannot occur within character constants, string literals, or within other comments.
Comments in PTX are treated as whitespace.
4.3. Statements
A PTX statement is either a directive or an instruction. Statements begin with an optional label and end with a semicolon.
Examples
        .reg     .b32 r1, r2;
        .global  .f32 array[N];

start:  mov.b32  r1, %tid.x;
        shl.b32  r1, r1, 2;           // shift thread id by 2 bits
        ld.global.b32 r2, array[r1];  // thread[tid] gets array[tid]
        add.f32  r2, r2, 0.5;         // add 1/2
4.3.1. Directive Statements
Directive keywords begin with a dot, so no conflict is possible with user-defined identifiers. The directives in PTX are listed in Table 1 and described in State Spaces, Types, and Variables and Directives.
4.3.2. Instruction Statements
Instructions are formed from an instruction opcode followed by a comma-separated list of zero or more operands, and terminated with a semicolon. Operands may be register variables, constant expressions, address expressions, or label names. Instructions have an optional guard predicate which controls conditional execution. The guard predicate follows the optional label and precedes the opcode, and is written as @p, where p is a predicate register. The guard predicate may be optionally negated, written as @!p.
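A minimal sketch of guarded execution (the register names are hypothetical):

        .reg .pred p;
        .reg .s32  a, b;
        setp.lt.s32 p, a, b;  // p = (a < b)
    @p  add.s32     b, b, 1;  // executed only by threads where p is true
    @!p mov.s32     b, 0;     // negated guard: executed where p is false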
The destination operand is first, followed by source operands.
Instruction keywords are listed in Table 2. All instruction keywords are reserved tokens in PTX.
abs  cvt  min  shfl  vadd 
activemask  cvta  mma  shl  vadd2 
add  discard  mov  shr  vadd4 
addc  div  mul  sin  vavrg2 
alloca  dp2a  mul24  slct  vavrg4 
and  dp4a  nanosleep  sqrt  vmad 
applypriority  ex2  neg  st  vmax 
atom  exit  not  stackrestore  vmax2 
bar  fence  or  stacksave  vmax4 
barrier  fma  pmevent  sub  vmin 
bfe  fns  popc  subc  vmin2 
bfi  isspacep  prefetch  suld  vmin4 
bfind  istypep  prefetchu  suq  vote 
bmsk  ld  prmt  sured  vset 
bra  ldmatrix  rcp  sust  vset2 
brev  ldu  red  szext  vset4 
brkpt  lg2  redux  tanh  vshl 
brx  lop3  rem  testp  vshr 
call  mad  ret  tex  vsub 
clz  mad24  rsqrt  tld4  vsub2 
cnot  madc  sad  trap  vsub4 
copysign  match  selp  txq  wmma 
cos  max  set  vabsdiff  xor 
cp  mbarrier  setp  vabsdiff2  
createpolicy  membar  shf  vabsdiff4 
4.4. Identifiers
User-defined identifiers follow extended C++ rules: they either start with a letter followed by zero or more letters, digits, underscore, or dollar characters; or they start with an underscore, dollar, or percentage character followed by one or more letters, digits, underscore, or dollar characters:
followsym:   [a-zA-Z0-9_$]
identifier:  [a-zA-Z]{followsym}* | {[_$%]{followsym}+}
PTX does not specify a maximum length for identifiers and suggests that all implementations support a minimum length of at least 1024 characters.
Many high-level languages such as C and C++ follow similar rules for identifier names, except that the percentage sign is not allowed. PTX allows the percentage sign as the first character of an identifier. The percentage sign can be used to avoid name conflicts, e.g., between user-defined variable names and compiler-generated names.
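For example, the following declaration uses only legal user-defined identifiers (all names hypothetical):

    .reg .b32 x, _y, $z, %tmp;  // letter, underscore, dollar, and percent first characters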
PTX predefines one constant and a small number of special registers that begin with the percentage sign, listed in Table 3.
4.5. Constants
PTX supports integer and floating-point constants and constant expressions. These constants may be used in data initialization and as operands to instructions. Type checking rules remain the same for integer, floating-point, and bit-size types. For predicate-type data and instructions, integer constants are allowed and are interpreted as in C, i.e., zero values are False and nonzero values are True.
4.5.1. Integer Constants
Integer constants are 64 bits in size and are either signed or unsigned, i.e., every integer constant has type .s64 or .u64. The signed/unsigned nature of an integer constant is needed to correctly evaluate constant expressions containing operations such as division and ordered comparisons, where the behavior of the operation depends on the operand types. When used in an instruction or data initialization, each integer constant is converted to the appropriate size based on the data or instruction type at its use.
Integer literals may be written in decimal, hexadecimal, octal, or binary notation. The syntax follows that of C. Integer literals may be followed immediately by the letter U to indicate that the literal is unsigned.
hexadecimal literal:  0[xX]{hexdigit}+U?
octal literal:        0{octal digit}+U?
binary literal:       0[bB]{bit}+U?
decimal literal:      {nonzero-digit}{digit}*U?
Integer literals are nonnegative and have a type determined by their magnitude and optional type suffix as follows: literals are signed (.s64) unless the value cannot be fully represented in .s64 or the unsigned suffix is specified, in which case the literal is unsigned (.u64).
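As a sketch, the same value can be written in each notation (register names hypothetical):

    .reg .u32 %r<6>;
    mov.u32 %r1, 255;         // decimal
    mov.u32 %r2, 0xFF;        // hexadecimal
    mov.u32 %r3, 0377;        // octal
    mov.u32 %r4, 0b11111111;  // binary
    mov.u32 %r5, 255U;        // unsigned (.u64) literal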
The predefined integer constant WARP_SZ specifies the number of threads per warp for the target platform; to date, all target architectures have a WARP_SZ value of 32.
4.5.2. FloatingPoint Constants
Floating-point constants are represented as 64-bit double-precision values, and all floating-point constant expressions are evaluated using 64-bit double-precision arithmetic. The only exception is the 32-bit hex notation for expressing an exact single-precision floating-point value; such values retain their exact 32-bit single-precision value and may not be used in constant expressions. Each 64-bit floating-point constant is converted to the appropriate floating-point size based on the data or instruction type at its use.
Floating-point literals may be written with an optional decimal point and an optional signed exponent. Unlike C and C++, there is no suffix letter to specify size; literals are always represented in 64-bit double-precision format.
PTX includes a second representation of floating-point constants for specifying the exact machine representation using a hexadecimal constant. To specify IEEE 754 double-precision floating point values, the constant begins with 0d or 0D followed by 16 hex digits. To specify IEEE 754 single-precision floating point values, the constant begins with 0f or 0F followed by 8 hex digits.
0[fF]{hexdigit}{8}   // single-precision floating point
0[dD]{hexdigit}{16}  // double-precision floating point
Example
mov.f32 $f3, 0F3f800000; // 1.0
4.5.3. Predicate Constants
In PTX, integer constants may be used as predicates. For predicate-type data initializers and instruction operands, integer constants are interpreted as in C, i.e., zero values are False and nonzero values are True.
4.5.4. Constant Expressions
In PTX, constant expressions are formed using operators as in C and are evaluated using rules similar to those in C, but simplified by restricting types and sizes, removing most casts, and defining full semantics to eliminate cases where expression evaluation in C is implementation dependent.
Constant expressions are formed from constant literals, unary plus and minus, basic arithmetic operators (addition, subtraction, multiplication, division), comparison operators, the conditional ternary operator (?:), and parentheses. Integer constant expressions also allow unary logical negation (!), bitwise complement (~), remainder (%), shift operators (<< and >>), bit-type operators (&, |, and ^), and logical operators (&& and ||).
Constant expressions in PTX do not support casts between integer and floatingpoint.
Constant expressions are evaluated using the same operator precedence as in C. Table 4 gives operator precedence and associativity. Operator precedence is highest for unary operators and decreases with each line in the chart. Operators on the same line have the same precedence and are evaluated right-to-left for unary operators and left-to-right for binary operators.
Kind     Operator Symbols  Operator Names                       Associates

Primary  ()                parenthesis                          n/a
Unary    + - ! ~           plus, minus, negation, complement    right
         (.s64)(.u64)      casts                                right
Binary   * / %             multiplication, division, remainder  left
         + -               addition, subtraction                left
         >> <<             shifts                               left
         < > <= >=         ordered comparisons                  left
         == !=             equal, not equal                     left
         &                 bitwise AND                          left
         ^                 bitwise XOR                          left
         |                 bitwise OR                           left
         &&                logical AND                          left
         ||                logical OR                           left
Ternary  ?:                conditional                          right
4.5.5. Integer Constant Expression Evaluation
Integer constant expressions are evaluated at compile time according to a set of rules that determine the type (signed .s64 versus unsigned .u64) of each subexpression. These rules are based on the rules in C, but they have been simplified to apply only to 64-bit integers, and behavior is fully defined in all cases (specifically, for remainder and shift operators).
- Literals are signed unless unsigned is needed to prevent overflow, or unless the literal uses a U suffix. For example:
  - 42, 0x1234, 0123 are signed.
  - 0xfabc123400000000, 42U, 0x1234U are unsigned.
- Unary plus and minus preserve the type of the input operand. For example:
  - +123, -1, -(-42) are signed.
  - -1U, -0xfabc123400000000 are unsigned.
- Unary logical negation (!) produces a signed result with value 0 or 1.
- Unary bitwise complement (~) interprets the source operand as unsigned and produces an unsigned result.
- Some binary operators require normalization of source operands. This normalization is known as the usual arithmetic conversions and simply converts both operands to unsigned type if either operand is unsigned.
- Addition, subtraction, multiplication, and division perform the usual arithmetic conversions and produce a result with the same type as the converted operands. That is, the operands and result are unsigned if either source operand is unsigned, and are otherwise signed.
- Remainder (%) interprets the operands as unsigned. Note that this differs from C, which allows a negative divisor but defines the behavior to be implementation dependent.
- Left and right shift interpret the second operand as unsigned and produce a result with the same type as the first operand. Note that the behavior of right-shift is determined by the type of the first operand: right shift of a signed value is arithmetic and preserves the sign, and right shift of an unsigned value is logical and shifts in a zero bit.
- AND (&), OR (|), and XOR (^) perform the usual arithmetic conversions and produce a result with the same type as the converted operands.
- AND_OP (&&), OR_OP (||), Equal (==), and Not_Equal (!=) produce a signed result. The result value is 0 or 1.
- Ordered comparisons (<, <=, >, >=) perform the usual arithmetic conversions on source operands and produce a signed result. The result value is 0 or 1.
- Casting of expressions to signed or unsigned is supported using (.s64) and (.u64) casts.
- For the conditional operator (?:), the first operand must be an integer, and the second and third operands are either both integers or both floating-point. The usual arithmetic conversions are performed on the second and third operands, and the result type is the same as the converted type.
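For instance, constant expressions may size arrays or initialize variables in declarations; a sketch with hypothetical names:

    .global .u32 table[(1 << 4) + 4];   // array of 20 elements
    .global .s32 limit = 0x10 * 4 - 1;  // initialized to 63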
4.5.6. Summary of Constant Expression Evaluation Rules
Table 5 contains a summary of the constant expression evaluation rules.
Kind     Operator          Operand Types      Operand Interpretation      Result Type

Primary  ()                any type           same as source              same as source
         constant literal  n/a                n/a                         .u64, .s64, or .f64
Unary    + -               any type           same as source              same as source
         !                 integer            zero or nonzero             .s64
         ~                 integer            .u64                        .u64
Cast     (.u64)            integer            .u64                        .u64
         (.s64)            integer            .s64                        .s64
Binary   + - * /           .f64               .f64                        .f64
                           integer            use usual conversions       converted type
         < > <= >=         .f64               .f64                        .s64
                           integer            use usual conversions       .s64
         == !=             .f64               .f64                        .s64
                           integer            use usual conversions       .s64
         %                 integer            .u64                        .s64
         >> <<             integer            1st unchanged, 2nd is .u64  same as 1st operand
         & | ^             integer            .u64                        .u64
         && ||             integer            zero or nonzero             .s64
Ternary  ?:                int ? .f64 : .f64  same as sources             .f64
         ?:                int ? int : int    use usual conversions       converted type
5. State Spaces, Types, and Variables
While the specific resources available in a given target GPU will vary, the kinds of resources will be common across platforms, and these resources are abstracted in PTX through state spaces and data types.
5.1. State Spaces
A state space is a storage area with particular characteristics. All variables reside in some state space. The characteristics of a state space include its size, addressability, access speed, access rights, and level of sharing between threads.
The state spaces defined in PTX are a byproduct of parallel programming and graphics programming. The list of state spaces is shown in Table 6, and properties of state spaces are shown in Table 7.
Name  Description 

.reg  Registers, fast. 
.sreg  Special registers. Read-only; predefined; platform-specific. 
.const  Shared, read-only memory. 
.global  Global memory, shared by all threads. 
.local  Local memory, private to each thread. 
.param  Kernel parameters, defined per-grid; or function or local parameters, defined per-thread. 
.shared  Addressable memory shared between threads in 1 CTA. 
.tex  Global texture memory (deprecated). 
5.1.1. Register State Space
Registers (.reg state space) are fast storage locations. The number of registers is limited, and will vary from platform to platform. When the limit is exceeded, register variables will be spilled to memory, causing changes in performance. For each architecture, there is a recommended maximum number of registers to use (see the CUDA Programming Guide for details).
Registers may be typed (signed integer, unsigned integer, floating point, predicate) or untyped. Register size is restricted; aside from predicate registers which are 1-bit, scalar registers have a width of 8, 16, 32, or 64 bits, and vector registers have a width of 16, 32, 64, or 128 bits. The most common use of 8-bit registers is with ld, st, and cvt instructions, or as elements of vector tuples.
Registers differ from the other state spaces in that they are not fully addressable, i.e., it is not possible to refer to the address of a register. When compiling to use the Application Binary Interface (ABI), register variables are restricted to function scope and may not be declared at module scope. When compiling legacy PTX code (ISA versions prior to 3.0) containing modulescoped .reg variables, the compiler silently disables use of the ABI. Registers may have alignment boundaries required by multiword loads and stores.
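A sketch of typical register declarations (names hypothetical):

    .reg .pred p, q;    // 1-bit predicate registers
    .reg .u16  h1;      // 16-bit unsigned scalar
    .reg .f64  d;       // 64-bit floating-point scalar
    .reg .v4 .f32 V;    // 128-bit vector of four .f32 elements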
5.1.2. Special Register State Space
The special register (.sreg) state space holds predefined, platform-specific registers, such as grid, CTA, and thread parameters, clock counters, and performance monitoring registers. All special registers are predefined.
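For illustration, special registers are read with the mov instruction. The following sketch (register names are illustrative) computes a thread's global x index from the predefined %tid, %ntid, and %ctaid special registers:

.reg .u32 %r1, %r2, %r3, %gx;

mov.u32    %r1, %ctaid.x;       // CTA index within the grid
mov.u32    %r2, %ntid.x;        // number of threads per CTA
mov.u32    %r3, %tid.x;         // thread index within the CTA
mad.lo.u32 %gx, %r1, %r2, %r3;  // global x index = ctaid.x * ntid.x + tid.x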
5.1.3. Constant State Space
The constant (.const) state space is a read-only memory initialized by the host. Constant memory is accessed with a ld.const instruction. Constant memory is restricted in size, currently limited to 64 KB which can be used to hold statically-sized constant variables. There is an additional 640 KB of constant memory, organized as ten independent 64 KB regions. The driver may allocate and initialize constant buffers in these regions and pass pointers to the buffers as kernel function parameters. Since the ten regions are not contiguous, the driver must ensure that constant buffers are allocated so that each buffer fits entirely within a 64 KB region and does not span a region boundary.
Statically-sized constant variables have an optional variable initializer; constant variables with no explicit initializer are initialized to zero by default. Constant buffers allocated by the driver are initialized by the host, and pointers to such buffers are passed to the kernel as parameters. See the description of kernel parameter attributes in Kernel Function Parameter Attributes for more details on passing pointers to constant buffers as kernel parameters.
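As a minimal sketch (the variable name is illustrative), a statically-sized constant variable is declared with an initializer and read with ld.const:

.const .align 4 .b32 table[4] = { 1, 2, 4, 8 };
...
.reg .b32 %r1;
ld.const.b32 %r1, [table+4];   // load the second element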
5.1.3.1. Banked Constant State Space (deprecated)
Previous versions of PTX exposed constant memory as a set of eleven 64 KB banks, with explicit bank numbers required for variable declaration and during access.
Prior to PTX ISA version 2.2, the constant memory was organized into fixed-size banks. There were eleven 64 KB banks, and banks were specified using the .const[bank] modifier, where bank ranged from 0 to 10. If no bank number was given, bank zero was assumed.
For example, the declaration

.extern .const[2] .b32 const_buffer[];

resulted in const_buffer pointing to the start of constant bank two. This pointer could then be used to access the entire 64 KB constant bank. Multiple incomplete array variables declared in the same bank were aliased, with each pointing to the start address of the specified constant bank.
.extern .const[2] .b32 const_buffer[];
ld.const[2].b32 %r1, [const_buffer+4]; // load second word

In PTX ISA version 2.2, we eliminated explicit banks and replaced the incomplete array representation of driver-allocated constant buffers with kernel parameter attributes that allow pointers to constant buffers to be passed as kernel parameters.
5.1.4. Global State Space
The global (.global) state space is memory that is accessible by all threads in a context. It is the mechanism by which different CTAs and different grids can communicate. Use ld.global, st.global, and atom.global to access global variables.
Global variables have an optional variable initializer; global variables with no explicit initializer are initialized to zero by default.
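For example (a sketch with illustrative names), a global counter shared by all threads might be read and atomically updated as follows:

.global .u32 sum = 0;
...
.reg .u32 %r1, %r2;
ld.global.u32       %r1, [sum];     // read the global variable
atom.global.add.u32 %r2, [sum], 1;  // atomic increment; %r2 receives the old value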
5.1.5. Local State Space
The local state space (.local) is private memory for each thread to keep its own data. It is typically standard memory with cache. The size is limited, as it must be allocated on a perthread basis. Use ld.local and st.local to access local variables.
When compiling to use the Application Binary Interface (ABI), .local state-space variables must be declared within function scope and are allocated on the stack. In implementations that do not support a stack, all local memory variables are stored at fixed addresses, recursive function calls are not supported, and .local variables may be declared at module scope. When compiling legacy PTX code (ISA versions prior to 3.0) containing module-scoped .local variables, the compiler silently disables use of the ABI.
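As a sketch (names are illustrative), a per-thread scratch buffer declared within a function is accessed with st.local and ld.local:

.local .align 8 .b8 scratch[16];   // private to each thread
...
.reg .u64 %d;
st.local.u64 [scratch], %d;
ld.local.u64 %d, [scratch+8];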
5.1.6. Parameter State Space
The parameter (.param) state space is used (1) to pass input arguments from the host to the kernel, (2a) to declare formal input and return parameters for device functions called from within kernel execution, and (2b) to declare locally-scoped byte array variables that serve as function call arguments, typically for passing large structures by value to a function. Kernel function parameters differ from device function parameters in terms of access and sharing (read-only versus read-write, per-kernel versus per-thread). Note that PTX ISA version 1.x supports only kernel function parameters in .param space; device function parameters were previously restricted to the register state space. The use of parameter state space for device function parameters was introduced in PTX ISA version 2.0 and requires target architecture sm_20 or higher.
5.1.6.1. Kernel Function Parameters
Each kernel function definition includes an optional list of parameters. These parameters are addressable, readonly variables declared in the .param state space. Values passed from the host to the kernel are accessed through these parameter variables using ld.param instructions. The kernel parameter variables are shared across all CTAs within a grid.
The address of a kernel parameter may be moved into a register using the mov instruction. The resulting address is in the .param state space and is accessed using ld.param instructions.
Example
.entry foo ( .param .b32 N, .param .align 8 .b8 buffer[64] )
{
    .reg .u32 %n;
    .reg .f64 %d;

    ld.param.u32 %n, [N];
    ld.param.f64 %d, [buffer];
    ...
Example
.entry bar ( .param .b32 len )
{
    .reg .u32 %ptr, %n;

    mov.u32      %ptr, len;
    ld.param.u32 %n, [%ptr];
    ...
Kernel function parameters may represent normal data values, or they may hold addresses to objects in constant, global, local, or shared state spaces. In the case of pointers, the compiler and runtime system need information about which parameters are pointers, and to which state space they point. Kernel parameter attribute directives are used to provide this information at the PTX level. See Kernel Function Parameter Attributes for a description of kernel parameter attribute directives.
5.1.6.2. Kernel Function Parameter Attributes
Kernel function parameters may be declared with an optional .ptr attribute to indicate that a parameter is a pointer to memory, and also indicate the state space and alignment of the memory being pointed to. Kernel Parameter Attribute: .ptr describes the .ptr kernel parameter attribute.
5.1.6.3. Kernel Parameter Attribute: .ptr
.ptr
Kernel parameter alignment attribute.
Syntax
.param .type .ptr .space .align N  varname
.param .type .ptr        .align N  varname

.space = { .const, .global, .local, .shared };
Description
Used to specify the state space and, optionally, the alignment of memory pointed to by a pointer-type kernel parameter. The alignment value N, if present, must be a power of two. If no state space is specified, the pointer is assumed to be a generic address pointing to one of const, global, local, or shared memory. If no alignment is specified, the memory pointed to is assumed to be aligned to a 4-byte boundary.
Spaces between .ptr, .space, and .align may be eliminated to improve readability.
PTX ISA Notes
 Introduced in PTX ISA version 2.2.
 Support for generic addressing of .const space added in PTX ISA version 3.1.
Target ISA Notes
 Supported on all target architectures.
Examples
.entry foo ( .param .u32 param1,
             .param .u32 .ptr.global.align 16 param2,
             .param .u32 .ptr.const.align 8 param3,
             .param .u32 .ptr.align 16 param4  // generic address pointer
           )
{
    ..
}
5.1.6.4. Device Function Parameters
PTX ISA version 2.0 extended the use of parameter space to device function parameters. The most common use is for passing objects by value that do not fit within a PTX register, such as C structures larger than 8 bytes. In this case, a byte array in parameter space is used. Typically, the caller will declare a locallyscoped .param byte array variable that represents a flattened C structure or union. This will be passed by value to a callee, which declares a .param formal parameter having the same size and alignment as the passed argument.
Example
// pass object of type struct { double d; int y; };
.func foo ( .reg .b32 N, .param .align 8 .b8 buffer[12] )
{
    .reg .f64 %d;
    .reg .s32 %y;

    ld.param.f64 %d, [buffer];
    ld.param.s32 %y, [buffer+8];
    ...
}

// code snippet from the caller
// struct { double d; int y; } mystruct; is flattened, passed to foo
...
.reg .f64 dbl;
.reg .s32 x;
.param .align 8 .b8 mystruct[12];
...
st.param.f64 [mystruct+0], dbl;
st.param.s32 [mystruct+8], x;
call foo, (4, mystruct);
...
See the section on function call syntax for more details.
Function input parameters may be read via ld.param and function return parameters may be written using st.param; it is illegal to write to an input parameter or read from a return parameter.
Aside from passing structures by value, .param space is also required whenever a formal parameter has its address taken within the called function. In PTX, the address of a function input parameter may be moved into a register using the mov instruction. Note that the parameter will be copied to the stack if necessary, and so the address will be in the .local state space and is accessed via ld.local and st.local instructions. It is not possible to use mov to get the address of a return parameter or a locally-scoped .param space variable. Starting with PTX ISA version 6.0, the mov instruction may be used to get the address of a device function's return parameter.
Example
// pass array of up to eight floating-point values in buffer
.func foo ( .param .b32 N, .param .b32 buffer[32] )
{
    .reg .u32  %n, %r;
    .reg .f32  %f;
    .reg .pred %p;

    ld.param.u32 %n, [N];
    mov.u32      %r, buffer;  // forces buffer to .local state space
Loop:
    setp.eq.u32  %p, %n, 0;
@%p: bra         Done;
    ld.local.f32 %f, [%r];
    ...
    add.u32      %r, %r, 4;
    sub.u32      %n, %n, 1;
    bra          Loop;
Done:
    ...
}
5.1.8. Texture State Space (deprecated)
The texture (.tex) state space is global memory accessed via the texture instruction. It is shared by all threads in a context. Texture memory is read-only and cached, so accesses to texture memory are not coherent with global memory stores to the texture image.
The GPU hardware has a fixed number of texture bindings that can be accessed within a single kernel (typically 128). The .tex directive will bind the named texture memory variable to a hardware texture identifier, where texture identifiers are allocated sequentially beginning with zero. Multiple names may be bound to the same physical texture identifier. An error is generated if the maximum number of physical resources is exceeded. The texture name must be of type .u32 or .u64.
Physical texture resources are allocated on a perkernel granularity, and .tex variables are required to be defined in the global scope.
Texture memory is readonly. A texture's base address is assumed to be aligned to a 16 byte boundary.
Example
.tex .u32 tex_a;         // bound to physical texture 0
.tex .u32 tex_c, tex_d;  // both bound to physical texture 1
.tex .u32 tex_e;         // bound to physical texture 2
.tex .u32 tex_f;         // bound to physical texture 3
.tex .u32 tex_a;

is equivalent to:
.global .texref tex_a;
See Texture Sampler and Surface Types for the description of the .texref type and Texture Instructions for its use in texture instructions.
5.2. Types
5.2.1. Fundamental Types
In PTX, the fundamental types reflect the native data types supported by the target architectures. A fundamental type specifies both a basic type and a size. Register variables are always of a fundamental type, and instructions operate on these types. The same type-size specifiers are used for both variable definitions and for typing instructions, so their names are intentionally short.
Table 8 lists the fundamental type specifiers for each basic type:
Basic Type  Fundamental Type Specifiers
Signed integer  .s8, .s16, .s32, .s64
Unsigned integer  .u8, .u16, .u32, .u64
Floating-point  .f16, .f16x2, .f32, .f64
Bits (untyped)  .b8, .b16, .b32, .b64
Predicate  .pred
Most instructions have one or more type specifiers, needed to fully specify instruction behavior. Operand types and sizes are checked against instruction types for compatibility.
Two fundamental types are compatible if they have the same basic type and are the same size. Signed and unsigned integer types are compatible if they have the same size. The bit-size type is compatible with any fundamental type having the same size.
In principle, all variables (aside from predicates) could be declared using only bitsize types, but typed variables enhance program readability and allow for better operand type checking.
5.2.2. Restricted Use of SubWord Sizes
The .u8, .s8, and .b8 instruction types are restricted to ld, st, and cvt instructions. The .f16 floating-point type is allowed only in conversions to and from .f32 and .f64 types, in half-precision floating-point instructions, and in texture fetch instructions. The .f16x2 floating-point type is allowed only in half-precision floating-point arithmetic instructions and texture fetch instructions.
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes.
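For example (a sketch; register names are illustrative, and %addr is assumed to hold a valid global address), an 8-bit value may be loaded into a 32-bit register and then converted using regular-width registers:

.reg .u64 %addr;
.reg .u32 %r1;
.reg .u16 %h1;

ld.global.u8 %r1, [%addr];  // 8-bit load; value held in a 32-bit register
cvt.u16.u8   %h1, %r1;      // convert, using only the low 8 bits of the source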
5.2.3. Alternate FloatingPoint Data Formats
The fundamental floatingpoint types supported in PTX have implicit bit representations that indicate the number of bits used to store exponent and mantissa. For example, the .f16 type indicates 5 bits reserved for exponent and 10 bits reserved for mantissa. In addition to the floatingpoint representations assumed by the fundamental types, PTX allows the following alternate floatingpoint data formats:
 bf16 data format:
 This data format is a 16-bit floating-point format with 8 bits for exponent and 7 bits for mantissa. A register variable containing bf16 data must be declared with .b16 type.
 tf32 data format:
 This data format is a special 32-bit floating-point format supported by the matrix multiply-and-accumulate instructions, with the same range as .f32 and reduced precision (>= 10 bits). The internal layout of tf32 format is implementation defined. PTX facilitates conversion from single-precision .f32 type to tf32 format. A register variable containing tf32 data must be declared with .b32 type.
Alternate data formats cannot be used as fundamental types. They are supported as source or destination formats by certain instructions.
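As an illustration (a sketch, assuming the tf32 conversion support in the cvt instruction, which requires sm_80 or later), a .f32 value is converted to tf32 format held in a .b32 register:

.reg .f32 %f1;
.reg .b32 %r1;
cvt.rna.tf32.f32 %r1, %f1;  // round to nearest; tf32 layout held in a .b32 register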
5.3. Texture Sampler and Surface Types
PTX includes builtin opaque types for defining texture, sampler, and surface descriptor variables. These types have named fields similar to structures, but all information about layout, field ordering, base address, and overall size is hidden to a PTX program, hence the term opaque. The use of these opaque types is limited to:
 Variable definition within global (module) scope and in kernel entry parameter lists.
 Static initialization of modulescope variables using commadelimited static assignment expressions for the named members of the type.
 Referencing textures, samplers, or surfaces via texture and surface load/store instructions (tex, suld, sust, sured).
 Retrieving the value of a named member via query instructions (txq, suq).
 Creating pointers to opaque variables using mov, e.g., mov.u64 reg, opaque_var;. The resulting pointer may be stored to and loaded from memory, passed as a parameter to functions, and dereferenced by texture and surface load, store, and query instructions, but the pointer cannot otherwise be treated as an address, i.e., accessing the pointer with ld and st instructions, or performing pointer arithmetic will result in undefined results.
 Opaque variables may not appear in initializers, e.g., to initialize a pointer to an opaque variable.
Indirect access to textures and surfaces using pointers to opaque variables is supported beginning with PTX ISA version 3.1 and requires target sm_20 or later.
Indirect access to textures is supported only in unified texture mode (see below).
The three builtin types are .texref, .samplerref, and .surfref. For working with textures and samplers, PTX has two modes of operation. In the unified mode, texture and sampler information is accessed through a single .texref handle. In the independent mode, texture and sampler information each have their own handle, allowing them to be defined separately and combined at the site of usage in the program. In independent mode, the fields of the .texref type that describe sampler properties are ignored, since these properties are defined by .samplerref variables.
Table 9 and Table 10 list the named members of each type for unified and independent texture modes. These members and their values have precise mappings to methods and values defined in the texture HW class as well as exposed values via the API.
Member  .texref values  .surfref values
width  in elements  in elements
height  in elements  in elements
depth  in elements  in elements
channel_data_type  enum type corresponding to source language API  enum type corresponding to source language API
channel_order  enum type corresponding to source language API  enum type corresponding to source language API
normalized_coords  0, 1  N/A
filter_mode  nearest, linear  N/A
addr_mode_0, addr_mode_1, addr_mode_2  wrap, mirror, clamp_ogl, clamp_to_edge, clamp_to_border  N/A
array_size  as number of textures in a texture array  as number of surfaces in a surface array
num_mipmap_levels  as number of levels in a mipmapped texture  N/A
num_samples  as number of samples in a multi-sample texture  N/A
memory_layout  N/A  1 for linear memory layout; 0 otherwise
5.3.1. Texture and Surface Properties
Fields width, height, and depth specify the size of the texture or surface in number of elements in each dimension.
The channel_data_type and channel_order fields specify these properties of the texture or surface using enumeration types corresponding to the source language API. For example, see Channel Data Type and Channel Order Fields for the OpenCL enumeration types currently supported in PTX.
5.3.2. Sampler Properties
The normalized_coords field indicates whether the texture or surface uses normalized coordinates in the range [0.0, 1.0) instead of unnormalized coordinates in the range [0, N). If no value is specified, the default is set by the runtime system based on the source language.
The filter_mode field specifies how the values returned by texture reads are computed based on the input texture coordinates.
The addr_mode_{0,1,2} fields define the addressing mode in each dimension, which determines how out-of-range coordinates are handled.
See the CUDA C++ Programming Guide for more details of these properties.
Member  .samplerref values  .texref values  .surfref values
width  N/A  in elements  in elements
height  N/A  in elements  in elements
depth  N/A  in elements  in elements
channel_data_type  N/A  enum type corresponding to source language API  enum type corresponding to source language API
channel_order  N/A  enum type corresponding to source language API  enum type corresponding to source language API
normalized_coords  N/A  0, 1  N/A
force_unnormalized_coords  0, 1  N/A  N/A
filter_mode  nearest, linear  ignored  N/A
addr_mode_0, addr_mode_1, addr_mode_2  wrap, mirror, clamp_ogl, clamp_to_edge, clamp_to_border  N/A  N/A
array_size  N/A  as number of textures in a texture array  as number of surfaces in a surface array
num_mipmap_levels  N/A  as number of levels in a mipmapped texture  N/A
num_samples  N/A  as number of samples in a multi-sample texture  N/A
memory_layout  N/A  N/A  1 for linear memory layout; 0 otherwise
In independent texture mode, the sampler properties are carried in an independent .samplerref variable, and these fields are disabled in the .texref variables. One additional sampler property, force_unnormalized_coords, is available in independent texture mode.
The force_unnormalized_coords field is a property of .samplerref variables that allows the sampler to override the texture header normalized_coords property. This field is defined only in independent texture mode. When True, the texture header setting is overridden and unnormalized coordinates are used; when False, the texture header setting is used.
The force_unnormalized_coords property is used in compiling OpenCL; in OpenCL, the property of normalized coordinates is carried in sampler headers. To compile OpenCL to PTX, texture headers are always initialized with normalized_coords set to True, and the OpenCL samplerbased normalized_coords flag maps (negated) to the PTXlevel force_unnormalized_coords flag.
Variables using these types may be declared at module scope or within kernel entry parameter lists. At module scope, these variables must be in the .global state space. As kernel parameters, these variables are declared in the .param state space.
Example
.global .texref     my_texture_name;
.global .samplerref my_sampler_name;
.global .surfref    my_surface_name;
When declared at module scope, the types may be initialized using a list of static expressions assigning values to the named members.
Example
.global .texref tex1;
.global .samplerref tsamp1 = { addr_mode_0 = clamp_to_border,
                               filter_mode = nearest };
5.3.3. Channel Data Type and Channel Order Fields
The channel_data_type and channel_order fields have enumeration types corresponding to the source language API. Currently, OpenCL is the only source language that defines these fields. Table 11 and Table 12 show the enumeration values defined in OpenCL version 1.0 for channel data type and channel order.
CL_SNORM_INT8  0x10D0 
CL_SNORM_INT16  0x10D1 
CL_UNORM_INT8  0x10D2 
CL_UNORM_INT16  0x10D3 
CL_UNORM_SHORT_565  0x10D4 
CL_UNORM_SHORT_555  0x10D5 
CL_UNORM_INT_101010  0x10D6 
CL_SIGNED_INT8  0x10D7 
CL_SIGNED_INT16  0x10D8 
CL_SIGNED_INT32  0x10D9 
CL_UNSIGNED_INT8  0x10DA 
CL_UNSIGNED_INT16  0x10DB 
CL_UNSIGNED_INT32  0x10DC 
CL_HALF_FLOAT  0x10DD 
CL_FLOAT  0x10DE 
5.4. Variables
In PTX, a variable declaration describes both the variable's type and its state space. In addition to fundamental types, PTX supports types for simple aggregate objects such as vectors and arrays.
5.4.1. Variable Declarations
All storage for data is specified with variable declarations. Every variable must reside in one of the state spaces enumerated in the previous section.
A variable declaration names the space in which the variable resides, its type and size, its name, an optional array size, an optional initializer, and an optional fixed address for the variable.
Predicate variables may only be declared in the register state space.
Examples
.global .u32 loc;
.reg    .s32 i;
.const  .f32 bias[] = {-1.0, 1.0};
.global .u8  bg[4] = {0, 0, 0, 0};
.reg    .v4 .f32 accel;
.reg    .pred p, q, r;
5.4.2. Vectors
Limited-length vector types are supported. Vectors of length 2 and 4 of any non-predicate fundamental type can be declared by prefixing the type with .v2 or .v4. Vectors must be based on a fundamental type, and they may reside in the register space. Vectors cannot exceed 128 bits in length; for example, .v4 .f64 is not allowed. Three-element vectors may be handled by using a .v4 vector, where the fourth element provides padding. This is a common case for three-dimensional grids, textures, etc.
Examples
.global .v4 .f32 V;   // a length-4 vector of floats
.shared .v2 .u16 uv;  // a length-2 vector of unsigned ints
.global .v4 .b8  v;   // a length-4 vector of bytes
By default, vector variables are aligned to a multiple of their overall size (vector length times basetype size), to enable vector load and store instructions which require addresses aligned to a multiple of the access size.
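For example (a sketch; V and the register names are illustrative), a length-4 vector of floats can be moved with single vector load and store instructions, each a 16-byte access requiring a 16-byte aligned address:

.global .v4 .f32 V;
.reg .f32 %a, %b, %c, %d;

ld.global.v4.f32 {%a, %b, %c, %d}, [V];
st.global.v4.f32 [V], {%a, %b, %c, %d};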
5.4.3. Array Declarations
Array declarations are provided to allow the programmer to reserve space. To declare an array, the variable name is followed with dimensional declarations similar to fixedsize array declarations in C. The size of each dimension is a constant expression.
Examples
.local  .u16 kernel[19][19];
.shared .u8  mailbox[128];
The size of the array specifies how many elements should be reserved. For the declaration of array kernel above, 19*19 = 361 halfwords are reserved, for a total of 722 bytes.
When declared with an initializer, the first dimension of the array may be omitted. The size of the first array dimension is determined by the number of elements in the array initializer.
Examples
.global .u32 index[] = { 0, 1, 2, 3, 4, 5, 6, 7 }; .global .s32 offset[][2] = { {1, 0}, {0, 1}, {1, 0}, {0, 1} };
Array index has eight elements, and array offset is a 4x2 array.
5.4.4. Initializers
Declared variables may specify an initial value using a syntax similar to C/C++, where the variable name is followed by an equals sign and the initial value or values for the variable. A scalar takes a single value, while vectors and arrays take nested lists of values inside of curly braces (the nesting matches the dimensionality of the declaration).
As in C, array initializers may be incomplete, i.e., the number of initializer elements may be less than the extent of the corresponding array dimension, with remaining array locations initialized to the default value for the specified array type.
Examples
.const .f32 vals[8] = { 0.33, 0.25, 0.125 };
.global .s32 x[3][2] = { {1,2}, {3} };

is equivalent to

.const .f32 vals[8] = { 0.33, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0, 0.0 };
.global .s32 x[3][2] = { {1,2}, {3,0}, {0,0} };
Currently, variable initialization is supported only for constant and global state spaces. Variables in constant and global state spaces with no explicit initializer are initialized to zero by default. Initializers are not allowed in external variable declarations.
Variable names appearing in initializers represent the address of the variable; this can be used to statically initialize a pointer to a variable. Initializers may also contain var+offset expressions, where offset is a byte offset added to the address of var. Only variables in .global or .const state spaces may be used in initializers. By default, the resulting address is the offset in the variable's state space (as is the case when taking the address of a variable with a mov instruction). An operator, generic(), is provided to create a generic address for variables used in initializers.
Starting with PTX ISA version 7.1, an operator mask() is provided, where mask is an integer immediate. The only allowed expressions in the mask() operator are integer constant expressions and symbol expressions representing the address of a variable. The mask() operator extracts n consecutive bits from the expression used in the initializer and inserts these bits at the lowest position of the initialized variable. The number n and the starting position of the bits to be extracted are specified by the integer immediate mask. PTX ISA version 7.1 supports extracting only a single byte, starting at a byte boundary, from the address of a variable. PTX ISA version 7.3 additionally supports integer constant expressions as operands of the mask() operator.
Supported values for mask are: 0xFF, 0xFF00, 0xFF0000, 0xFF000000, 0xFF00000000, 0xFF0000000000, 0xFF000000000000, 0xFF00000000000000.
Examples
.const  .u32 foo = 42;
.global .u32 bar[] = { 2, 3, 5 };
.global .u32 p1 = foo;          // offset of foo in .const space
.global .u32 p2 = generic(foo); // generic address of foo

// array of generic-address pointers to elements of bar
.global .u32 parr[] = { generic(bar), generic(bar)+4, generic(bar)+8 };

// examples using the mask() operator are pruned for brevity
.global .u8 addr[]  = { 0xff(foo), 0xff00(foo), 0xff0000(foo), ... };
.global .u8 addr2[] = { 0xff(foo+4), 0xff00(foo+4), 0xff0000(foo+4), ... };
.global .u8 addr3[] = { 0xff(generic(foo)), 0xff00(generic(foo)), ... };
.global .u8 addr4[] = { 0xff(generic(foo)+4), 0xff00(generic(foo)+4), ... };

// mask() operator with integer constant expression
.global .u8 addr5[] = { 0xFF(1000 + 546), 0xFF00(131187), ... };
Device function names appearing in initializers represent the address of the first instruction in the function; this can be used to initialize a table of function pointers to be used with indirect calls. Beginning in PTX ISA version 3.1, kernel function names can be used as initializers, e.g., to initialize a table of kernel function pointers to be used with CUDA Dynamic Parallelism to launch kernels from the GPU. See the CUDA Dynamic Parallelism Programming Guide for details.
Labels cannot be used in initializers.
Variables that hold addresses of variables or functions should be of type .u8, .u32, or .u64.
Type .u8 is allowed only if the mask() operator is used.
Initializers are allowed for all types except .f16, .f16x2 and .pred.
Examples
.global .s32 n = 10;
.global .f32 blur_kernel[][3] = { {.05,.1,.05}, {.1,.4,.1}, {.05,.1,.05} };

.global .u32 foo[] = { 2, 3, 5, 7, 9, 11 };
.global .u64 ptr  = generic(foo);    // generic address of foo[0]
.global .u64 ptr2 = generic(foo)+8;  // generic address of foo[2]
5.4.5. Alignment
Byte alignment of storage for all addressable variables can be specified in the variable declaration. Alignment is specified using an optional .align bytecount specifier immediately following the state-space specifier. The variable will be aligned to an address which is an integer multiple of bytecount. The alignment value bytecount must be a power of two. For arrays, alignment specifies the address alignment for the starting address of the entire array, not for individual elements.
The default alignment for scalar and array variables is to a multiple of the basetype size. The default alignment for vector variables is to a multiple of the overall vector size.
Examples
// allocate array at 4-byte aligned address. Elements are bytes.
.const .align 4 .b8 bar[8] = {0,0,0,0,2,0,0,0};
Note that all PTX instructions that access memory require that the address be aligned to a multiple of the access size. The access size of a memory instruction is the total number of bytes accessed in memory. For example, the access size of ld.v4.b32 is 16 bytes, while the access size of atom.f16x2 is 4 bytes.
5.4.6. Parameterized Variable Names
Since PTX supports virtual registers, it is quite common for a compiler frontend to generate a large number of register names. Rather than require explicit declaration of every name, PTX supports a syntax for creating a set of variables having a common prefix string appended with integer suffixes.
.reg .b32 %r<100>; // declare %r0, %r1, ..., %r99
This shorthand syntax may be used with any of the fundamental types and with any state space, and may be preceded by an alignment specifier. Array variables cannot be declared this way, nor are initializers permitted.
5.4.7. Variable Attributes
Variables may be declared with an optional .attribute directive, which allows specifying special attributes of variables. The keyword .attribute is followed by an attribute specification inside parentheses. Multiple attributes are separated by commas.
Variable Attribute Directive: .attribute describes the .attribute directive.
5.4.8. Variable Attribute Directive: .attribute
.attribute
Variable attributes
Description
Used to specify special attributes of a variable.
Following attributes are supported.
 .managed
 The .managed attribute specifies that the variable will be allocated at a location in the unified virtual memory environment, where the host and other devices in the system can reference the variable directly. This attribute can only be used with variables in the .global state space. See the CUDA UVM-Lite Programming Guide for details.
PTX ISA Notes
 Introduced in PTX ISA version 4.0.
Target ISA Notes
 .managed attribute requires sm_30 or higher.
Examples
.global .attribute(.managed) .s32 g;
.global .attribute(.managed) .u64 x;
6. Instruction Operands
6.1. Operand Type Information
All operands in instructions have a known type from their declarations. Each operand type must be compatible with the type determined by the instruction template and instruction type. There is no automatic conversion between types.
The bit-size type is compatible with every type having the same size. Integer types of a common size are compatible with each other. Operands having a type different from, but compatible with, the instruction type are silently cast to the instruction type.
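For example (a sketch with illustrative register names), a .b32 register may be used directly where an instruction expects a 32-bit integer type:

.reg .b32 %b1;
.reg .s32 %s1, %s2;

add.s32 %s2, %s1, %b1;  // %b1 is silently cast to .s32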
6.2. Source Operands
The source operands are denoted in the instruction descriptions by the names a, b, and c. PTX describes a load-store machine, so operands for ALU instructions must all be in variables declared in the .reg register state space. For most operations, the sizes of the operands must be consistent.
The cvt (convert) instruction takes a variety of operand types and sizes, as its job is to convert from nearly any data type to any other data type (and size).
The ld, st, mov, and cvt instructions copy data from one location to another. Instructions ld and st move data from/to addressable state spaces to/from registers. The mov instruction copies data between registers.
Most instructions have an optional predicate guard that controls conditional execution, and a few instructions have additional predicate source operands. Predicate operands are denoted by the names p, q, r, s.
6.3. Destination Operands
PTX instructions that produce a single result store the result in the field denoted by d (for destination) in the instruction descriptions. The result operand is a scalar or vector variable in the register state space.
6.4. Using Addresses, Arrays, and Vectors
Using scalar variables as operands is straightforward. The interesting capabilities begin with addresses, arrays, and vectors.
6.4.1. Addresses as Operands
All the memory instructions take an address operand that specifies the memory location being accessed. This addressable operand is one of:
 [var]
 the name of an addressable variable var.
 [reg]
 an integer or bit-size type register reg containing a byte address.
 [reg+immOff]
 a sum of register reg containing a byte address plus a constant integer byte offset (signed, 32-bit).
 [var+immOff]
 a sum of the address of addressable variable var plus a constant integer byte offset (signed, 32-bit).
 [immAddr]
 an immediate absolute byte address (unsigned, 32-bit).
 var[immOff]
 an array element as described in Arrays as Operands.
The register containing an address may be declared as a bit-size type or integer type.
The access size of a memory instruction is the total number of bytes accessed in memory. For example, the access size of ld.v4.b32 is 16 bytes, while the access size of atom.f16x2 is 4 bytes.
The address must be naturally aligned to a multiple of the access size. If an address is not properly aligned, the resulting behavior is undefined. For example, among other things, the access may proceed by silently masking off low-order address bits to achieve proper rounding, or the instruction may fault.
The address size may be either 32-bit or 64-bit. Addresses are zero-extended to the specified width as needed, and truncated if the register width exceeds the state space address width for the target architecture.
Address arithmetic is performed using integer arithmetic and logical instructions. Examples include pointer arithmetic and pointer comparisons. All addresses and address computations are byte-based; there is no support for C-style pointer arithmetic.
The mov instruction can be used to move the address of a variable into a pointer. The address is an offset in the state space in which the variable is declared. Load and store operations move data between registers and locations in addressable state spaces. The syntax is similar to that used in many assembly languages, where scalar variables are simply named and addresses are dereferenced by enclosing the address expression in square brackets. Address expressions include variable names, address registers, address register plus byte offset, and immediate address expressions which evaluate at compile-time to a constant address.
Here are a few examples:
.shared .u16  x;
.reg    .u16  r0;
.global .v4 .f32 V;
.reg    .v4 .f32 W;
.const  .s32  tbl[256];
.reg    .b32  p;
.reg    .s32  q;

ld.shared.u16    r0, [x];
ld.global.v4.f32 W, [V];
ld.const.s32     q, [tbl+12];
mov.u32          p, tbl;
6.4.1.1. Generic Addressing
If a memory instruction does not specify a state space, the operation is performed using generic addressing. The state spaces .const, Kernel Function Parameters (.param), .local and .shared are modeled as windows within the generic address space. Each window is defined by a window base and a window size that is equal to the size of the corresponding state space. A generic address maps to global memory unless it falls within the window for const, local, or shared memory. The Kernel Function Parameters (.param) window is contained within the .global window. Within each window, a generic address maps to an address in the underlying state space by subtracting the window base from the generic address.
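A sketch of generic versus state-space-specific access (variable and register names are illustrative); the cvta instruction converts a state-space address into its generic counterpart:

```
.shared .u32 sh;
.reg    .u64 %gaddr;
.reg    .u32 %v;

cvta.shared.u64 %gaddr, sh;   // generic address inside the .shared window
ld.u32          %v, [%gaddr]; // generic load; maps back to shared memory
ld.shared.u32   %v, [sh];     // equivalent state-space-specific load
```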
6.4.2. Arrays as Operands
Arrays of all types can be declared, and the identifier becomes an address constant in the space where the array is declared. The size of the array is a constant in the program.
Array elements can be accessed using an explicitly calculated byte address, or by indexing into the array using square-bracket notation. The expression within square brackets is either a constant integer, a register variable, or a simple register with constant offset expression, where the offset is a constant expression that is either added or subtracted from a register variable. If more complicated indexing is desired, it must be written as an address calculation prior to use. Examples are:
ld.global.u32  s, a[0];
ld.global.u32  s, a[N-1];
mov.u32        s, a[1];  // move address of a[1] into s
6.4.3. Vectors as Operands
Vector operands are supported by a limited subset of instructions, which include mov, ld, st, and tex. Vectors may also be passed as arguments to called functions.
Vector elements can be extracted from the vector with the suffixes .x, .y, .z and .w, as well as the typical color fields .r, .g, .b and .a.
A brace-enclosed list is used for pattern matching to pull apart vectors.
.reg .v4 .f32 V;
.reg .f32 a, b, c, d;
mov.v4.f32 {a,b,c,d}, V;
Vector loads and stores can be used to implement wide loads and stores, which may improve memory performance. The registers in the load/store operations can be a vector, or a brace-enclosed list of similarly typed scalars. Here are examples:
ld.global.v4.f32 {a,b,c,d}, [addr+16];
ld.global.v2.u32 V2, [addr+8];
Elements in a brace-enclosed vector, say {Ra, Rb, Rc, Rd}, correspond to extracted elements as follows:
Ra = V.x = V.r
Rb = V.y = V.g
Rc = V.z = V.b
Rd = V.w = V.a
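A short sketch of element extraction using the suffixes (register names are illustrative):

```
.reg .v4 .f32 V;
.reg .f32 a, b;

mov.f32 a, V.x;   // read the first element (equivalently V.r)
mov.f32 b, V.w;   // read the fourth element (equivalently V.a)
```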
6.4.4. Labels and Function Names as Operands
Labels can be used only in bra and brx.idx instructions, and function names can be used only in call instructions. Function names can also be used in the mov instruction to get the address of the function into a register, for use in an indirect call.
Beginning in PTX ISA version 3.1, the mov instruction may be used to take the address of kernel functions, to be passed to a system call that initiates a kernel launch from the GPU. This feature is part of the support for CUDA Dynamic Parallelism. See the CUDA Dynamic Parallelism Programming Guide for details.
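A hedged sketch of an indirect call through a function address; the function, prototype label, and register names are all illustrative, and the exact .callprototype form is described under the call instruction:

```
// assume myfunc has prototype (.param .u32 out) myfunc (.param .u32 in)
.reg .u64 %fptr;
mov.u64 %fptr, myfunc;               // take the address of function myfunc

Fproto: .callprototype (.param .u32 _) _ (.param .u32 _);
call (%out), %fptr, (%in), Fproto;   // indirect call through the register
```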
6.5. Type Conversion
All operands to all arithmetic, logic, and data movement instructions must be of the same type and size, except for operations where changing the size and/or type is part of the definition of the instruction. Operands of different sizes or types must be converted prior to the operation.
6.5.1. Scalar Conversions
Table 13 shows what precision and format the cvt instruction uses given operands of differing types. For example, if a cvt.s32.u16 instruction is given a u16 source operand and s32 as a destination operand, the u16 is zero-extended to s32.
Conversions to floating-point that are beyond the range of floating-point numbers are represented with the maximum floating-point value (IEEE 754 Inf for f32 and f64, and the maximum finite value, approximately 65,504, for f16).
                           Destination Format
Source   s8      s16     s32     s64     u8      u16     u32     u64     f16    f32    f64
s8       -       sext    sext    sext    -       sext    sext    sext    s2f    s2f    s2f
s16      chop^1  -       sext    sext    chop^1  -       sext    sext    s2f    s2f    s2f
s32      chop^1  chop^1  -       sext    chop^1  chop^1  -       sext    s2f    s2f    s2f
s64      chop^1  chop^1  chop    -       chop^1  chop^1  chop    -       s2f    s2f    s2f
u8       -       zext    zext    zext    -       zext    zext    zext    u2f    u2f    u2f
u16      chop^1  -       zext    zext    chop^1  -       zext    zext    u2f    u2f    u2f
u32      chop^1  chop^1  -       zext    chop^1  chop^1  -       zext    u2f    u2f    u2f
u64      chop^1  chop^1  chop    -       chop^1  chop^1  chop    -       u2f    u2f    u2f
f16      f2s     f2s     f2s     f2s     f2u     f2u     f2u     f2u     -      f2f    f2f
f32      f2s     f2s     f2s     f2s     f2u     f2u     f2u     f2u     f2f    -      f2f
f64      f2s     f2s     f2s     f2s     f2u     f2u     f2u     f2u     f2f    f2f    -
Notes
sext = sign-extend; zext = zero-extend; chop = keep only low bits that fit; s2f = signed-to-float; f2s = float-to-signed; u2f = unsigned-to-float; f2u = float-to-unsigned; f2f = float-to-float; - = no conversion needed.
^1 If the destination register is wider than the destination format, the result is extended to the destination register width after chopping. The type of extension (sign or zero) is based on the destination format. For example, cvt.s16.u32 targeting a 32-bit register first chops to 16-bit, then sign-extends to 32-bit.
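A few cvt instances illustrating entries from the table (register names are illustrative):

```
.reg .u16 %h;
.reg .s32 %w;
.reg .f32 %f;

cvt.s32.u16     %w, %h;   // zext: u16 source zero-extended to s32
cvt.u16.s32     %h, %w;   // chop: keep only the low 16 bits
cvt.rn.f32.s32  %f, %w;   // s2f: signed-to-float (rounding modifier required)
cvt.rzi.s32.f32 %w, %f;   // f2s: float-to-signed, truncating toward zero
```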
6.5.2. Rounding Modifiers
Conversion instructions may specify a rounding modifier. In PTX, there are four integer rounding modifiers and five floating-point rounding modifiers. Table 14 and Table 15 summarize the rounding modifiers.
Table 14. Floating-point rounding modifiers
Modifier  Description
.rn       mantissa LSB rounds to nearest even
.rna      mantissa LSB rounds to nearest, ties away from zero
.rz       mantissa LSB rounds towards zero
.rm       mantissa LSB rounds towards negative infinity
.rp       mantissa LSB rounds towards positive infinity

Table 15. Integer rounding modifiers
Modifier  Description
.rni      round to nearest integer, choosing even integer if source is equidistant between two integers
.rzi      round to nearest integer in the direction of zero
.rmi      round to nearest integer in the direction of negative infinity
.rpi      round to nearest integer in the direction of positive infinity
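A sketch of the two modifier families in use (register names are illustrative):

```
.reg .f32 %f;
.reg .s32 %i;
.reg .s64 %l;

cvt.rni.f32.f32 %f, %f;  // round to the nearest integer value, ties to even
cvt.rmi.f32.f32 %f, %f;  // round down to an integer value (floor)
cvt.rzi.s32.f32 %i, %f;  // float-to-int conversion, truncate toward zero
cvt.rm.f32.s64  %f, %l;  // int-to-float, round toward negative infinity
```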
6.6. Operand Costs
Operands from different state spaces affect the speed of an operation. Registers are fastest, while global memory is slowest. Much of the delay to memory can be hidden in a number of ways. The first is to have multiple threads of execution so that the hardware can issue a memory operation and then switch to other execution. Another way to hide latency is to issue the load instructions as early as possible, as execution is not blocked until the desired result is used in a subsequent (in time) instruction. The register in a store operation is available much more quickly. Table 16 gives estimates of the costs of using different kinds of memory.
7. Abstracting the ABI
Rather than expose details of a particular calling convention, stack layout, and Application Binary Interface (ABI), PTX provides a slightly higher-level abstraction and supports multiple ABI implementations. In this section, we describe the features of PTX needed to achieve this hiding of the ABI. These include syntax for function definitions, function calls, parameter passing, support for variadic functions (varargs), and memory allocated on the stack (alloca).
Refer to the PTX Writer's Guide to Interoperability for details on generating PTX compliant with the Application Binary Interface (ABI) for the CUDA^{®} architecture.
7.1. Function Declarations and Definitions
In PTX, functions are declared and defined using the .func directive. A function declaration specifies an optional list of return parameters, the function name, and an optional list of input parameters; together these specify the function's interface, or prototype. A function definition specifies both the interface and the body of the function. A function must be declared or defined prior to being called.
The simplest function has no parameters or return values, and is represented in PTX as follows:
.func foo
{
    ...
    ret;
}
...
call foo;
...
Here, execution of the call instruction transfers control to foo, implicitly saving the return address. Execution of the ret instruction within foo transfers control to the instruction following the call.
Scalar and vector basetype input and return parameters may be represented simply as register variables. At the call, arguments may be register variables or constants, and return values may be placed directly into register variables. The arguments and return variables at the call must have type and size that match the callee's corresponding formal parameters.
Example
.func (.reg .u32 %res) inc_ptr ( .reg .u32 %ptr, .reg .u32 %inc )
{
    add.u32 %res, %ptr, %inc;
    ret;
}
...
call (%r1), inc_ptr, (%r1,4);
...
When using the ABI, .reg state space parameters must be at least 32 bits in size. Sub-word scalar objects in the source language should be promoted to 32-bit registers in PTX, or use the .param state space byte arrays described next. For example, consider the following C structure, passed by value:
struct { double dbl; char c[4]; };
In PTX, this structure will be flattened into a byte array. Since memory accesses are required to be aligned to a multiple of the access size, the structure in this example will be a 12-byte array with 8-byte alignment so that accesses to the .f64 field are aligned. The .param state space is used to pass the structure by value:
Example
.func (.reg .s32 out) bar (.reg .s32 x, .param .align 8 .b8 y[12])
{
    .reg .f64 f1;
    .reg .b32 c1, c2, c3, c4;
    ...
    ld.param.f64 f1, [y+0];
    ld.param.b8  c1, [y+8];
    ld.param.b8  c2, [y+9];
    ld.param.b8  c3, [y+10];
    ld.param.b8  c4, [y+11];
    ...
    ... // computation using x,f1,c1,c2,c3,c4;
}

{
    .param .b8 .align 8 py[12];
    ...
    st.param.b64 [py+ 0], %rd;
    st.param.b8  [py+ 8], %rc1;
    st.param.b8  [py+ 9], %rc2;
    st.param.b8  [py+10], %rc1;
    st.param.b8  [py+11], %rc2;
    // scalar args in .reg space, byte array in .param space
    call (%out), bar, (%x, py);
    ...
In this example, note that .param space variables are used in two ways. First, a .param variable y is used in function definition bar to represent a formal parameter. Second, a .param variable py is declared in the body of the calling function and used to set up the structure being passed to bar.
The following is a conceptual way to think about the .param state space use in device functions.
 The .param state space is used by the caller to set values that will be passed to a called function and/or to receive return values from a called function. Typically, a .param byte array is used to collect together fields of a structure being passed by value.
 The .param state space is used by the callee to receive parameter values and/or pass return values back to the caller.
The following restrictions apply to parameter passing.
 Arguments may be .param variables, .reg variables, or constants.
 In the case of .param space formal parameters that are byte arrays, the argument must also be a .param space byte array with matching type, size, and alignment. A .param argument must be declared within the local scope of the caller.
 In the case of .param space formal parameters that are basetype scalar or vector variables, the corresponding argument may be either a .param or .reg space variable with matching type and size, or a constant that can be represented in the type of the formal parameter.
 In the case of .reg space formal parameters, the corresponding argument may be either a .param or .reg space variable of matching type and size, or a constant that can be represented in the type of the formal parameter.
 In the case of .reg space formal parameters, the register must be at least 32 bits in size.
 All st.param instructions used for passing arguments to a function call must immediately precede the corresponding call instruction, and any ld.param instruction used for collecting return values must immediately follow the call instruction, without any intervening control flow. st.param and ld.param instructions used for argument passing cannot be predicated. This enables compiler optimization and ensures that the .param variable does not consume extra space in the caller's frame beyond that needed by the ABI. The .param variable simply allows a mapping to be made at the call site between data that may be in multiple locations (e.g., a structure being manipulated by the caller may be located in registers and memory) and something that can be passed as a parameter or return value to the callee.
 Input and return parameters may be .param variables or .reg variables.
 Parameters in .param memory must be aligned to a multiple of 1, 2, 4, 8, or 16 bytes.
 Parameters in the .reg state space must be at least 32 bits in size.
 The .reg state space can be used to receive and return base-type scalar and vector values, including sub-word size objects when compiling in non-ABI mode. Supporting the .reg state space provides legacy support.
Note that the choice of .reg or .param state space for parameter passing has no impact on whether the parameter is ultimately passed in physical registers or on the stack. The mapping of parameters to physical registers and stack locations depends on the ABI definition and the order, size, and alignment of parameters.
7.1.1. Changes from PTX ISA Version 1.x
In PTX ISA version 1.x, formal parameters were restricted to .reg state space, and there was no support for array parameters. Objects such as C structures were flattened and passed or returned using multiple registers. PTX ISA version 1.x supports multiple return values for this purpose.
Beginning with PTX ISA version 2.0, formal parameters may be in either .reg or .param state space, and .param space parameters support arrays. For targets sm_20 or higher, PTX restricts functions to a single return value, and a .param byte array should be used to return objects that do not fit into a register. PTX continues to support multiple return registers for sm_1x targets.
PTX ISA versions prior to 3.0 permitted variables in .reg and .local state spaces to be defined at module scope. When compiling to use the ABI, PTX ISA version 3.0 and later disallows modulescoped .reg and .local variables and restricts their use to within function scope. When compiling without use of the ABI, modulescoped .reg and .local variables are supported as before. When compiling legacy PTX code (ISA versions prior to 3.0) containing modulescoped .reg or .local variables, the compiler silently disables use of the ABI.
7.2. Variadic Functions
PTX ISA version 6.0 supports passing an unsized array parameter to a function, which can be used to implement variadic functions.
Refer to Kernel and Function Directives: .func for details.
7.3. Alloca
PTX provides the alloca instruction for allocating storage at runtime on the per-thread local memory stack. Memory allocated with alloca can be accessed with ld.local and st.local instructions using the pointer returned by alloca.
To facilitate deallocation of memory allocated with alloca, PTX provides two additional instructions: stacksave, which saves the current value of the stack pointer into a register, and stackrestore, which restores the stack pointer from a saved value.
alloca, stacksave, and stackrestore instructions are described in Stack Manipulation Instructions.
 Preview Feature:

Stack manipulation instructions alloca, stacksave and stackrestore are preview features in PTX ISA version 7.3. All details are subject to change with no guarantees of backward compatibility on future PTX ISA versions or SM architectures.
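With those caveats, a usage sketch (operand forms are illustrative; refer to Stack Manipulation Instructions for the exact syntax):

```
.reg .u64 %sp, %buf;
.reg .u32 %r1;

stacksave.u64    %sp;         // remember the current stack pointer
alloca.u64       %buf, 16;    // allocate 16 bytes on the local stack
st.local.u32     [%buf], %r1; // access the allocation via st.local/ld.local
...
stackrestore.u64 %sp;         // release everything allocated since the save
```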
8. Memory Consistency Model
In multithreaded executions, the side-effects of memory operations performed by each thread become visible to other threads in a partial and non-identical order. This means that any two operations may appear to happen in no order, or in different orders, to different threads. The axioms introduced by the memory consistency model specify exactly which contradictions are forbidden between the orders observed by different threads.
In the absence of any constraint, each read operation returns the value committed by some write operation to the same memory location, including the initial write to that memory location. The memory consistency model effectively constrains the set of such candidate writes from which a read operation can return a value.
8.1. Scope and applicability of the model
The constraints specified under this model apply to PTX programs with any PTX ISA version number, running on sm_70 or later architectures.
The memory consistency model does not apply to texture (including ld.global.nc) and surface accesses.
8.1.1. Limitations on atomicity at system scope
When communicating with the host CPU, 64-bit strong operations with system scope may not be performed atomically on some systems. For more details on atomicity guarantees to host memory, see the CUDA Programming Guide.
8.2. Memory operations
The fundamental storage unit in the PTX memory model is a byte, consisting of 8 bits. Each state space available to a PTX program is a sequence of contiguous bytes in memory. Every byte in a PTX state space has a unique address relative to all threads that have access to the same state space.
Each PTX memory instruction specifies an address operand and a data type. The address operand contains a virtual address that gets converted to a physical address during memory access. The physical address and the size of the data type together define a physical memory location, which is the range of bytes starting from the physical address and extending up to the size of the data type in bytes.
The memory consistency model specification uses the terms "address" or "memory address" to indicate a virtual address, and the term "memory location" to indicate a physical memory location.
Each PTX memory instruction also specifies the operation, either a read, a write, or an atomic read-modify-write, to be performed on all the bytes in the corresponding memory location.
8.2.1. Overlap
Two memory locations are said to overlap when the starting address of one location is within the range of bytes constituting the other location. Two memory operations are said to overlap when they specify the same virtual address and the corresponding memory locations overlap. The overlap is said to be complete when both memory locations are identical, and it is said to be partial otherwise.
8.2.2. Aliases
Two distinct virtual addresses are said to be aliases if they map to the same memory location.
8.2.3. Vector Datatypes
The memory consistency model relates operations executed on memory locations with scalar datatypes, which have a maximum size and alignment of 64 bits. Memory operations with a vector datatype are modelled as a set of equivalent memory operations with a scalar datatype, executed in an unspecified order on the elements in the vector.
8.2.4. Packed Datatypes
The packed datatype .f16x2 consists of two .f16 values accessed in adjacent memory locations. Memory operations on the packed datatype .f16x2 are modelled as a pair of equivalent memory operations with a scalar datatype .f16, executed in an unspecified order on each element of the packed data.
8.2.5. Initialization
Each byte in memory is initialized by a hypothetical write W0 executed before starting any thread in the program. If the byte is included in a program variable, and that variable has an initial value, then W0 writes the corresponding initial value for that byte; else W0 is assumed to have written an unknown but constant value to the byte.
8.3. State spaces
The relations defined in the memory consistency model are independent of state spaces. In particular, causality order closes over all memory operations across all the state spaces. But the sideeffect of a memory operation in one state space can be observed directly only by operations that also have access to the same state space. This further constrains the synchronizing effect of a memory operation in addition to scope. For example, the synchronizing effect of the PTX instruction ld.relaxed.shared.sys is identical to that of ld.relaxed.shared.cta, since no thread outside the same CTA can execute an operation that accesses the same memory location.
8.4. Operation types
For simplicity, the rest of the document refers to the following operation types, instead of mentioning specific instructions that give rise to them.
Operation type           Instruction/Operation
atomic operation         An atom or red instruction.
read operation           All variants of the ld instruction and the atom instruction (but not the red instruction).
write operation          All variants of the st instruction, and atomic operations if they result in a write.
memory operation         A read or write operation.
volatile operation       An instruction with the .volatile qualifier.
acquire operation        A memory operation with the .acquire or .acq_rel qualifier.
release operation        A memory operation with the .release or .acq_rel qualifier.
memory fence operation   A membar, fence.sc, or fence.acq_rel instruction.
proxy fence operation    A fence.proxy or membar.proxy instruction.
strong operation         A memory fence operation, or a memory operation with a .relaxed, .acquire, .release, .acq_rel, or .volatile qualifier.
weak operation           An ld or st instruction with a .weak qualifier.
synchronizing operation  A bar instruction, fence operation, release operation, or acquire operation.
8.5. Scope
Each strong operation must specify a scope, which is the set of threads that may interact directly with that operation and establish any of the relations described in the memory consistency model. There are three scopes:
Scope  Description
.cta   The set of all threads executing in the same CTA as the current thread.
.gpu   The set of all threads in the current program executing on the same compute device as the current thread. This also includes other kernel grids invoked by the host program on the same compute device.
.sys   The set of all threads in the current program, including all kernel grids invoked by the host program on all compute devices, and all threads constituting the host program itself.
Note that the warp is not a scope; the CTA is the smallest collection of threads that qualifies as a scope in the memory consistency model.
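A sketch of how scope qualifiers attach to strong operations (variable and register names are illustrative):

```
.global .u32 ctr;
.reg    .u32 %old;

atom.release.cta.global.add.u32 %old, [ctr], 1; // ordered within the CTA
atom.release.gpu.global.add.u32 %old, [ctr], 1; // ordered device-wide
red.relaxed.sys.global.add.u32  [ctr], 1;       // system scope: CPU and all GPUs
```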
8.6. Proxies
A memory proxy, or proxy, is an abstract label applied to a method of memory access. When two memory operations use distinct methods of memory access, they are said to use different proxies.
Memory operations as defined in Operation types use the generic method of memory access, i.e., the generic proxy. Other operations such as texture and surface accesses all use distinct methods of memory access, also distinct from the generic method.
A proxy fence is required to synchronize memory operations across different proxies. Although virtual aliases use the generic method of memory access, accessing memory through distinct virtual addresses behaves as if using different proxies, so a proxy fence is required to establish memory ordering between such operations.
8.7. Morally strong operations
Two operations are said to be morally strong relative to each other if they satisfy all of the following conditions:
 The operations are related in program order (i.e., they are both executed by the same thread), or each operation is strong and specifies a scope that includes the thread executing the other operation.
 Both operations are performed via the same proxy.
 If both are memory operations, then they overlap completely.
Most (but not all) of the axioms in the memory consistency model depend on relations between morally strong operations.
8.7.1. Conflict and Data-races
Two overlapping memory operations are said to conflict when at least one of them is a write.
Two conflicting memory operations are said to be in a data-race if they are not related in causality order and they are not morally strong.
8.7.2. Limitations on Mixed-size Data-races
A data-race between operations that overlap completely is called a uniform-size data-race, while a data-race between operations that overlap partially is called a mixed-size data-race.
The axioms in the memory consistency model do not apply if a PTX program contains one or more mixed-size data-races. But these axioms are sufficient to describe the behavior of a PTX program with only uniform-size data-races.
Atomicity of mixed-size RMW operations
In any program with or without mixed-size data-races, the following property holds for every pair of overlapping atomic operations A1 and A2 such that each specifies a scope that includes the other: either the read-modify-write operation specified by A1 is performed completely before A2 is initiated, or vice versa. This property holds irrespective of whether the two operations A1 and A2 overlap partially or completely.
8.8. Release and Acquire Patterns
Some sequences of instructions give rise to patterns that participate in memory synchronization as described later. The release pattern makes prior operations from the current thread^{1} visible to some operations from other threads. The acquire pattern makes some operations from other threads visible to later operations from the current thread.
A release pattern on a location M consists of one of the following:
 a release operation on M
 e.g.: st.release [M]; or atom.acq_rel [M];
 or a release operation on M followed by a strong write on M in program order
 e.g.: st.release [M]; st.relaxed [M];
 or a memory fence followed by a strong write on M in program order
 e.g.: fence; st.relaxed [M];
Any memory synchronization established by a release pattern only affects operations occurring in program order before the first instruction in that pattern.
An acquire pattern on a location M consists of one of the following:
 an acquire operation on M
 e.g.: ld.acquire [M]; or atom.acq_rel [M];
 or a strong read on M followed by an acquire operation on M in program order
 e.g.: ld.relaxed [M]; ld.acquire [M];
 or a strong read on M followed by a memory fence in program order
 e.g.: ld.relaxed [M]; fence;
Any memory synchronization established by an acquire pattern only affects operations occurring in program order after the last instruction in that pattern.
^{1} For both release and acquire patterns, this effect is further extended to operations in other threads through the transitive nature of causality order.
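A classic message-passing sketch built from these patterns (addresses and register names are illustrative):

```
// Producer thread:
st.global.u32             [data], %r1;  // payload written with a weak store
st.release.gpu.global.u32 [flag], 1;    // release pattern publishes the flag

// Consumer thread:
ld.acquire.gpu.global.u32 %r2, [flag];  // acquire pattern observes the flag
setp.eq.u32               %p, %r2, 1;
@%p ld.global.u32         %r3, [data];  // sees the payload whenever flag is 1
```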
8.9. Ordering of memory operations
The sequence of operations performed by each thread is captured as program order, while memory synchronization across threads is captured as causality order. The visibility of the side-effects of memory operations to other memory operations is captured as communication order. The memory consistency model defines the contradictions that are disallowed between communication order on the one hand, and causality order and program order on the other.
8.9.1. Program Order
The program order relates all operations performed by a thread to the order in which a sequential processor will execute instructions in the corresponding PTX source. It is a transitive relation that forms a total order over the operations performed by the thread, but does not relate operations from different threads.
8.9.2. Observation Order
Observation order relates a write W to a read R through an optional sequence of atomic read-modify-write operations.
A write W precedes a read R in observation order if:
 R and W are morally strong and R reads the value written by W, or
 For some atomic operation Z, W precedes Z and Z precedes R in observation order.
8.9.3. Fence-SC Order
The Fence-SC order is an acyclic partial order, determined at runtime, that relates every pair of morally strong fence.sc operations.
8.9.4. Memory synchronization
Synchronizing operations performed by different threads synchronize with each other at runtime as described here. The effect of such synchronization is to establish causality order across threads.
 A fence.sc operation X synchronizes with a fence.sc operation Y if X precedes Y in the Fence-SC order.
 A bar.sync or bar.red or bar.arrive operation synchronizes with a bar.sync or bar.red operation executed on the same barrier.
 A release pattern X synchronizes with an acquire pattern Y, if a write operation in X precedes a read operation in Y in observation order, and the first operation in X and the last operation in Y are morally strong.
API synchronization
A synchronizes relation can also be established by certain CUDA APIs.
- Completion of a task enqueued in a CUDA stream synchronizes with the start of the following task in the same stream, if any.
- For purposes of the above, recording or waiting on a CUDA event in a stream, or causing a cross-stream barrier to be inserted due to cudaStreamLegacy, enqueues tasks in the associated streams even if there are no direct side effects. An event record task synchronizes with matching event wait tasks, and a barrier arrival task synchronizes with matching barrier wait tasks.
- Start of a CUDA kernel synchronizes with start of all threads in the kernel. End of all threads in a kernel synchronizes with end of the kernel.
- Start of a CUDA graph synchronizes with start of all source nodes in the graph. Completion of all sink nodes in a CUDA graph synchronizes with completion of the graph. Completion of a graph node synchronizes with start of all nodes with a direct dependency on it.
- Start of a CUDA API call to enqueue a task synchronizes with start of the task.
- Completion of the last task enqueued to a stream, if any, synchronizes with return from cudaStreamSynchronize. Completion of the most recently enqueued matching event record task, if any, synchronizes with return from cudaEventSynchronize. Synchronizing a CUDA device or context behaves as if synchronizing all streams in the context, including ones that have been destroyed.
- Returning cudaSuccess from an API to query a CUDA handle, such as a stream or event, behaves the same as return from the matching synchronization API.
In addition to establishing a synchronizes relation, the CUDA API synchronization mechanisms above also participate in proxy-preserved base causality order.
8.9.5. Causality Order
Causality order captures how memory operations become visible across threads through synchronizing operations. The axiom “Causality” uses this order to constrain the set of write operations from which a read operation may read a value.
Relations in the causality order primarily consist of relations in Base causality order^{1}, which is a transitive order, determined at runtime.
Base causality order
An operation X precedes an operation Y in base causality order if:
- X precedes Y in program order, or
- X synchronizes with Y, or
- For some operation Z:
  - X precedes Z in program order and Z precedes Y in base causality order, or
  - X precedes Z in base causality order and Z precedes Y in program order, or
  - X precedes Z in base causality order and Z precedes Y in base causality order.
Proxy-preserved base causality order
A memory operation X precedes a memory operation Y in proxy-preserved base causality order if X precedes Y in base causality order, and:
- X and Y are performed to the same address, using the generic proxy, or
- X and Y are performed to the same address, using the same proxy, and by the same thread block, or
- X and Y are aliases and there is an alias proxy fence along the base causality path from X to Y.
Causality order
Causality order combines base causality order with some non-transitive relations as follows:
An operation X precedes an operation Y in causality order if:
- X precedes Y in proxy-preserved base causality order, or
- For some operation Z, X precedes Z in observation order, and Z precedes Y in proxy-preserved base causality order.
^{1} The transitivity of base causality order accounts for the “cumulativity” of synchronizing operations.
8.9.6. Coherence Order
There exists a partial transitive order that relates overlapping write operations, determined at runtime, called the coherence order^{1}. Two overlapping write operations are related in coherence order if they are morally strong or if they are related in causality order. Two overlapping writes are unrelated in coherence order if they are in a data-race, which gives rise to the partial nature of coherence order.
^{1 }Coherence order cannot be observed directly since it consists entirely of write operations. It may be observed indirectly by its use in constraining the set of candidate writes that a read operation may read from.
8.9.7. Communication Order
The communication order is a non-transitive order, determined at runtime, that relates write operations to other overlapping memory operations.
- A write W precedes an overlapping read R in communication order if R returns the value of any byte that was written by W.
- A write W precedes a write W’ in communication order if W precedes W’ in coherence order.
- A read R precedes an overlapping write W in communication order if, for any byte accessed by both R and W, R returns the value written by a write W’ that precedes W in coherence order.
Communication order captures the visibility of memory operations: when a memory operation X1 precedes a memory operation X2 in communication order, X1 is said to be visible to X2.
8.10. Axioms
8.10.1. Coherence
If a write W precedes an overlapping write W’ in causality order, then W must precede W’ in coherence order.
8.10.2. FenceSC
Fence-SC order cannot contradict causality order. For a pair of morally strong fence.sc operations F1 and F2, if F1 precedes F2 in causality order, then F1 must precede F2 in Fence-SC order.
8.10.3. Atomicity
Single-Copy Atomicity
Conflicting morally strong operations are performed with single-copy atomicity. When a read R and a write W are morally strong, then the following two communications cannot both exist in the same execution, for the set of bytes accessed by both R and W:
- R reads any byte from W.
- R reads any byte from any write W’ which precedes W in coherence order.
Atomicity of read-modify-write (RMW) operations
When an atomic operation A and a write W overlap and are morally strong, then the following two communications cannot both exist in the same execution, for the set of bytes accessed by both A and W:
- A reads any byte from a write W’ that precedes W in coherence order.
- A follows W in coherence order.
8.10.4. No Thin Air
Values may not appear "out of thin air": an execution cannot speculatively produce a value in such a way that the speculation becomes self-satisfying through chains of instruction dependencies and inter-thread communication. This matches both programmer intuition and hardware reality, but it is necessary to state explicitly when performing formal analysis.
Litmus Test: Load Buffering
.global .u32 x = 0;
.global .u32 y = 0;

T1:
  A1: ld.global.u32 %r0, [x];
  B1: st.global.u32 [y], %r0;
T2:
  A2: ld.global.u32 %r1, [y];
  B2: st.global.u32 [x], %r1;

FINAL STATE: x == 0 AND y == 0
The litmus test known as "LB" (Load Buffering) checks for such forbidden values arising out of thin air. Two threads T1 and T2 each read from a first variable and copy the observed result into a second variable, with the first and second variable exchanged between the threads. If each variable is initially zero, the final result shall also be zero. If A1 reads from B2 and A2 reads from B1, then the values passing through the memory operations in this example form a cycle: A1 -> B1 -> A2 -> B2 -> A1. Only the values x == 0 and y == 0 satisfy this cycle. If any of the memory operations in this example were to speculatively associate a different value with the corresponding memory location, then such a speculation would become self-fulfilling, and is hence forbidden.
8.10.5. Sequential Consistency Per Location
Within any set of overlapping memory operations that are pairwise morally strong, communication order cannot contradict program order, i.e., a concatenation of program order between overlapping operations and morally strong relations in communication order cannot result in a cycle. This ensures that each program slice of overlapping pairwise morally strong operations is strictly sequentially consistent.
Litmus Test: CoRR
.global .u32 x = 0;

T1:
  W1: st.global.relaxed.sys.u32 [x], 1;
T2:
  R1: ld.global.relaxed.u32 %r0, [x];
  R2: ld.global.relaxed.u32 %r1, [x];

IF %r0 == 1 THEN %r1 == 1
The litmus test "CoRR" (Coherent Read-Read) demonstrates one consequence of this guarantee. A thread T1 executes a write W1 on a location x, and a thread T2 executes two (or an infinite sequence of) reads R1 and R2 on the same location x. No other writes are executed on x, except the one modelling the initial value. The operations W1, R1 and R2 are pairwise morally strong. If R1 reads from W1, then the subsequent read R2 must also observe the same value. If R2 observed the initial value of x instead, then this would form a sequence of morally strong relations R2 -> W1 -> R1 in communication order that contradicts the program order R1 -> R2 in thread T2. Hence R2 cannot read the initial value of x in such an execution.
8.10.6. Causality
Relations in communication order cannot contradict causality order. This constrains the set of candidate write operations that a read operation may read from:
- If a read R precedes an overlapping write W in causality order, then R cannot read from W.
- If a write W precedes an overlapping read R in causality order, then for any byte accessed by both R and W, R cannot read from any write W’ that precedes W in coherence order.
Litmus Test: Message Passing
.global .u32 data = 0;
.global .u32 flag = 0;

T1:
  W1: st.global.u32 [data], 1;
  F1: fence.sys;
  W2: st.global.relaxed.sys.u32 [flag], 1;
T2:
  R1: ld.global.relaxed.sys.u32 %r0, [flag];
  F2: fence.sys;
  R2: ld.global.u32 %r1, [data];

IF %r0 == 1 THEN %r1 == 1
The litmus test known as "MP" (Message Passing) represents the essence of typical synchronization algorithms. A vast majority of useful programs can be reduced to sequenced applications of this pattern.
Thread T1 first writes to a data variable and then to a flag variable while a second thread T2 first reads from the flag variable and then from the data variable. The operations on the flag are morally strong and the memory operations in each thread are separated by a fence, and these fences are morally strong.
If R1 observes W2, then the release pattern “F1; W2” synchronizes with the acquire pattern “R1; F2”. This establishes the causality order W1 -> F1 -> W2 -> R1 -> F2 -> R2. The Causality axiom then guarantees that R2 cannot read from any write that precedes W1 in coherence order. In the absence of any other writes in this example, R2 must read from W1.
Litmus Test: CoWR
// These addresses are aliases
.global .u32 data_alias_1;
.global .u32 data_alias_2;

T1:
  W1: st.global.u32 [data_alias_1], 1;
  F1: fence.proxy.alias;
  R1: ld.global.u32 %r1, [data_alias_2];

FINAL STATE: %r1 == 1
Virtual aliases require an alias proxy fence along the synchronization path.
Litmus Test: Store Buffering
The litmus test known as "SB" (Store Buffering) demonstrates the sequential consistency enforced by the fence.sc. A thread T1 writes to a first variable, and then reads the value of a second variable, while a second thread T2 writes to the second variable and then reads the value of the first variable. The memory operations in each thread are separated by fence.sc instructions, and these fences are morally strong.
.global .u32 x = 0;
.global .u32 y = 0;

T1:
  W1: st.global.u32 [x], 1;
  F1: fence.sc.sys;
  R1: ld.global.u32 %r0, [y];
T2:
  W2: st.global.u32 [y], 1;
  F2: fence.sc.sys;
  R2: ld.global.u32 %r1, [x];

FINAL STATE: %r0 == 1 OR %r1 == 1
In any execution, either F1 precedes F2 in Fence-SC order, or vice versa. If F1 precedes F2 in Fence-SC order, then F1 synchronizes with F2. This establishes the causality order W1 -> F1 -> F2 -> R2. The Causality axiom ensures that R2 cannot read from any write that precedes W1 in coherence order. In the absence of any other write to that variable, R2 must read from W1. Similarly, in the case where F2 precedes F1 in Fence-SC order, R1 must read from W2. If each fence.sc in this example were replaced by a fence.acq_rel instruction, this outcome would not be guaranteed: there may be an execution where the write from each thread remains unobserved by the other thread, i.e., an execution where both R1 and R2 return the initial value “0” for variables y and x respectively.
9. Instruction Set
9.1. Format and Semantics of Instruction Descriptions
This section describes each PTX instruction. In addition to the name and the format of the instruction, the semantics are described, followed by some examples that attempt to show several possible instantiations of the instruction.
9.2. PTX Instructions
- @p opcode;
- @p opcode a;
- @p opcode d, a;
- @p opcode d, a, b;
- @p opcode d, a, b, c;
For instructions that create a result value, the d operand is the destination operand, while a, b, and c are source operands.
The setp instruction writes two destination registers. We use a | symbol to separate multiple destination registers.
setp.lt.s32 p|q, a, b; // p = (a < b); q = !(a < b);
For some instructions the destination operand is optional. A bit bucket operand denoted with an underscore (_) may be used in place of a destination register.
9.3. Predicated Execution
All instructions have an optional guard predicate which controls conditional execution of the instruction. The syntax to specify conditional execution is to prefix an instruction with @{!}p, where p is a predicate variable, optionally negated. Instructions without a guard predicate are executed unconditionally.
Predicates are declared with the .pred type specifier:
.reg .pred p, q, r;
Predicates are most commonly set as the result of a comparison performed by the setp instruction.
if (i < n)
    j = j + 1;

This sequence can use predicated execution:

    setp.lt.s32  p, i, n;    // p = (i < n)
@p  add.s32      j, j, 1;    // if i < n, add 1 to j

or a conditional branch:

    setp.lt.s32  p, i, n;    // compare i to n
@!p bra          L1;         // if False, branch over
    add.s32      j, j, 1;
L1: ...
9.3.1. Comparisons
9.3.1.1. Integer and BitSize Comparisons
The signed integer comparisons are the traditional eq (equal), ne (not-equal), lt (less-than), le (less-than-or-equal), gt (greater-than), and ge (greater-than-or-equal). The unsigned comparisons are eq, ne, lo (lower), ls (lower-or-same), hi (higher), and hs (higher-or-same). The bit-size comparisons are eq and ne; ordering comparisons are not defined for bit-size types.
Table 19 shows the operators for signed integer, unsigned integer, and bit-size types.
9.3.1.2. Floating Point Comparisons
The ordered floating-point comparisons are eq, ne, lt, le, gt, and ge. If either operand is NaN, the result is False. Table 20 lists the floating-point comparison operators.
Meaning                               Floating-Point Operator
a == b && !isNaN(a) && !isNaN(b)      eq
a != b && !isNaN(a) && !isNaN(b)      ne
a <  b && !isNaN(a) && !isNaN(b)      lt
a <= b && !isNaN(a) && !isNaN(b)      le
a >  b && !isNaN(a) && !isNaN(b)      gt
a >= b && !isNaN(a) && !isNaN(b)      ge
To aid comparison operations in the presence of NaN values, unordered floating-point comparisons are provided: equ, neu, ltu, leu, gtu, and geu. If both operands are numeric values (not NaN), then the comparison has the same result as its ordered counterpart. If either operand is NaN, then the result of the comparison is True.
Table 21 lists the floating-point comparison operators accepting NaN values.
Meaning                               Floating-Point Operator
a == b || isNaN(a) || isNaN(b)        equ
a != b || isNaN(a) || isNaN(b)        neu
a <  b || isNaN(a) || isNaN(b)        ltu
a <= b || isNaN(a) || isNaN(b)        leu
a >  b || isNaN(a) || isNaN(b)        gtu
a >= b || isNaN(a) || isNaN(b)        geu
To test for NaN values, two operators num (numeric) and nan (isNaN) are provided. num returns True if both operands are numeric values (not NaN), and nan returns True if either operand is NaN. Table 22 lists the floating-point comparison operators testing for NaN values.
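The ordered/unordered distinction above can be sketched as a small Python model (not part of the PTX specification; the helper names lt_ord, lt_unord, num_op, and nan_op are illustrative, while PTX itself spells these lt, ltu, num, and nan):

```python
import math

def lt_ord(a, b):
    # Ordered lt: False if either operand is NaN.
    return not (math.isnan(a) or math.isnan(b)) and a < b

def lt_unord(a, b):
    # Unordered ltu: True if either operand is NaN.
    return math.isnan(a) or math.isnan(b) or a < b

def num_op(a, b):
    # num: True iff both operands are numeric (not NaN).
    return not math.isnan(a) and not math.isnan(b)

def nan_op(a, b):
    # nan: True iff either operand is NaN.
    return math.isnan(a) or math.isnan(b)
```

The remaining unordered operators (equ, neu, leu, gtu, geu) follow the same pattern: the ordered comparison OR-ed with the NaN tests.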
9.3.2. Manipulating Predicates
Predicate values may be computed and manipulated using the following instructions: and, or, xor, not, and mov.
selp.u32 %r1,1,0,%p; // convert predicate to 32-bit value
9.4. Type Information for Instructions and Operands
Typed instructions must have a type-size modifier. For example, the add instruction requires type and size information to properly perform the addition operation (signed, unsigned, float, different sizes), and this information must be specified as a suffix to the opcode.
Example
.reg .u16 d, a, b;
add.u16 d, a, b; // perform a 16-bit unsigned add
Some instructions require multiple type-size modifiers, most notably the data conversion instruction cvt. It requires separate type-size modifiers for the result and source, and these are placed in the same order as the operands. For example:
.reg .u16 a;
.reg .f32 d;
cvt.f32.u16 d, a; // convert 16-bit unsigned to 32-bit float
In general, an operand's type must agree with the corresponding instruction-type modifier. The rules for operand and instruction type conformance are as follows:
- Bit-size types agree with any type of the same size.
- Signed and unsigned integer types agree provided they have the same size, and integer operands are silently cast to the instruction type if needed. For example, an unsigned integer operand used in a signed integer instruction will be treated as a signed integer by the instruction.
- Floating-point types agree only if they have the same size; i.e., they must match exactly.
Table 23 summarizes these type checking rules.
                       Operand Type
                  .bX      .sX      .uX      .fX
Instruction  .bX  okay     okay     okay     okay
Type         .sX  okay     okay     okay     invalid
             .uX  okay     okay     okay     invalid
             .fX  okay     invalid  invalid  okay
Example
// 64-bit arithmetic right shift; shift amount 'b' is .u32
shr.s64 d,a,b;
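The silent integer cast described in the rules above can be modeled in Python; as_signed is a hypothetical helper (not a PTX construct) showing how an unsigned bit pattern is reinterpreted under two's complement:

```python
def as_signed(value, bits):
    # Reinterpret an unsigned bit pattern as a two's-complement signed value.
    value &= (1 << bits) - 1
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value
```

For example, a .u32 register holding 0xFFFFFFFF consumed by a signed 32-bit instruction behaves as -1.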
9.4.1. Operand Size Exceeding InstructionType Size
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes. The operand type checking rules are relaxed for bit-size and integer (signed and unsigned) instruction types; floating-point instruction types still require that the operand type-size matches exactly, unless the operand is of bit-size type.
When a source operand has a size that exceeds the instruction-type size, the source data is truncated (chopped) to the appropriate number of bits specified by the instruction type-size.
Table 24 summarizes the relaxed type-checking rules for source operands. Note that some combinations may still be invalid for a particular instruction; for example, the cvt instruction does not support .bX instruction types, so those rows are invalid for cvt.
                     Source Operand Type
Instruction
Type    b8   b16  b32  b64  s8   s16  s32  s64  u8   u16  u32  u64  f16  f32  f64
b8      -    chop chop chop -    chop chop chop -    chop chop chop chop chop chop
b16     inv  -    chop chop inv  -    chop chop inv  -    chop chop -    chop chop
b32     inv  inv  -    chop inv  inv  -    chop inv  inv  -    chop inv  -    chop
b64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  -
s8      -    chop chop chop -    chop chop chop -    chop chop chop inv  inv  inv
s16     inv  -    chop chop inv  -    chop chop inv  -    chop chop inv  inv  inv
s32     inv  inv  -    chop inv  inv  -    chop inv  inv  -    chop inv  inv  inv
s64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv
u8      -    chop chop chop -    chop chop chop -    chop chop chop inv  inv  inv
u16     inv  -    chop chop inv  -    chop chop inv  -    chop chop inv  inv  inv
u32     inv  inv  -    chop inv  inv  -    chop inv  inv  -    chop inv  inv  inv
u64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv
f16     inv  -    chop chop inv  inv  inv  inv  inv  inv  inv  inv  -    inv  inv
f32     inv  inv  -    chop inv  inv  inv  inv  inv  inv  inv  inv  inv  -    inv
f64     inv  inv  inv  -    inv  inv  inv  inv  inv  inv  inv  inv  inv  inv  -

Notes:
chop = keep only low bits that fit; "-" = allowed, but no conversion needed; inv = invalid, parse error.

When a destination operand has a size that exceeds the instruction-type size, the destination data is zero- or sign-extended to the size of the destination register. If the corresponding instruction type is signed integer, the data is sign-extended; otherwise, the data is zero-extended.
Table 25 summarizes the relaxed type-checking rules for destination operands.
                     Destination Operand Type
Instruction
Type    b8   b16  b32  b64  s8   s16  s32  s64  u8   u16  u32  u64  f16  f32  f64
b8      -    zext zext zext -    zext zext zext -    zext zext zext zext zext zext
b16     inv  -    zext zext inv  -    zext zext inv  -    zext zext -    zext zext
b32     inv  inv  -    zext inv  inv  -    zext inv  inv  -    zext inv  -    zext
b64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  -
s8      -    sext sext sext -    sext sext sext -    sext sext sext inv  inv  inv
s16     inv  -    sext sext inv  -    sext sext inv  -    sext sext inv  inv  inv
s32     inv  inv  -    sext inv  inv  -    sext inv  inv  -    sext inv  inv  inv
s64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv
u8      -    zext zext zext -    zext zext zext -    zext zext zext inv  inv  inv
u16     inv  -    zext zext inv  -    zext zext inv  -    zext zext inv  inv  inv
u32     inv  inv  -    zext inv  inv  -    zext inv  inv  -    zext inv  inv  inv
u64     inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv  -    inv  inv  inv
f16     inv  -    zext zext inv  inv  inv  inv  inv  inv  inv  inv  -    inv  inv
f32     inv  inv  -    zext inv  inv  inv  inv  inv  inv  inv  inv  inv  -    inv
f64     inv  inv  inv  -    inv  inv  inv  inv  inv  inv  inv  inv  inv  inv  -

Notes:
sext = sign-extend; zext = zero-extend; "-" = allowed, but no conversion needed; inv = invalid, parse error.
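The three conversions in the source and destination tables can be sketched as a minimal Python model (the function names chop, zext, and sext mirror the table entries but are illustrative, not PTX syntax):

```python
def chop(value, n):
    # Source operand wider than the instruction type: keep only the low n bits.
    return value & ((1 << n) - 1)

def zext(value, n, dst_bits):
    # Destination wider than the instruction type, unsigned/bit-size: zero-extend.
    return value & ((1 << n) - 1)

def sext(value, n, dst_bits):
    # Destination wider than the instruction type, signed: sign-extend bit n-1.
    value &= (1 << n) - 1
    if value >= 1 << (n - 1):
        value |= ((1 << dst_bits) - 1) ^ ((1 << n) - 1)
    return value
```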

9.5. Divergence of Threads in Control Constructs
Threads in a CTA execute together, at least in appearance, until they come to a conditional control construct such as a conditional branch, conditional function call, or conditional return. If threads execute down different control flow paths, the threads are called divergent. If all of the threads act in unison and follow a single control flow path, the threads are called uniform. Both situations occur often in programs.
A CTA with divergent threads may have lower performance than a CTA with uniformly executing threads, so it is important to have divergent threads reconverge as soon as possible. All control constructs are assumed to be divergent points unless the control-flow instruction is marked as uniform, using the .uni suffix. For divergent control flow, the optimizing code generator automatically determines points of reconvergence. Therefore, a compiler or code author targeting PTX can ignore the issue of divergent threads, but has the opportunity to improve performance by marking branch points as uniform when the compiler or author can guarantee that the branch point is non-divergent.
9.6. Semantics
The goal of the semantic description of an instruction is to describe the results in all cases in as simple language as possible. The semantics are described using C, until C is not expressive enough.
9.6.1. Machine-Specific Semantics of 16-bit Code
A PTX program may execute on a GPU with either a 16-bit or a 32-bit data path. When executing on a 32-bit data path, 16-bit registers in PTX are mapped to 32-bit physical registers, and 16-bit computations are promoted to 32-bit computations. This can lead to computational differences between code run on a 16-bit machine versus the same code run on a 32-bit machine, since the promoted computation may have bits in the high-order halfword of registers that are not present in 16-bit physical registers. These extra precision bits can become visible at the application level, for example, by a right-shift instruction.
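A small Python sketch of the effect just described, assuming a 16-bit add whose carry survives when the computation is promoted to 32 bits (illustrative values, not tied to any particular GPU):

```python
a, b = 0xFFFF, 0x0001

# On a true 16-bit data path, the add wraps to 16 bits before any later use.
sum16 = (a + b) & 0xFFFF         # wraps to 0x0000
vis16 = sum16 >> 1               # right shift sees no carry

# On a 32-bit data path, the promoted add keeps the carry in bit 16,
# and a right shift makes that extra precision bit visible.
sum32 = (a + b) & 0xFFFFFFFF     # 0x10000, carry preserved
vis32 = (sum32 >> 1) & 0xFFFF    # low halfword now differs
```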
At the PTX language level, one solution would be to define semantics for 16-bit code that are consistent with execution on a 16-bit data path. This approach introduces a performance penalty for 16-bit code executing on a 32-bit data path, since the translated code would require many additional masking instructions to suppress extra precision bits in the high-order halfword of 32-bit registers.
Rather than introduce a performance penalty for 16-bit code running on 32-bit GPUs, the semantics of 16-bit instructions in PTX are machine-specific. A compiler or programmer may choose to enforce portable, machine-independent 16-bit semantics by adding explicit conversions to 16-bit values at appropriate points in the program to guarantee portability of the code. However, for many performance-critical applications, this is not desirable, and for many applications the difference in execution is preferable to limiting performance.
9.7. Instructions
All PTX instructions may be predicated. In the following descriptions, the optional guard predicate is omitted from the syntax.
9.7.1. Integer Arithmetic Instructions
Integer arithmetic instructions operate on the integer types in register and constant immediate forms. The integer arithmetic instructions are:
- add
- sub
- mul
- mad
- mul24
- mad24
- sad
- div
- rem
- abs
- neg
- min
- max
- popc
- clz
- bfind
- fns
- brev
- bfe
- bfi
- bmsk
- szext
- dp4a
- dp2a
9.7.1.1. Integer Arithmetic Instructions: add
add
Add two values.
Syntax
add.type       d, a, b;
add{.sat}.s32  d, a, b;     // .sat applies only to .s32
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Performs addition and writes the resulting value into a destination register.
Semantics
d = a + b;
Notes
Saturation modifier:
.sat: limits result to MININT..MAXINT (no overflow) for the size of the operation. Applies only to .s32 type.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
@p  add.u32     x,y,z;
    add.sat.s32 c,c,1;
9.7.1.2. Integer Arithmetic Instructions: sub
sub
Subtract one value from another.
Syntax
sub.type       d, a, b;
sub{.sat}.s32  d, a, b;     // .sat applies only to .s32
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Performs subtraction and writes the resulting value into a destination register.
Semantics
d = a - b;
Notes
.sat: limits result to MININT..MAXINT (no overflow) for the size of the operation. Applies only to .s32 type.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
sub.s32 c,a,b;
9.7.1.3. Integer Arithmetic Instructions: mul
mul
Multiply two values.
Syntax
mul.mode.type  d, a, b;
.mode = { .hi, .lo, .wide };
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Compute the product of two values.
Semantics
t = a * b;
n = bitwidth of type;
d = t;             // for .wide variant
d = t<2n-1..n>;    // for .hi variant
d = t<n-1..0>;     // for .lo variant
Notes
The type of the operation represents the types of the a and b operands. If .hi or .lo is specified, then d is the same size as a and b, and either the upper or lower half of the result is written to the destination register. If .wide is specified, then d is twice as wide as a and b to receive the full result of the multiplication.
The .wide suffix is supported only for 16-bit and 32-bit integer types.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
mul.wide.s16 fa,fxs,fys;   // 16*16 bits yields 32 bits
mul.lo.s16   fa,fxs,fys;   // 16*16 bits, save only the low 16 bits
mul.wide.s32 z,x,y;        // 32*32 bits, creates 64 bit result
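The three mul modes can be modeled in Python for unsigned operands (ptx_mul is an illustrative name, not PTX syntax; signed operands would additionally need two's-complement handling):

```python
def ptx_mul(a, b, n, mode):
    # t is the full 2n-bit product of two n-bit operands.
    t = (a & ((1 << n) - 1)) * (b & ((1 << n) - 1))
    if mode == "wide":
        return t                   # d is 2n bits wide
    if mode == "hi":
        return t >> n              # t<2n-1..n>
    return t & ((1 << n) - 1)      # "lo": t<n-1..0>
```

For example, a 16-bit multiply of 0xFFFF by 0xFFFF yields the 32-bit wide result 0xFFFE0001, whose hi and lo halves are 0xFFFE and 0x0001.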
9.7.1.4. Integer Arithmetic Instructions: mad
mad
Multiply two values, optionally extract the high or low half of the intermediate result, and add a third value.
Syntax
mad.mode.type   d, a, b, c;
mad.hi.sat.s32  d, a, b, c;
.mode = { .hi, .lo, .wide };
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Multiplies two values, optionally extracts the high or low half of the intermediate result, and adds a third value. Writes the result into a destination register.
Semantics
t = a * b;
n = bitwidth of type;
d = t + c;             // for .wide variant
d = t<2n-1..n> + c;    // for .hi variant
d = t<n-1..0> + c;     // for .lo variant
Notes
The type of the operation represents the types of the a and b operands. If .hi or .lo is specified, then d and c are the same size as a and b, and either the upper or lower half of the result is written to the destination register. If .wide is specified, then d and c are twice as wide as a and b to receive the result of the multiplication.
The .wide suffix is supported only for 16-bit and 32-bit integer types.
Saturation modifier:
.sat: limits result to MININT..MAXINT (no overflow) for the size of the operation. Applies only to .s32 type in .hi mode.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
@p mad.lo.s32 d,a,b,c;
   mad.lo.s32 r,p,q,r;
9.7.1.5. Integer Arithmetic Instructions: mul24
mul24
Multiply two 24-bit integer values.
Syntax
mul24.mode.type  d, a, b;
.mode = { .hi, .lo };
.type = { .u32, .s32 };
Description
Compute the product of two 24-bit integer values held in 32-bit source registers, and return either the high or low 32 bits of the 48-bit result.
Semantics
t = a * b;
d = t<47..16>;   // for .hi variant
d = t<31..0>;    // for .lo variant
Notes
Integer multiplication yields a result that is twice the size of the input operands, i.e., 48 bits.
mul24.hi performs a 24x24-bit multiply and returns the high 32 bits of the 48-bit result.
mul24.lo performs a 24x24-bit multiply and returns the low 32 bits of the 48-bit result.
All operands are of the same type and size.
mul24.hi may be less efficient on machines without hardware support for 24-bit multiply.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
mul24.lo.s32 d,a,b;   // low 32 bits of 24x24-bit signed multiply.
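The bit ranges t<47..16> and t<31..0> can be checked with a small Python model of the unsigned case (mul24_model is an illustrative name, not PTX syntax):

```python
def mul24_model(a, b, mode):
    # 24-bit operands held in 32-bit registers produce a 48-bit product t.
    t = (a & 0xFFFFFF) * (b & 0xFFFFFF)
    if mode == "hi":
        return (t >> 16) & 0xFFFFFFFF   # t<47..16>
    return t & 0xFFFFFFFF               # t<31..0>
```

Note that the .hi variant returns bits 47..16, not 47..32, so the two 32-bit halves overlap by 16 bits.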
9.7.1.6. Integer Arithmetic Instructions: mad24
mad24
Multiply two 24-bit integer values and add a third value.
Syntax
mad24.mode.type  d, a, b, c;
mad24.hi.sat.s32 d, a, b, c;
.mode = { .hi, .lo };
.type = { .u32, .s32 };
Description
Compute the product of two 24-bit integer values held in 32-bit source registers, and add a third, 32-bit value to either the high or low 32 bits of the 48-bit result. Return either the high or low 32 bits of the 48-bit result.
Semantics
t = a * b;
d = t<47..16> + c;   // for .hi variant
d = t<31..0> + c;    // for .lo variant
Notes
Integer multiplication yields a result that is twice the size of the input operands, i.e., 48 bits.
mad24.hi performs a 24x24-bit multiply and adds the high 32 bits of the 48-bit result to a third value.
mad24.lo performs a 24x24-bit multiply and adds the low 32 bits of the 48-bit result to a third value.
All operands are of the same type and size.
.sat: limits result of 32-bit signed addition to MININT..MAXINT (no overflow). Applies only to .s32 type in .hi mode.
mad24.hi may be less efficient on machines without hardware support for 24-bit multiply.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
mad24.lo.s32 d,a,b,c;   // low 32 bits of 24x24-bit signed multiply.
9.7.1.7. Integer Arithmetic Instructions: sad
sad
Sum of absolute differences.
Syntax
sad.type  d, a, b, c;
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Adds the absolute value of a-b to c and writes the resulting value into d.
Semantics
d = c + ((a < b) ? b-a : a-b);
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
sad.s32 d,a,b,c;
sad.u32 d,a,b,d;   // running sum
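The semantics line translates directly to Python (sad_model is an illustrative name, not PTX syntax):

```python
def sad_model(a, b, c):
    # d = c + ((a < b) ? b-a : a-b), i.e. c plus the absolute difference.
    return c + (b - a if a < b else a - b)
```

Accumulating the result back into the c operand, as in the sad.u32 d,a,b,d example above, produces a running sum of absolute differences.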
9.7.1.8. Integer Arithmetic Instructions: div
div
Divide one value by another.
Syntax
div.type  d, a, b;
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Divides a by b, stores result in d.
Semantics
d = a / b;
Notes
Division by zero yields an unspecified, machine-specific value.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
div.s32 b,n,i;
9.7.1.9. Integer Arithmetic Instructions: rem
rem
The remainder of integer division.
Syntax
rem.type  d, a, b;
.type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Divides a by b, and stores the remainder in d.
Semantics
d = a % b;
Notes
The behavior for negative numbers is machine-dependent and depends on whether divide rounds towards zero or negative infinity.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
rem.s32 x,x,8; // x = x%8;
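The machine dependence noted above reduces to whether the paired divide truncates toward zero or rounds toward negative infinity; the two candidate remainders can be sketched in Python (function names are illustrative):

```python
def rem_trunc(a, b):
    # Remainder when division rounds toward zero (C-style).
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return a - b * q

def rem_floor(a, b):
    # Remainder when division rounds toward negative infinity (Python-style).
    return a - b * (a // b)
```

The two agree whenever a and b have the same sign, and differ only for mixed-sign operands.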
9.7.1.10. Integer Arithmetic Instructions: abs
abs
Absolute value.
Syntax
abs.type  d, a;
.type = { .s16, .s32, .s64 };
Description
Take the absolute value of a and store it in d.
Semantics
d = |a|;
Notes
Only for signed integers.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
abs.s32 r0,a;
9.7.1.11. Integer Arithmetic Instructions: neg
neg
Arithmetic negate.
Syntax
neg.type d, a; .type = { .s16, .s32, .s64 };
Description
Negate the sign of a and store the result in d.
Semantics
d = -a;
Notes
Only for signed integers.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
neg.s32 r0,a;
9.7.1.12. Integer Arithmetic Instructions: min
min
Find the minimum of two values.
Syntax
min.type d, a, b; .type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Store the minimum of a and b in d.
Semantics
d = (a < b) ? a : b; // Integer (signed and unsigned)
Notes
Signed and unsigned differ.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
min.s32 r0,a,b; @p min.u16 h,i,j;
9.7.1.13. Integer Arithmetic Instructions: max
max
Find the maximum of two values.
Syntax
max.type d, a, b; .type = { .u16, .u32, .u64, .s16, .s32, .s64 };
Description
Store the maximum of a and b in d.
Semantics
d = (a > b) ? a : b; // Integer (signed and unsigned)
Notes
Signed and unsigned differ.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
Supported on all target architectures.
Examples
max.u32 d,a,b; max.s32 q,q,0;
9.7.1.14. Integer Arithmetic Instructions: popc
popc
Population count.
Syntax
popc.type d, a; .type = { .b32, .b64 };
Description
Count the number of one bits in a and place the resulting population count in 32-bit destination register d. Operand a has the instruction type and destination d has type .u32.
Semantics
.u32 d = 0; while (a != 0) { if (a & 0x1) d++; a = a >> 1; }
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
popc requires sm_20 or higher.
Examples
popc.b32 d, a; popc.b64 cnt, X; // cnt is .u32
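The semantics can be modeled in Python (illustrative sketch only; the function name and the `bits` parameter are invented here):

```python
def popc(a, bits=32):
    """Model of popc: count one bits in the low `bits` bits of a."""
    a &= (1 << bits) - 1
    d = 0
    while a != 0:
        if a & 0x1:
            d += 1
        a >>= 1
    return d
```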
9.7.1.15. Integer Arithmetic Instructions: clz
clz
Count leading zeros.
Syntax
clz.type d, a; .type = { .b32, .b64 };
Description
Count the number of leading zeros in a starting with the most-significant bit and place the result in 32-bit destination register d. Operand a has the instruction type, and destination d has type .u32. For .b32 type, the number of leading zeros is between 0 and 32, inclusively. For .b64 type, the number of leading zeros is between 0 and 64, inclusively.
Semantics
.u32 d = 0;
if (.type == .b32) { max = 32; mask = 0x80000000; }
else { max = 64; mask = 0x8000000000000000; }

while (d < max && ((a & mask) == 0)) {
    d++;
    a = a << 1;
}
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
clz requires sm_20 or higher.
Examples
clz.b32 d, a; clz.b64 cnt, X; // cnt is .u32
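A Python model of the loop above (illustrative only; the function name and `bits` parameter are invented here):

```python
def clz(a, bits=32):
    """Model of clz: leading zeros, scanning from the most-significant bit."""
    a &= (1 << bits) - 1
    mask = 1 << (bits - 1)
    d = 0
    while d < bits and (a & mask) == 0:
        d += 1
        a <<= 1
    return d
```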
9.7.1.16. Integer Arithmetic Instructions: bfind
bfind
Find most significant nonsign bit.
Syntax
bfind.type d, a;
bfind.shiftamt.type d, a;

.type = { .u32, .u64, .s32, .s64 };
Description
Find the bit position of the most significant non-sign bit in a and place the result in d. Operand a has the instruction type, and destination d has type .u32. For unsigned integers, bfind returns the bit position of the most significant 1. For signed integers, bfind returns the bit position of the most significant 0 for negative inputs and the most significant 1 for non-negative inputs.
If .shiftamt is specified, bfind returns the shift amount needed to left-shift the found bit into the most-significant bit position.
bfind returns 0xffffffff if no nonsign bit is found.
Semantics
msb = (.type==.u32 || .type==.s32) ? 31 : 63;

// negate negative signed inputs
if ( (.type==.s32 || .type==.s64) && (a & (1<<msb)) ) {
    a = ~a;
}

.u32 d = 0xffffffff;
for (.s32 i=msb; i>=0; i--) {
    if (a & (1<<i)) { d = i; break; }
}
if (.shiftamt && d != 0xffffffff) { d = msb - d; }
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
bfind requires sm_20 or higher.
Examples
bfind.u32 d, a; bfind.shiftamt.s64 cnt, X; // cnt is .u32
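The semantics translate to the following Python sketch (illustrative only; the function name and keyword parameters are invented here):

```python
def bfind(a, bits=32, signed=False, shiftamt=False):
    """Model of bfind: position of most significant non-sign bit, else 0xffffffff."""
    msb = bits - 1
    a &= (1 << bits) - 1
    if signed and a & (1 << msb):          # negate negative signed inputs
        a = ~a & ((1 << bits) - 1)
    d = 0xFFFFFFFF
    for i in range(msb, -1, -1):
        if a & (1 << i):
            d = i
            break
    if shiftamt and d != 0xFFFFFFFF:
        d = msb - d
    return d
```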
9.7.1.17. Integer Arithmetic Instructions: fns
fns
Find the nth set bit.
Syntax
fns.b32 d, mask, base, offset;
Description
Given a 32-bit value mask and an integer value base (between 0 and 31), find the nth (given by offset) set bit in mask from the base bit, and store the bit position in d. If not found, store 0xffffffff in d.
Operand mask has a 32-bit type. Operand base has .b32, .u32 or .s32 type. Operand offset has .s32 type. Destination d has type .b32.
Operand base must be <= 31, otherwise behavior is undefined.
Semantics
d = 0xffffffff;
if (offset == 0) {
    if (mask[base] == 1) {
        d = base;
    }
} else {
    pos = base;
    count = |offset| - 1;
    inc = (offset > 0) ? 1 : -1;

    while ((pos >= 0) && (pos < 32)) {
        if (mask[pos] == 1) {
            if (count == 0) {
                d = pos;
                break;
            } else {
                count = count - 1;
            }
        }
        pos = pos + inc;
    }
}
PTX ISA Notes
Introduced in PTX ISA version 6.0.
Target ISA Notes
fns requires sm_30 or higher.
Examples
fns.b32 d, 0xaaaaaaaa, 3, 1;   // d = 3
fns.b32 d, 0xaaaaaaaa, 3, -1;  // d = 3
fns.b32 d, 0xaaaaaaaa, 2, 1;   // d = 3
fns.b32 d, 0xaaaaaaaa, 2, -1;  // d = 1
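A Python model of the search (illustrative sketch, not NVIDIA code; a negative offset searches downward from base):

```python
def fns(mask, base, offset):
    """Model of fns: position of the nth set bit from `base`, else 0xffffffff."""
    if offset == 0:
        return base if (mask >> base) & 1 else 0xFFFFFFFF
    pos = base
    count = abs(offset) - 1
    inc = 1 if offset > 0 else -1
    while 0 <= pos < 32:
        if (mask >> pos) & 1:
            if count == 0:
                return pos
            count -= 1
        pos += inc
    return 0xFFFFFFFF
```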
9.7.1.18. Integer Arithmetic Instructions: brev
brev
Bit reverse.
Syntax
brev.type d, a; .type = { .b32, .b64 };
Description
Perform bitwise reversal of input.
Semantics
msb = (.type==.b32) ? 31 : 63;
for (i=0; i<=msb; i++) {
    d[i] = a[msb-i];
}
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
brev requires sm_20 or higher.
Examples
brev.b32 d, a;
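The bit-reversal loop above, as a Python sketch (illustrative only; the function name and `bits` parameter are invented here):

```python
def brev(a, bits=32):
    """Model of brev: d[i] = a[msb - i]."""
    a &= (1 << bits) - 1
    d = 0
    for i in range(bits):
        d |= ((a >> (bits - 1 - i)) & 1) << i
    return d
```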
9.7.1.19. Integer Arithmetic Instructions: bfe
bfe
Bit Field Extract.
Syntax
bfe.type d, a, b, c; .type = { .u32, .u64, .s32, .s64 };
Description
Extract bit field from a and place the zero or signextended result in d. Source b gives the bit field starting bit position, and source c gives the bit field length in bits.
Operands a and d have the same type as the instruction type. Operands b and c are type .u32, but are restricted to the 8-bit value range 0..255.
The sign bit of the extracted field is defined as:
 .u32, .u64:
 zero
 .s32, .s64:
 msb of extracted field, or msb of input a if the extracted field extends beyond the msb of a
If the bit field length is zero, the result is zero.
The destination d is padded with the sign bit of the extracted field. If the start position is beyond the msb of the input, the destination d is filled with the replicated sign bit of the extracted field.
Semantics
msb = (.type==.u32 || .type==.s32) ? 31 : 63;
pos = b & 0xff;  // pos restricted to 0..255 range
len = c & 0xff;  // len restricted to 0..255 range

if (.type==.u32 || .type==.u64 || len==0)
    sbit = 0;
else
    sbit = a[min(pos+len-1,msb)];

d = 0;
for (i=0; i<=msb; i++) {
    d[i] = (i<len && pos+i<=msb) ? a[pos+i] : sbit;
}
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
bfe requires sm_20 or higher.
Examples
bfe.b32 d,a,start,len;
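A Python model of the extraction (illustrative sketch; the function name and keyword parameters are invented here):

```python
def bfe(a, b, c, bits=32, signed=False):
    """Model of bfe: extract a c-bit field of a starting at bit position b."""
    msb = bits - 1
    pos = b & 0xFF
    ln = c & 0xFF
    a &= (1 << bits) - 1
    if not signed or ln == 0:
        sbit = 0
    else:
        sbit = (a >> min(pos + ln - 1, msb)) & 1
    d = 0
    for i in range(bits):
        if i < ln and pos + i <= msb:
            bit = (a >> (pos + i)) & 1
        else:
            bit = sbit                      # pad with the field's sign bit
        d |= bit << i
    return d
```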
9.7.1.20. Integer Arithmetic Instructions: bfi
bfi
Bit Field Insert.
Syntax
bfi.type f, a, b, c, d; .type = { .b32, .b64 };
Description
Align and insert a bit field from a into b, and place the result in f. Source c gives the starting bit position for the insertion, and source d gives the bit field length in bits.
Operands a, b, and f have the same type as the instruction type. Operands c and d are type .u32, but are restricted to the 8-bit value range 0..255.
If the bit field length is zero, the result is b.
If the start position is beyond the msb of the input, the result is b.
Semantics
msb = (.type==.b32) ? 31 : 63; pos = c & 0xff; // pos restricted to 0..255 range len = d & 0xff; // len restricted to 0..255 range f = b; for (i=0; i<len && pos+i<=msb; i++) { f[pos+i] = a[i]; }
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
bfi requires sm_20 or higher.
Examples
bfi.b32 d,a,b,start,len;
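The insertion loop can be modeled in Python (illustrative only; the function name and `bits` parameter are invented here, and operands follow the instruction order a, b, c, d):

```python
def bfi(a, b, c, d, bits=32):
    """Model of bfi: insert the low d bits of a into b starting at bit c."""
    msb = bits - 1
    pos = c & 0xFF
    ln = d & 0xFF
    f = b & ((1 << bits) - 1)
    i = 0
    while i < ln and pos + i <= msb:
        f &= ~(1 << (pos + i))              # clear the destination bit
        f |= ((a >> i) & 1) << (pos + i)    # copy bit i of a into position
        i += 1
    return f
```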
9.7.1.21. Integer Arithmetic Instructions: szext
szext
Sign-extend or Zero-extend.
Syntax
szext.mode.type d, a, b;

.mode = { .clamp, .wrap };
.type = { .u32, .s32 };
Description
Sign-extends or zero-extends an N-bit value from operand a, where N is specified in operand b. The resulting value is stored in the destination operand d.
For the .s32 instruction type, the value in a is treated as an N-bit signed value and the most significant bit of this N-bit value is replicated up to bit 31. For the .u32 instruction type, the value in a is treated as an N-bit unsigned number and is zero-extended to 32 bits. Operand b is an unsigned 32-bit value.
If the value of N is 0, then the result of szext is 0. If the value of N is 32 or higher, then the result of szext depends upon the value of the .mode qualifier as follows:
 If .mode is .clamp, then the result is the same as the source operand a.
 If .mode is .wrap, then the result is computed using the wrapped value of N.
Semantics
b1 = b & 0x1f;
too_large = (b >= 32 && .mode == .clamp) ? true : false;
mask = too_large ? 0 : (~0) << b1;
sign_pos = (b1 - 1) & 0x1f;

if (b1 == 0 || too_large || .type != .s32) {
    sign_bit = false;
} else {
    sign_bit = (a >> sign_pos) & 1;
}
d = (a & ~mask) | (sign_bit ? mask : 0);
PTX ISA Notes
Introduced in PTX ISA version 7.6.
Target ISA Notes
szext requires sm_70 or higher.
Examples
szext.clamp.s32 rd, ra, rb; szext.wrap.u32 rd, 0xffffffff, 0; // Result is 0.
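A Python model of these semantics (illustrative sketch; the function name and keyword parameters are invented here, with `signed=True` standing in for the .s32 type):

```python
def szext(a, b, mode="clamp", signed=True):
    """Model of szext.{clamp,wrap}.{s32,u32} on 32-bit values."""
    b1 = b & 0x1F
    too_large = b >= 32 and mode == "clamp"
    mask = 0 if too_large else (0xFFFFFFFF << b1) & 0xFFFFFFFF
    sign_pos = (b1 - 1) & 0x1F
    if b1 == 0 or too_large or not signed:
        sign_bit = 0
    else:
        sign_bit = (a >> sign_pos) & 1
    return (a & (~mask & 0xFFFFFFFF)) | (mask if sign_bit else 0)
```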
9.7.1.22. Integer Arithmetic Instructions: bmsk
bmsk
Bit Field Mask.
Syntax
bmsk.mode.b32 d, a, b; .mode = { .clamp, .wrap };
Description
Generates a 32bit mask starting from the bit position specified in operand a, and of the width specified in operand b. The generated bitmask is stored in the destination operand d.
The resulting bitmask is 0 in the following cases:
 When the value of a is 32 or higher and .mode is .clamp.
 When either the specified value of b or the wrapped value of b (when .mode is specified as .wrap) is 0.
Semantics
a1 = a & 0x1f;
mask0 = (~0) << a1;
b1 = b & 0x1f;
sum = a1 + b1;
mask1 = (~0) << sum;

sumoverflow = sum >= 32 ? true : false;
bitpositionoverflow = false;
bitwidthoverflow = false;

if (.mode == .clamp) {
    if (a >= 32) {
        bitpositionoverflow = true;
        mask0 = 0;
    }
    if (b >= 32) {
        bitwidthoverflow = true;
    }
}

if (sumoverflow || bitpositionoverflow || bitwidthoverflow) {
    mask1 = 0;
} else if (b1 == 0) {
    mask1 = ~0;
}

d = mask0 & ~mask1;
Notes
The bitmask width specified by operand b is limited to range 0..32 in .clamp mode and to range 0..31 in .wrap mode.
PTX ISA Notes
Introduced in PTX ISA version 7.6.
Target ISA Notes
bmsk requires sm_70 or higher.
Examples
bmsk.clamp.b32 rd, ra, rb; bmsk.wrap.b32 rd, 1, 2; // Creates a bitmask of 0x00000006.
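The mask construction above, as a Python sketch (illustrative only; the function name is invented here):

```python
def bmsk(a, b, mode="clamp"):
    """Model of bmsk: 32-bit mask of width b starting at bit position a."""
    a1 = a & 0x1F
    b1 = b & 0x1F
    mask0 = (0xFFFFFFFF << a1) & 0xFFFFFFFF
    s = a1 + b1
    mask1 = (0xFFFFFFFF << s) & 0xFFFFFFFF
    pos_overflow = width_overflow = False
    if mode == "clamp":
        if a >= 32:
            pos_overflow = True
            mask0 = 0
        if b >= 32:
            width_overflow = True
    if s >= 32 or pos_overflow or width_overflow:
        mask1 = 0
    elif b1 == 0:
        mask1 = 0xFFFFFFFF
    return mask0 & (~mask1 & 0xFFFFFFFF)
```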
9.7.1.23. Integer Arithmetic Instructions: dp4a
dp4a
Four-way byte dot product-accumulate.
Syntax
dp4a.atype.btype d, a, b, c; .atype = .btype = { .u32, .s32 };
Description
Four-way byte dot product which is accumulated in 32-bit result.
Operands a and b are 32-bit inputs which hold 4 byte inputs in packed form for dot product.
Operand c has type .u32 if both .atype and .btype are .u32, else operand c has type .s32.
Semantics
d = c;

// Extract 4 bytes from a 32-bit input and sign or zero extend
// based on input type.
Va = extractAndSignOrZeroExt_4(a, .atype);
Vb = extractAndSignOrZeroExt_4(b, .btype);

for (i = 0; i < 4; ++i) {
    d += Va[i] * Vb[i];
}
PTX ISA Notes
Introduced in PTX ISA version 5.0.
Target ISA Notes
Requires sm_61 or higher.
Examples
dp4a.u32.u32 d0, a0, b0, c0; dp4a.u32.s32 d1, a1, b1, c1;
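A Python model of the semantics (illustrative sketch; the function name and keyword parameters are invented here, with the `*_signed` flags standing in for .s32 operand types):

```python
def dp4a(a, b, c, a_signed=False, b_signed=False):
    """Model of dp4a: four-way byte dot product accumulated into 32 bits."""
    def bytes4(v, signed):
        vals = [(v >> (8 * i)) & 0xFF for i in range(4)]
        if signed:
            vals = [x - 0x100 if x & 0x80 else x for x in vals]
        return vals
    d = c
    for va, vb in zip(bytes4(a, a_signed), bytes4(b, b_signed)):
        d += va * vb
    return d & 0xFFFFFFFF          # 32-bit accumulator
```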
9.7.1.24. Integer Arithmetic Instructions: dp2a
dp2a
Two-way dot product-accumulate.
Syntax
dp2a.mode.atype.btype d, a, b, c; .atype = .btype = { .u32, .s32 }; .mode = { .lo, .hi };
Description
Two-way 16-bit to 8-bit dot product which is accumulated in 32-bit result.
Operands a and b are 32-bit inputs. Operand a holds two 16-bit inputs in packed form, and operand b holds 4 byte inputs in packed form for dot product.
Depending on the .mode specified, either lower half or upper half of operand b will be used for dot product.
Operand c has type .u32 if both .atype and .btype are .u32, else operand c has type .s32.
Semantics
d = c;

// Extract two 16-bit values from a 32-bit input and sign or zero extend
// based on input type.
Va = extractAndSignOrZeroExt_2(a, .atype);

// Extract four 8-bit values from a 32-bit input and sign or zero extend
// based on input type.
Vb = extractAndSignOrZeroExt_4(b, .btype);

b_select = (.mode == .lo) ? 0 : 2;

for (i = 0; i < 2; ++i) {
    d += Va[i] * Vb[b_select + i];
}
PTX ISA Notes
Introduced in PTX ISA version 5.0.
Target ISA Notes
Requires sm_61 or higher.
Examples
dp2a.lo.u32.u32 d0, a0, b0, c0; dp2a.hi.u32.s32 d1, a1, b1, c1;
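A Python model of the semantics (illustrative sketch; the function name and keyword parameters are invented here):

```python
def dp2a(a, b, c, mode="lo", a_signed=False, b_signed=False):
    """Model of dp2a: two-way 16-bit x 8-bit dot product accumulated into 32 bits."""
    va = [(a >> (16 * i)) & 0xFFFF for i in range(2)]
    if a_signed:
        va = [x - 0x10000 if x & 0x8000 else x for x in va]
    vb = [(b >> (8 * i)) & 0xFF for i in range(4)]
    if b_signed:
        vb = [x - 0x100 if x & 0x80 else x for x in vb]
    b_select = 0 if mode == "lo" else 2    # pick lower or upper byte pair of b
    d = c
    for i in range(2):
        d += va[i] * vb[b_select + i]
    return d & 0xFFFFFFFF
```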
9.7.2. Extended-Precision Integer Arithmetic Instructions
Instructions add.cc, addc, sub.cc, subc, mad.cc and madc reference an implicitly specified condition code register (CC) having a single carry flag bit (CC.CF) holding carry-in/carry-out or borrow-in/borrow-out. These instructions support extended-precision integer addition, subtraction, and multiplication. No other instructions access the condition code, and there is no support for setting, clearing, or testing the condition code. The condition code register is not preserved across calls and is mainly intended for use in straight-line code sequences for computing extended-precision integer addition, subtraction, and multiplication.
 add.cc, addc
 sub.cc, subc
 mad.cc, madc
9.7.2.1. Extended-Precision Arithmetic Instructions: add.cc
add.cc
Add two values with carry-out.
Syntax
add.cc.type d, a, b; .type = { .u32, .s32, .u64, .s64 };
Description
Performs integer addition and writes the carry-out value into the condition code register.
Semantics
d = a + b;
carry-out written to CC.CF
Notes
No integer rounding modifiers.
No saturation.
Behavior is the same for unsigned and signed integers.
PTX ISA Notes
32-bit add.cc introduced in PTX ISA version 1.2.
64-bit add.cc introduced in PTX ISA version 4.3.
Target ISA Notes
32-bit add.cc is supported on all target architectures.
64-bit add.cc requires sm_20 or higher.
Examples
@p  add.cc.u32   x1,y1,z1;   // extended-precision addition of
@p  addc.cc.u32  x2,y2,z2;   // two 128-bit values
@p  addc.cc.u32  x3,y3,z3;
@p  addc.u32     x4,y4,z4;
9.7.2.2. Extended-Precision Arithmetic Instructions: addc
addc
Add two values with carry-in and optional carry-out.
Syntax
addc{.cc}.type d, a, b; .type = { .u32, .s32, .u64, .s64 };
Description
Performs integer addition with carry-in and optionally writes the carry-out value into the condition code register.
Semantics
d = a + b + CC.CF;
if .cc specified, carry-out written to CC.CF
Notes
No integer rounding modifiers.
No saturation.
Behavior is the same for unsigned and signed integers.
PTX ISA Notes
32-bit addc introduced in PTX ISA version 1.2.
64-bit addc introduced in PTX ISA version 4.3.
Target ISA Notes
32-bit addc is supported on all target architectures.
64-bit addc requires sm_20 or higher.
Examples
@p  add.cc.u32   x1,y1,z1;   // extended-precision addition of
@p  addc.cc.u32  x2,y2,z2;   // two 128-bit values
@p  addc.cc.u32  x3,y3,z3;
@p  addc.u32     x4,y4,z4;
9.7.2.3. Extended-Precision Arithmetic Instructions: sub.cc
sub.cc
Subtract one value from another, with borrow-out.
Syntax
sub.cc.type d, a, b; .type = { .u32, .s32, .u64, .s64 };
Description
Performs integer subtraction and writes the borrow-out value into the condition code register.
Semantics
d = a - b;
borrow-out written to CC.CF
Notes
No integer rounding modifiers.
No saturation.
Behavior is the same for unsigned and signed integers.
PTX ISA Notes
32-bit sub.cc introduced in PTX ISA version 1.2.
64-bit sub.cc introduced in PTX ISA version 4.3.
Target ISA Notes
32-bit sub.cc is supported on all target architectures.
64-bit sub.cc requires sm_20 or higher.
Examples
@p  sub.cc.u32   x1,y1,z1;   // extended-precision subtraction
@p  subc.cc.u32  x2,y2,z2;   // of two 128-bit values
@p  subc.cc.u32  x3,y3,z3;
@p  subc.u32     x4,y4,z4;
9.7.2.4. Extended-Precision Arithmetic Instructions: subc
subc
Subtract one value from another, with borrow-in and optional borrow-out.
Syntax
subc{.cc}.type d, a, b; .type = { .u32, .s32, .u64, .s64 };
Description
Performs integer subtraction with borrow-in and optionally writes the borrow-out value into the condition code register.
Semantics
d = a - (b + CC.CF);
if .cc specified, borrow-out written to CC.CF
Notes
No integer rounding modifiers.
No saturation.
Behavior is the same for unsigned and signed integers.
PTX ISA Notes
32-bit subc introduced in PTX ISA version 1.2.
64-bit subc introduced in PTX ISA version 4.3.
Target ISA Notes
32-bit subc is supported on all target architectures.
64-bit subc requires sm_20 or higher.
Examples
@p  sub.cc.u32   x1,y1,z1;   // extended-precision subtraction
@p  subc.cc.u32  x2,y2,z2;   // of two 128-bit values
@p  subc.cc.u32  x3,y3,z3;
@p  subc.u32     x4,y4,z4;
9.7.2.5. Extended-Precision Arithmetic Instructions: mad.cc
mad.cc
Multiply two values, extract high or low half of result, and add a third value with carry-out.
Syntax
mad{.hi,.lo}.cc.type d, a, b, c; .type = { .u32, .s32, .u64, .s64 };
Description
Multiplies two values, extracts either the high or low part of the result, and adds a third value. Writes the result to the destination register and the carry-out from the addition into the condition code register.
Semantics
t = a * b;
d = t<63..32> + c;    // for .hi variant
d = t<31..0> + c;     // for .lo variant
carry-out from addition is written to CC.CF
Notes
Generally used in combination with madc and addc to implement extended-precision multi-word multiplication. See madc for an example.
PTX ISA Notes
32-bit mad.cc introduced in PTX ISA version 3.0.
64-bit mad.cc introduced in PTX ISA version 4.3.
Target ISA Notes
Requires target sm_20 or higher.
Examples
@p mad.lo.cc.u32 d,a,b,c; mad.lo.cc.u32 r,p,q,r;
9.7.2.6. Extended-Precision Arithmetic Instructions: madc
madc
Multiply two values, extract high or low half of result, and add a third value with carry-in and optional carry-out.
Syntax
madc{.hi,.lo}{.cc}.type d, a, b, c; .type = { .u32, .s32, .u64, .s64 };
Description
Multiplies two values, extracts either the high or low part of the result, and adds a third value along with carry-in. Writes the result to the destination register and optionally writes the carry-out from the addition into the condition code register.
Semantics
t = a * b;
d = t<63..32> + c + CC.CF;    // for .hi variant
d = t<31..0> + c + CC.CF;     // for .lo variant
if .cc specified, carry-out from addition is written to CC.CF
Notes
Generally used in combination with mad.cc and addc to implement extended-precision multi-word multiplication. See example below.
PTX ISA Notes
32-bit madc introduced in PTX ISA version 3.0.
64-bit madc introduced in PTX ISA version 4.3.
Target ISA Notes
Requires target sm_20 or higher.
Examples
// extended-precision multiply: [r3,r2,r1,r0] = [r5,r4] * [r7,r6]
mul.lo.u32     r0,r4,r6;      // r0 =(r4*r6).[31:0], no carry-out
mul.hi.u32     r1,r4,r6;      // r1 =(r4*r6).[63:32], no carry-out
mad.lo.cc.u32  r1,r5,r6,r1;   // r1+=(r5*r6).[31:0], may carry-out
madc.hi.u32    r2,r5,r6,0;    // r2 =(r5*r6).[63:32]+carry-in,
                              // no carry-out
mad.lo.cc.u32  r1,r4,r7,r1;   // r1+=(r4*r7).[31:0], may carry-out
madc.hi.cc.u32 r2,r4,r7,r2;   // r2+=(r4*r7).[63:32]+carry-in,
                              // may carry-out
addc.u32       r3,0,0;        // r3 = carry-in, no carry-out
mad.lo.cc.u32  r2,r5,r7,r2;   // r2+=(r5*r7).[31:0], may carry-out
madc.hi.u32    r3,r5,r7,r3;   // r3+=(r5*r7).[63:32]+carry-in
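The carry chain in this instruction schedule can be checked with a small Python model (invented helper names; each line in `mulwide` mirrors one instruction of the sequence):

```python
M = 0xFFFFFFFF

def add32(x, cf=0):
    """Return (32-bit word, carry-out) of a raw sum, modeling CC.CF."""
    t = x + cf
    return t & M, t >> 32

def mulwide(r4, r5, r6, r7):
    """[r3,r2,r1,r0] = [r5,r4] * [r7,r6] using the schedule above."""
    r0 = (r4 * r6) & M                             # mul.lo.u32
    r1 = (r4 * r6) >> 32                           # mul.hi.u32
    r1, cf = add32(((r5 * r6) & M) + r1)           # mad.lo.cc.u32
    r2, _  = add32(((r5 * r6) >> 32) + 0, cf)      # madc.hi.u32 (no carry-out)
    r1, cf = add32(((r4 * r7) & M) + r1)           # mad.lo.cc.u32
    r2, cf = add32(((r4 * r7) >> 32) + r2, cf)     # madc.hi.cc.u32
    r3, _  = add32(0 + 0, cf)                      # addc.u32
    r2, cf = add32(((r5 * r7) & M) + r2)           # mad.lo.cc.u32
    r3, _  = add32(((r5 * r7) >> 32) + r3, cf)     # madc.hi.u32
    return r0, r1, r2, r3
```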
9.7.3. Floating-Point Instructions
Floating-point instructions operate on .f32 and .f64 register operands and constant immediate values. The floating-point instructions are:
 testp
 copysign
 add
 sub
 mul
 fma
 mad
 div
 abs
 neg
 min
 max
 rcp
 sqrt
 rsqrt
 sin
 cos
 lg2
 ex2
 tanh
Instructions that support rounding modifiers are IEEE 754 compliant. Double-precision instructions support subnormal inputs and results. Single-precision instructions support subnormal inputs and results by default for sm_20 and subsequent targets, and flush subnormal inputs and results to sign-preserving zero for sm_1x targets. The optional .ftz modifier on single-precision instructions provides backward compatibility with sm_1x targets by flushing subnormal inputs and results to sign-preserving zero regardless of the target architecture.
Single-precision add, sub, mul, and mad support saturation of results to the range [0.0, 1.0], with NaNs being flushed to positive zero. NaN payloads are supported for double-precision instructions (except for rcp.approx.ftz.f64 and rsqrt.approx.ftz.f64, which map input NaNs to a canonical NaN). Single-precision instructions return an unspecified NaN. Note that future implementations may support NaN payloads for single-precision instructions, so PTX programs should not rely on the specific single-precision NaNs being generated.
Table 26 summarizes floatingpoint instructions in PTX.
| Instruction                   | .rn | .rz | .rm | .rp | .ftz | .sat | Notes |
|-------------------------------|-----|-----|-----|-----|------|------|-------|
| {add,sub,mul}.rnd.f32         | x   | x   | x   | x   | x    | x    | If no rounding modifier is specified, default is .rn and instructions may be folded into a multiply-add. |
| {add,sub,mul}.rnd.f64         | x   | x   | x   | x   | n/a  | n/a  | If no rounding modifier is specified, default is .rn and instructions may be folded into a multiply-add. |
| mad.f32                       | n/a | n/a | n/a | n/a | x    | x    | .target sm_1x. No rounding modifier. |
| {mad,fma}.rnd.f32             | x   | x   | x   | x   | x    | x    | .target sm_20 or higher. mad.f32 and fma.f32 are the same. |
| {mad,fma}.rnd.f64             | x   | x   | x   | x   | n/a  | n/a  | mad.f64 and fma.f64 are the same. |
| div.full.f32                  | n/a | n/a | n/a | n/a | x    | n/a  | No rounding modifier. |
| {div,rcp,sqrt}.approx.f32     | n/a | n/a | n/a | n/a | x    | n/a  | n/a |
| rcp.approx.ftz.f64            | n/a | n/a | n/a | n/a | x    | n/a  | .target sm_20 or higher |
| {div,rcp,sqrt}.rnd.f32        | x   | x   | x   | x   | x    | n/a  | .target sm_20 or higher |
| {div,rcp,sqrt}.rnd.f64        | x   | x   | x   | x   | n/a  | n/a  | .target sm_20 or higher |
| {abs,neg,min,max}.f32         | n/a | n/a | n/a | n/a | x    | n/a  | |
| {abs,neg,min,max}.f64         | n/a | n/a | n/a | n/a | n/a  | n/a  | |
| rsqrt.approx.f32              | n/a | n/a | n/a | n/a | x    | n/a  | |
| rsqrt.approx.f64              | n/a | n/a | n/a | n/a | n/a  | n/a  | |
| rsqrt.approx.ftz.f64          | n/a | n/a | n/a | n/a | x    | n/a  | .target sm_20 or higher |
| {sin,cos,lg2,ex2}.approx.f32  | n/a | n/a | n/a | n/a | x    | n/a  | |
| tanh.approx.f32               | n/a | n/a | n/a | n/a | n/a  | n/a  | .target sm_75 or higher |
9.7.3.1. Floating Point Instructions: testp
testp
Test floatingpoint property.
Syntax
testp.op.type p, a;  // result is .pred

.op   = { .finite, .infinite, .number, .notanumber, .normal, .subnormal };
.type = { .f32, .f64 };
Description
testp tests common properties of floatingpoint numbers and returns a predicate value of 1 if True and 0 if False.
 testp.finite
 True if the input is not infinite or NaN
 testp.infinite
 True if the input is positive or negative infinity
 testp.number
 True if the input is not NaN
 testp.notanumber
 True if the input is NaN
 testp.normal
 True if the input is a normal number (not NaN, not infinity)
 testp.subnormal
 True if the input is a subnormal number (not NaN, not infinity)
As a special case, positive and negative zero are considered normal numbers.
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
Requires sm_20 or higher.
Examples
testp.notanumber.f32 isnan, f0; testp.infinite.f64 p, X;
9.7.3.2. Floating Point Instructions: copysign
copysign
Copy sign of one input to another.
Syntax
copysign.type d, a, b; .type = { .f32, .f64 };
Description
Copy sign bit of a into value of b, and return the result as d.
PTX ISA Notes
Introduced in PTX ISA version 2.0.
Target ISA Notes
Requires sm_20 or higher.
Examples
copysign.f32 x, y, z; copysign.f64 A, B, C;
9.7.3.3. Floating Point Instructions: add
add
Add two values.
Syntax
add{.rnd}{.ftz}{.sat}.f32  d, a, b;
add{.rnd}.f64              d, a, b;

.rnd = { .rn, .rz, .rm, .rp };
Description
Performs addition and writes the resulting value into a destination register.
Semantics
d = a + b;
Notes
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
The default value of rounding modifier is .rn. Note that an add instruction with an explicit rounding modifier is treated conservatively by the code optimizer. An add instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/add sequences with no rounding modifiers may be optimized to use fused-multiply-add instructions on the target device.
 sm_20+

By default, subnormal numbers are supported.
add.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

add.f64 supports subnormal numbers.
add.f32 flushes subnormal inputs and results to sign-preserving zero.
Saturation modifier:
add.sat.f32 clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
add.f32 supported on all target architectures.
add.f64 requires sm_13 or higher.
Rounding modifiers have the following target requirements:
 .rn, .rz
 available for all targets
 .rm, .rp

for add.f64, requires sm_13 or higher.
for add.f32, requires sm_20 or higher.
Examples
@p add.rz.ftz.f32 f1,f2,f3;
9.7.3.4. Floating Point Instructions: sub
sub
Subtract one value from another.
Syntax
sub{.rnd}{.ftz}{.sat}.f32  d, a, b;
sub{.rnd}.f64              d, a, b;

.rnd = { .rn, .rz, .rm, .rp };
Description
Performs subtraction and writes the resulting value into a destination register.
Semantics
d = a  b;
Notes
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
The default value of rounding modifier is .rn. Note that a sub instruction with an explicit rounding modifier is treated conservatively by the code optimizer. A sub instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/sub sequences with no rounding modifiers may be optimized to use fused-multiply-add instructions on the target device.
 sm_20+

By default, subnormal numbers are supported.
sub.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

sub.f64 supports subnormal numbers.
sub.f32 flushes subnormal inputs and results to sign-preserving zero.
Saturation modifier:
sub.sat.f32 clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
sub.f32 supported on all target architectures.
sub.f64 requires sm_13 or higher.
 .rn, .rz
 available for all targets
 .rm, .rp

for sub.f64, requires sm_13 or higher.
for sub.f32, requires sm_20 or higher.
Examples
sub.f32 c,a,b; sub.rn.ftz.f32 f1,f2,f3;
9.7.3.5. Floating Point Instructions: mul
mul
Multiply two values.
Syntax
mul{.rnd}{.ftz}{.sat}.f32  d, a, b;
mul{.rnd}.f64              d, a, b;

.rnd = { .rn, .rz, .rm, .rp };
Description
Compute the product of two values.
Semantics
d = a * b;
Notes
For floatingpoint multiplication, all operands must be the same size.
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
The default value of rounding modifier is .rn. Note that a mul instruction with an explicit rounding modifier is treated conservatively by the code optimizer. A mul instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/add and mul/sub sequences with no rounding modifiers may be optimized to use fused-multiply-add instructions on the target device.
 sm_20+

By default, subnormal numbers are supported.
mul.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

mul.f64 supports subnormal numbers.
mul.f32 flushes subnormal inputs and results to sign-preserving zero.
Saturation modifier:
mul.sat.f32 clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
mul.f32 supported on all target architectures.
mul.f64 requires sm_13 or higher.
 .rn, .rz
 available for all targets
 .rm, .rp

for mul.f64, requires sm_13 or higher.
for mul.f32, requires sm_20 or higher.
Examples
mul.ftz.f32 circumf,radius,pi; // a single-precision multiply
9.7.3.6. Floating Point Instructions: fma
fma
Fused multiply-add.
Syntax
fma.rnd{.ftz}{.sat}.f32  d, a, b, c;
fma.rnd.f64              d, a, b, c;

.rnd = { .rn, .rz, .rm, .rp };
Description
Performs a fused multiply-add with no loss of precision in the intermediate product and addition.
Semantics
d = a*b + c;
Notes
fma.f32 computes the product of a and b to infinite precision and then adds c to this product, again in infinite precision. The resulting value is then rounded to single precision using the rounding mode specified by .rnd.
fma.f64 computes the product of a and b to infinite precision and then adds c to this product, again in infinite precision. The resulting value is then rounded to double precision using the rounding mode specified by .rnd.
fma.f64 is the same as mad.f64.
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
 sm_20+

By default, subnormal numbers are supported.
fma.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

fma.f64 supports subnormal numbers.
fma.f32 is unimplemented for sm_1x targets.
Saturation:
fma.sat.f32 clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
fma.f64 introduced in PTX ISA version 1.4.
fma.f32 introduced in PTX ISA version 2.0.
Target ISA Notes
fma.f32 requires sm_20 or higher.
fma.f64 requires sm_13 or higher.
Examples
fma.rn.ftz.f32 w,x,y,z; @p fma.rn.f64 d,a,b,c;
9.7.3.7. Floating Point Instructions: mad
mad
Multiply two values and add a third value.
Syntax
mad{.ftz}{.sat}.f32      d, a, b, c;    // .target sm_1x
mad.rnd{.ftz}{.sat}.f32  d, a, b, c;    // .target sm_20
mad.rnd.f64              d, a, b, c;    // .target sm_13 and higher

.rnd = { .rn, .rz, .rm, .rp };
Description
Multiplies two values and adds a third, and then writes the resulting value into a destination register.
Semantics
d = a*b + c;
Notes
For .target sm_20 and higher:
 mad.f32 computes the product of a and b to infinite precision and then adds c to this product, again in infinite precision. The resulting value is then rounded to single precision using the rounding mode specified by .rnd.
 mad.f64 computes the product of a and b to infinite precision and then adds c to this product, again in infinite precision. The resulting value is then rounded to double precision using the rounding mode specified by .rnd.
 mad.{f32,f64} is the same as fma.{f32,f64}.
For .target sm_1x:
 mad.f32 computes the product of a and b at double precision, and then the mantissa is truncated to 23 bits, but the exponent is preserved. Note that this is different from computing the product with mul, where the mantissa can be rounded and the exponent will be clamped. The exception for mad.f32 is when c = +/-0.0; in that case mad.f32 is identical to the result computed using separate mul and add instructions. When JIT-compiled for SM 2.0 devices, mad.f32 is implemented as a fused multiply-add (i.e., fma.rn.ftz.f32). In this case, mad.f32 can produce slightly different numeric results and backward compatibility is not guaranteed.
 mad.f64 computes the product of a and b to infinite precision and then adds c to this product, again in infinite precision. The resulting value is then rounded to double precision using the rounding mode specified by .rnd. Unlike mad.f32, the treatment of subnormal inputs and output follows IEEE 754 standard.
 mad.f64 is the same as fma.f64.
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
 sm_20+

By default, subnormal numbers are supported.
mad.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

mad.f64 supports subnormal numbers.
mad.f32 flushes subnormal inputs and results to sign-preserving zero.
Saturation modifier:
mad.sat.f32 clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
In PTX ISA versions 1.4 and later, a rounding modifier is required for mad.f64.
Legacy mad.f64 instructions having no rounding modifier will map to mad.rn.f64.
In PTX ISA versions 2.0 and later, a rounding modifier is required for mad.f32 for sm_20 and higher targets.
Errata
mad.f32 requires a rounding modifier for sm_20 and higher targets. However for PTX ISA version 3.0 and earlier, ptxas does not enforce this requirement and mad.f32 silently defaults to mad.rn.f32. For PTX ISA version 3.1, ptxas generates a warning and defaults to mad.rn.f32, and in subsequent releases ptxas will enforce the requirement for PTX ISA version 3.2 and later.
Target ISA Notes
mad.f32 supported on all target architectures.
mad.f64 requires sm_13 or higher.
 .rn,.rz,.rm,.rp for mad.f64, requires sm_13 or higher.
 .rn,.rz,.rm,.rp for mad.f32, requires sm_20 or higher.
Examples
@p mad.f32 d,a,b,c;
9.7.3.8. Floating Point Instructions: div
div
Divide one value by another.
Syntax
div.approx{.ftz}.f32  d, a, b;  // fast, approximate divide
div.full{.ftz}.f32    d, a, b;  // full-range approximate divide
div.rnd{.ftz}.f32     d, a, b;  // IEEE 754 compliant rounding
div.rnd.f64           d, a, b;  // IEEE 754 compliant rounding

.rnd = { .rn, .rz, .rm, .rp };
Description
Divides a by b, stores result in d.
Semantics
d = a / b;
Notes
Fast, approximate single-precision divides:
 div.approx.f32 implements a fast approximation to divide, computed as d = a * (1/b). For |b| in [2^{-126}, 2^{126}], the maximum ulp error is 2. For 2^{126} < |b| < 2^{128}, if a is infinity, div.approx.f32 returns NaN, otherwise it returns 0.
 div.full.f32 implements a relatively fast, full-range approximation that scales operands to achieve better accuracy, but is not fully IEEE 754 compliant and does not support rounding modifiers. The maximum ulp error is 2 across the full range of inputs.
 Subnormal inputs and results are flushed to sign-preserving zero. Fast, approximate division by zero creates a value of infinity (with same sign as a).
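The approximate-divide notes above can be modeled in ordinary floating point. The sketch below (Python, double precision; the helper name is illustrative, not PTX) follows the documented formulation d = a * (1/b), the division-by-zero rule, and the flush of 1/b to zero for large |b| — it does not reproduce the 2-ulp single-precision error bound.

```python
import math

def div_approx_model(a, b):
    # Model of the div.approx.f32 formulation d = a * (1/b), computed in
    # Python doubles. Illustrates the documented special cases only, not
    # the 2-ulp single-precision error bound.
    if math.isnan(a) or math.isnan(b):
        return math.nan
    if b == 0.0:
        # Approximate division by zero yields infinity with the sign of a.
        return math.copysign(math.inf, a)
    r = 1.0 / b
    if abs(b) > 2.0**126:
        # 1/b underflows single precision and is flushed to zero, so the
        # result is NaN for infinite a and a (signed) zero otherwise.
        r = math.copysign(0.0, r)
    return a * r
```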
Divide with IEEE 754 compliant rounding:
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
 sm_20+
 By default, subnormal numbers are supported.
div.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x
 div.f64 supports subnormal numbers.
div.f32 flushes subnormal inputs and results to sign-preserving zero.
PTX ISA Notes
div.f32 and div.f64 introduced in PTX ISA version 1.0.
Explicit modifiers .approx, .full, .ftz, and rounding introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, one of .approx, .full, or .rnd is required.
For PTX ISA versions 1.0 through 1.3, div.f32 defaults to div.approx.ftz.f32, and div.f64 defaults to div.rn.f64.
Target ISA Notes
div.approx.f32 and div.full.f32 supported on all target architectures.
div.rnd.f32 requires sm_20 or higher.
div.rn.f64 requires sm_13 or higher, or .target map_f64_to_f32.
div.{rz,rm,rp}.f64 requires sm_20 or higher.
Examples
div.approx.ftz.f32  diam,circum,3.14159;
div.full.ftz.f32    x, y, z;
div.rn.f64          xd, yd, zd;
9.7.3.9. Floating Point Instructions: abs
abs
Absolute value.
Syntax
abs{.ftz}.f32  d, a;
abs.f64        d, a;
Description
Take the absolute value of a and store the result in d.
Semantics
d = |a|;
Notes
 sm_20+

By default, subnormal numbers are supported.
abs.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

abs.f64 supports subnormal numbers.
abs.f32 flushes subnormal inputs and results to sign-preserving zero.
For abs.f32, NaN input yields unspecified NaN. For abs.f64, NaN input is passed through unchanged. Future implementations may comply with the IEEE 754 standard by preserving payload and modifying only the sign bit.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
abs.f32 supported on all target architectures.
abs.f64 requires sm_13 or higher.
Examples
abs.ftz.f32 x,f0;
9.7.3.10. Floating Point Instructions: neg
neg
Arithmetic negate.
Syntax
neg{.ftz}.f32  d, a;
neg.f64        d, a;
Description
Negate the sign of a and store the result in d.
Semantics
d = -a;
Notes
 sm_20+

By default, subnormal numbers are supported.
neg.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

neg.f64 supports subnormal numbers.
neg.f32 flushes subnormal inputs and results to sign-preserving zero.
NaN inputs yield an unspecified NaN. Future implementations may comply with the IEEE 754 standard by preserving payload and modifying only the sign bit.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
Target ISA Notes
neg.f32 supported on all target architectures.
neg.f64 requires sm_13 or higher.
Examples
neg.ftz.f32 x,f0;
9.7.3.11. Floating Point Instructions: min
min
Find the minimum of two values.
Syntax
min{.ftz}{.NaN}{.xorsign.abs}.f32  d, a, b;
min.f64                            d, a, b;
Description
Store the minimum of a and b in d.
If .NaN modifier is specified, then the result is canonical NaN if either of the inputs is NaN.
If .abs modifier is specified, the magnitude of destination operand d is the minimum of absolute values of both the input arguments.
If .xorsign modifier is specified, the sign bit of destination d is equal to the XOR of the sign bits of both the inputs.
Modifiers .abs and .xorsign must be specified together and .xorsign considers the sign bit of both inputs before applying .abs operation.
If the result of min is NaN then the .xorsign and .abs modifiers will be ignored.
Semantics
if (.xorsign) {
    xorsign = getSignBit(a) ^ getSignBit(b);
    if (.abs) {
        a = |a|;
        b = |b|;
    }
}
if (isNaN(a) && isNaN(b))
    d = NaN;
else if (.NaN && (isNaN(a) || isNaN(b)))
    d = NaN;
else if (isNaN(a))
    d = b;
else if (isNaN(b))
    d = a;
else
    d = (a < b) ? a : b;
if (.xorsign && !isNaN(d)) {
    setSignBit(d, xorsign);
}
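The semantics above can be exercised directly in software. Below is a minimal Python model of this pseudocode; the function name and the `NaN`/`xorsign_abs` keyword flags are illustrative stand-ins for the .NaN and .xorsign.abs modifiers, not PTX syntax.

```python
import math

def min_ptx(a, b, NaN=False, xorsign_abs=False):
    # Model of the min.f32 pseudocode: 'NaN' stands for the .NaN modifier,
    # 'xorsign_abs' for the combined .xorsign.abs modifiers.
    xorsign = 0
    if xorsign_abs:
        # .xorsign reads the sign bits before .abs strips them.
        xorsign = (math.copysign(1.0, a) < 0) ^ (math.copysign(1.0, b) < 0)
        a, b = abs(a), abs(b)
    if math.isnan(a) and math.isnan(b):
        d = math.nan
    elif NaN and (math.isnan(a) or math.isnan(b)):
        d = math.nan
    elif math.isnan(a):
        d = b
    elif math.isnan(b):
        d = a
    else:
        d = a if a < b else b
    if xorsign_abs and not math.isnan(d):
        d = math.copysign(d, -1.0 if xorsign else 1.0)
    return d
```

For example, `min_ptx(-3.0, 2.0, xorsign_abs=True)` compares magnitudes (3.0 vs. 2.0) and applies the XORed sign, giving -2.0.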
Notes
 sm_20+

By default, subnormal numbers are supported.
min.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

min.f64 supports subnormal numbers.
min.f32 flushes subnormal inputs and results to sign-preserving zero.
If values of both inputs are 0.0, then +0.0 > -0.0.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
min.NaN introduced in PTX ISA version 7.0.
min.xorsign.abs introduced in PTX ISA version 7.2.
Target ISA Notes
min.f32 supported on all target architectures.
min.f64 requires sm_13 or higher.
min.NaN requires sm_80 or higher.
min.xorsign.abs requires sm_86 or higher.
Examples
@p  min.ftz.f32    z,z,x;
    min.f64        a,b,c;
// fp32 min with .NaN
    min.NaN.f32    f0,f1,f2;
// fp32 min with .xorsign.abs
    min.xorsign.abs.f32 Rd, Ra, Rb;
9.7.3.12. Floating Point Instructions: max
max
Find the maximum of two values.
Syntax
max{.ftz}{.NaN}{.xorsign.abs}.f32  d, a, b;
max.f64                            d, a, b;
Description
Store the maximum of a and b in d.
If .NaN modifier is specified, the result is canonical NaN if either of the inputs is NaN.
If .abs modifier is specified, the magnitude of destination operand d is the maximum of absolute values of both the input arguments.
If .xorsign modifier is specified, the sign bit of destination d is equal to the XOR of the sign bits of both the inputs.
Modifiers .abs and .xorsign must be specified together and .xorsign considers the sign bit of both inputs before applying .abs operation.
If the result of max is NaN then the .xorsign and .abs modifiers will be ignored.
Semantics
if (.xorsign) {
    xorsign = getSignBit(a) ^ getSignBit(b);
    if (.abs) {
        a = |a|;
        b = |b|;
    }
}
if (isNaN(a) && isNaN(b))
    d = NaN;
else if (.NaN && (isNaN(a) || isNaN(b)))
    d = NaN;
else if (isNaN(a))
    d = b;
else if (isNaN(b))
    d = a;
else
    d = (a > b) ? a : b;
if (.xorsign && !isNaN(d)) {
    setSignBit(d, xorsign);
}
Notes
 sm_20+

By default, subnormal numbers are supported.
max.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

max.f64 supports subnormal numbers.
max.f32 flushes subnormal inputs and results to sign-preserving zero.
If values of both inputs are 0.0, then +0.0 > -0.0.
PTX ISA Notes
Introduced in PTX ISA version 1.0.
max.NaN introduced in PTX ISA version 7.0.
max.xorsign.abs introduced in PTX ISA version 7.2.
Target ISA Notes
max.f32 supported on all target architectures.
max.f64 requires sm_13 or higher.
max.NaN requires sm_80 or higher.
max.xorsign.abs requires sm_86 or higher.
Examples
max.ftz.f32   f0,f1,f2;
max.f64       a,b,c;
// fp32 max with .NaN
max.NaN.f32   f0,f1,f2;
// fp32 max with .xorsign.abs
max.xorsign.abs.f32 Rd, Ra, Rb;
9.7.3.13. Floating Point Instructions: rcp
rcp
Take the reciprocal of a value.
Syntax
rcp.approx{.ftz}.f32  d, a;  // fast, approximate reciprocal
rcp.rnd{.ftz}.f32     d, a;  // IEEE 754 compliant rounding
rcp.rnd.f64           d, a;  // IEEE 754 compliant rounding

.rnd = { .rn, .rz, .rm, .rp };
Description
Compute 1/a, store result in d.
Semantics
d = 1 / a;
Notes
Fast, approximate singleprecision reciprocal:
rcp.approx.f32 implements a fast approximation to reciprocal. The maximum absolute error is 2^{-23.0} over the range 1.0–2.0.
Input      | Result
-----------|-------
-Inf       | -0.0
-subnormal | -Inf
-0.0       | -Inf
+0.0       | +Inf
+subnormal | +Inf
+Inf       | +0.0
NaN        | NaN
Reciprocal with IEEE 754 compliant rounding:
Rounding modifiers (no default):
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
Subnormal numbers:
 sm_20+

By default, subnormal numbers are supported.
rcp.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

rcp.f64 supports subnormal numbers.
rcp.f32 flushes subnormal inputs and results to sign-preserving zero.
PTX ISA Notes
rcp.f32 and rcp.f64 introduced in PTX ISA version 1.0. rcp.rn.f64 and explicit modifiers .approx and .ftz were introduced in PTX ISA version 1.4. General rounding modifiers were added in PTX ISA version 2.0.
For PTX ISA version 1.4 and later, one of .approx or .rnd is required.
For PTX ISA versions 1.0 through 1.3, rcp.f32 defaults to rcp.approx.ftz.f32, and rcp.f64 defaults to rcp.rn.f64.
Target ISA Notes
rcp.approx.f32 supported on all target architectures.
rcp.rnd.f32 requires sm_20 or higher.
rcp.rn.f64 requires sm_13 or higher, or .target map_f64_to_f32.
rcp.{rz,rm,rp}.f64 requires sm_20 or higher.
Examples
rcp.approx.ftz.f32  ri,r;
rcp.rn.ftz.f32      xi,x;
rcp.rn.f64          xi,x;
9.7.3.14. Floating Point Instructions: rcp.approx.ftz.f64
rcp.approx.ftz.f64
Compute a fast, gross approximation to the reciprocal of a value.
Syntax
rcp.approx.ftz.f64 d, a;
Description
 extract the most-significant 32 bits of .f64 operand a in 1.11.20 IEEE floating-point format (i.e., ignore the least-significant 32 bits of a),
 compute an approximate .f64 reciprocal of this value using the most-significant 20 bits of the mantissa of operand a,
 place the resulting 32 bits in 1.11.20 IEEE floating-point format in the most-significant 32 bits of destination d, and
 zero the least-significant 32 mantissa bits of .f64 destination d.
Semantics
tmp = a[63:32];     // upper word of a, 1.11.20 format
d[63:32] = 1.0 / tmp;
d[31:0]  = 0x00000000;
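The upper-word manipulation in these semantics can be sketched at the bit level. The Python model below (helper name illustrative) drops the low 32 bits of the input, takes the reciprocal, and zeros the low 32 result bits; the exact division stands in for the hardware's 20-bit approximation, and the corner cases from the table below (infinities, zeros, NaN) are not handled — finite nonzero inputs only.

```python
import struct

def rcp_upper_word_model(a):
    # Bit-level sketch of rcp.approx.ftz.f64 semantics: interpret only the
    # upper 32 bits of a (a 1.11.20 float), take its reciprocal, and zero
    # the low 32 bits of the result. Finite nonzero inputs only; the exact
    # 1.0/tmp stands in for the hardware's 20-bit approximation.
    bits = struct.unpack('<Q', struct.pack('<d', a))[0]
    tmp = struct.unpack('<d', struct.pack('<Q', bits & 0xFFFFFFFF00000000))[0]
    rbits = struct.unpack('<Q', struct.pack('<d', 1.0 / tmp))[0]
    return struct.unpack('<d', struct.pack('<Q', rbits & 0xFFFFFFFF00000000))[0]
```

Powers of two come out exact (their reciprocals need no low fraction bits); other inputs show the truncation, e.g. the result for 3.0 is within about 2^{-20} relative error of 1/3.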
Notes
rcp.approx.ftz.f64 implements a fast, gross approximation to reciprocal.
Input a[63:32] | Result d[63:32]
---------------|----------------
-Inf           | -0.0
-subnormal     | -Inf
-0.0           | -Inf
+0.0           | +Inf
+subnormal     | +Inf
+Inf           | +0.0
NaN            | NaN
Input NaNs map to a canonical NaN with encoding 0x7fffffff00000000.
Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
rcp.approx.ftz.f64 introduced in PTX ISA version 2.1.
Target ISA Notes
rcp.approx.ftz.f64 requires sm_20 or higher.
Examples
rcp.approx.ftz.f64  xi,x;
9.7.3.15. Floating Point Instructions: sqrt
sqrt
Take the square root of a value.
Syntax
sqrt.approx{.ftz}.f32  d, a;  // fast, approximate square root
sqrt.rnd{.ftz}.f32     d, a;  // IEEE 754 compliant rounding
sqrt.rnd.f64           d, a;  // IEEE 754 compliant rounding

.rnd = { .rn, .rz, .rm, .rp };
Description
Compute sqrt(a) and store the result in d.
Semantics
d = sqrt(a);
Notes
sqrt.approx.f32 implements a fast approximation to square root.
Input      | Result
-----------|-------
-Inf       | NaN
-normal    | NaN
-subnormal | -0.0
-0.0       | -0.0
+0.0       | +0.0
+subnormal | +0.0
+Inf       | +Inf
NaN        | NaN
Square root with IEEE 754 compliant rounding:
 .rn
 mantissa LSB rounds to nearest even
 .rz
 mantissa LSB rounds towards zero
 .rm
 mantissa LSB rounds towards negative infinity
 .rp
 mantissa LSB rounds towards positive infinity
 sm_20+

By default, subnormal numbers are supported.
sqrt.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

sqrt.f64 supports subnormal numbers.
sqrt.f32 flushes subnormal inputs and results to sign-preserving zero.
PTX ISA Notes
sqrt.f32 and sqrt.f64 introduced in PTX ISA version 1.0. sqrt.rn.f64 and explicit modifiers .approx and .ftz were introduced in PTX ISA version 1.4. General rounding modifiers were added in PTX ISA version 2.0.
For PTX ISA version 1.4 and later, one of .approx or .rnd is required.
For PTX ISA versions 1.0 through 1.3, sqrt.f32 defaults to sqrt.approx.ftz.f32, and sqrt.f64 defaults to sqrt.rn.f64.
Target ISA Notes
sqrt.approx.f32 supported on all target architectures.
sqrt.rnd.f32 requires sm_20 or higher.
sqrt.rn.f64 requires sm_13 or higher, or .target map_f64_to_f32.
sqrt.{rz,rm,rp}.f64 requires sm_20 or higher.
Examples
sqrt.approx.ftz.f32  r,x;
sqrt.rn.ftz.f32      r,x;
sqrt.rn.f64          r,x;
9.7.3.16. Floating Point Instructions: rsqrt
rsqrt
Take the reciprocal of the square root of a value.
Syntax
rsqrt.approx{.ftz}.f32  d, a;
rsqrt.approx.f64        d, a;
Description
Compute 1/sqrt(a) and store the result in d.
Semantics
d = 1/sqrt(a);
Notes
rsqrt.approx implements an approximation to the reciprocal square root.
Input      | Result
-----------|-------
-Inf       | NaN
-normal    | NaN
-subnormal | -Inf
-0.0       | -Inf
+0.0       | +Inf
+subnormal | +Inf
+Inf       | +0.0
NaN        | NaN
The maximum absolute error for rsqrt.f32 is 2^{-22.4} over the range 1.0–4.0.
 sm_20+

By default, subnormal numbers are supported.
rsqrt.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x

rsqrt.f64 supports subnormal numbers.
rsqrt.f32 flushes subnormal inputs and results to sign-preserving zero.
Note that rsqrt.approx.f64 is emulated in software and is relatively slow.
PTX ISA Notes
rsqrt.f32 and rsqrt.f64 were introduced in PTX ISA version 1.0. Explicit modifiers .approx and .ftz were introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, the .approx modifier is required.
For PTX ISA versions 1.0 through 1.3, rsqrt.f32 defaults to rsqrt.approx.ftz.f32, and rsqrt.f64 defaults to rsqrt.approx.f64.
Target ISA Notes
rsqrt.f32 supported on all target architectures.
rsqrt.f64 requires sm_13 or higher.
Examples
rsqrt.approx.ftz.f32  isr, x;
rsqrt.approx.f64      ISR, X;
9.7.3.17. Floating Point Instructions: rsqrt.approx.ftz.f64
rsqrt.approx.ftz.f64
Compute an approximation of the square root reciprocal of a value.
Syntax
rsqrt.approx.ftz.f64 d, a;
Description
Compute a double-precision (.f64) approximation of the square root reciprocal of a value. The least-significant 32 bits of the double-precision (.f64) destination d are all zeros.
Semantics
tmp = a[63:32];     // upper word of a, 1.11.20 format
d[63:32] = 1.0 / sqrt(tmp);
d[31:0]  = 0x00000000;
Notes
rsqrt.approx.ftz.f64 implements a fast approximation of the square root reciprocal of a value.
Input      | Result
-----------|-------
-Inf       | NaN
-subnormal | -Inf
-0.0       | -Inf
+0.0       | +Inf
+subnormal | +Inf
+Inf       | +0.0
NaN        | NaN
Input NaNs map to a canonical NaN with encoding 0x7fffffff00000000.
Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
rsqrt.approx.ftz.f64 introduced in PTX ISA version 4.0.
Target ISA Notes
rsqrt.approx.ftz.f64 requires sm_20 or higher.
Examples
rsqrt.approx.ftz.f64 xi,x;
9.7.3.18. Floating Point Instructions: sin
sin
Find the sine of a value.
Syntax
sin.approx{.ftz}.f32 d, a;
Description
Find the sine of the angle a (in radians).
Semantics
d = sin(a);
Notes
sin.approx.f32 implements a fast approximation to sine.
Input      | Result
-----------|-------
-Inf       | NaN
-subnormal | -0.0
-0.0       | -0.0
+0.0       | +0.0
+subnormal | +0.0
+Inf       | NaN
NaN        | NaN
The maximum absolute error is 2^{-20.9} in quadrant 00.
 sm_20+

By default, subnormal numbers are supported.
sin.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x
 Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
sin.f32 introduced in PTX ISA version 1.0. Explicit modifiers .approx and .ftz introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, the .approx modifier is required.
For PTX ISA versions 1.0 through 1.3, sin.f32 defaults to sin.approx.ftz.f32.
Target ISA Notes
Supported on all target architectures.
Examples
sin.approx.ftz.f32 sa, a;
9.7.3.19. Floating Point Instructions: cos
cos
Find the cosine of a value.
Syntax
cos.approx{.ftz}.f32 d, a;
Description
Find the cosine of the angle a (in radians).
Semantics
d = cos(a);
Notes
cos.approx.f32 implements a fast approximation to cosine.
Input      | Result
-----------|-------
-Inf       | NaN
-subnormal | +1.0
-0.0       | +1.0
+0.0       | +1.0
+subnormal | +1.0
+Inf       | NaN
NaN        | NaN
The maximum absolute error is 2^{-20.9} in quadrant 00.
 sm_20+

By default, subnormal numbers are supported.
cos.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x
 Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
cos.f32 introduced in PTX ISA version 1.0. Explicit modifiers .approx and .ftz introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, the .approx modifier is required.
For PTX ISA versions 1.0 through 1.3, cos.f32 defaults to cos.approx.ftz.f32.
Target ISA Notes
Supported on all target architectures.
Examples
cos.approx.ftz.f32 ca, a;
9.7.3.20. Floating Point Instructions: lg2
lg2
Find the base-2 logarithm of a value.
Syntax
lg2.approx{.ftz}.f32 d, a;
Description
Determine the log_{2} of a.
Semantics
d = log(a) / log(2);
Notes
lg2.approx.f32 implements a fast approximation to log_{2}(a).
Input      | Result
-----------|-------
-Inf       | NaN
-subnormal | -Inf
-0.0       | -Inf
+0.0       | -Inf
+subnormal | -Inf
+Inf       | +Inf
NaN        | NaN
The maximum absolute error is 2^{-22.6} for mantissa.
 sm_20+

By default, subnormal numbers are supported.
lg2.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x
 Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
lg2.f32 introduced in PTX ISA version 1.0. Explicit modifiers .approx and .ftz introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, the .approx modifier is required.
For PTX ISA versions 1.0 through 1.3, lg2.f32 defaults to lg2.approx.ftz.f32.
Target ISA Notes
Supported on all target architectures.
Examples
lg2.approx.ftz.f32 la, a;
9.7.3.21. Floating Point Instructions: ex2
ex2
Find the base-2 exponential of a value.
Syntax
ex2.approx{.ftz}.f32 d, a;
Description
Raise 2 to the power a.
Semantics
d = 2 ^ a;
Notes
ex2.approx.f32 implements a fast approximation to 2^{a}.
Input      | Result
-----------|-------
-Inf       | +0.0
-subnormal | +1.0
-0.0       | +1.0
+0.0       | +1.0
+subnormal | +1.0
+Inf       | +Inf
NaN        | NaN
The maximum absolute error is 2^{-22.5} for fraction in the primary range.
 sm_20+

By default, subnormal numbers are supported.
ex2.ftz.f32 flushes subnormal inputs and results to sign-preserving zero.
 sm_1x
 Subnormal inputs and results are flushed to sign-preserving zero.
PTX ISA Notes
ex2.f32 introduced in PTX ISA version 1.0. Explicit modifiers .approx and .ftz introduced in PTX ISA version 1.4.
For PTX ISA version 1.4 and later, the .approx modifier is required.
For PTX ISA versions 1.0 through 1.3, ex2.f32 defaults to ex2.approx.ftz.f32.
Target ISA Notes
Supported on all target architectures.
Examples
ex2.approx.ftz.f32 xa, a;
9.7.3.22. Floating Point Instructions: tanh
tanh
Find the hyperbolic tangent of a value (in radians).
Syntax
tanh.approx.f32 d, a;
Description
Take hyperbolic tangent value of a.
The operands d and a are of type .f32.
Semantics
d = tanh(a);
Notes
tanh.approx.f32 implements a fast approximation to FP32 hyperbolic tangent.
Results of tanh for various corner-case inputs are as follows:
Input      | Result
-----------|--------------
-Inf       | -1.0
-subnormal | Same as input
-0.0       | -0.0
+0.0       | +0.0
+subnormal | Same as input
+Inf       | +1.0
NaN        | NaN
Subnormal numbers are supported.
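As a quick sanity check, the corner cases in the table above also hold for IEEE-754 double-precision tanh, so they can be reproduced with Python's math module (the hardware .f32 approximation agrees on these exact cases, though not on general inputs):

```python
import math

# Corner cases from the tanh table, checked against IEEE-754 double tanh.
assert math.tanh(-math.inf) == -1.0
assert math.tanh(math.inf) == +1.0
# Signed zeros pass through: tanh(-0.0) is -0.0, tanh(+0.0) is +0.0.
assert math.tanh(0.0) == 0.0
assert math.copysign(1.0, math.tanh(-0.0)) == -1.0
assert math.isnan(math.tanh(math.nan))
tiny = 5e-324                     # a double subnormal
assert math.tanh(tiny) == tiny    # subnormal inputs return unchanged
```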
PTX ISA Notes
Introduced in PTX ISA version 7.0.
Target ISA Notes
Requires sm_75 or higher.
Examples
tanh.approx.f32 sa, a;
9.7.4. Half Precision Floating Point Instructions
Half-precision floating-point instructions operate on .f16 and .f16x2 register operands. The half-precision floating-point instructions are:
 add
 sub
 mul
 fma
 neg
 abs
 min
 max
 tanh
 ex2
Half-precision add, sub, mul, and fma support saturation of results to the range [0.0, 1.0], with NaNs being flushed to positive zero. Half-precision instructions return an unspecified NaN.
9.7.4.1. Half Precision Floating Point Instructions: add
add
Add two values.
Syntax
add{.rnd}{.ftz}{.sat}.f16   d, a, b;
add{.rnd}{.ftz}{.sat}.f16x2 d, a, b;

.rnd = { .rn };
Description
Performs addition and writes the resulting value into a destination register.
For .f16x2 instruction type, forms input vectors by extracting half-word values from source operands. Half-word operands are then added in parallel to produce .f16x2 result in destination.
For .f16 instruction type, operands d, a and b have .f16 or .b16 type. For .f16x2 instruction type, operands d, a and b have .b32 type.
Semantics
if (type == f16) {
    d = a + b;
} else if (type == f16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    fB[0] = b[0:15];
    fB[1] = b[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = fA[i] + fB[i];
    }
}
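The lane-wise semantics above can be modeled with Python's struct module, whose 'e' format performs binary16 conversions with round-to-nearest-even (matching the default .rn mode). The helper name is illustrative:

```python
import struct

def add_f16x2_model(a, b):
    # Model of add.f16x2: unpack each 32-bit operand as two packed
    # binary16 lanes, add lane-wise in float, and repack with binary16
    # round-to-nearest-even conversion.
    fa = struct.unpack('<2e', struct.pack('<I', a))   # (a[0:15], a[16:31])
    fb = struct.unpack('<2e', struct.pack('<I', b))
    return struct.unpack('<I', struct.pack('<2e', fa[0] + fb[0], fa[1] + fb[1]))[0]

x = 0x40003C00                      # packs (1.0, 2.0) as f16x2
print(hex(add_f16x2_model(x, x)))   # lanes become (2.0, 4.0) -> 0x44004000
```

The same unpack/compute/repack pattern applies to the sub, mul, and fma .f16x2 semantics in the following sections.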
Notes
 .rn
 mantissa LSB rounds to nearest even
The default value of rounding modifier is .rn. Note that an add instruction with an explicit rounding modifier is treated conservatively by the code optimizer. An add instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/add sequences with no rounding modifiers may be optimized to use fused multiply-add instructions on the target device.
 Subnormal numbers:
 By default, subnormal numbers are supported.
 add.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
 Saturation modifier:
 add.sat.{f16, f16x2} clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 4.2.
Target ISA Notes
Requires sm_53 or higher.
Examples
// scalar f16 additions
add.f16        d0, a0, b0;
add.rn.f16     d1, a1, b1;

// SIMD f16 addition
cvt.rn.f16.f32 h0, f0;
cvt.rn.f16.f32 h1, f1;
cvt.rn.f16.f32 h2, f2;
cvt.rn.f16.f32 h3, f3;

mov.b32  p1, {h0, h1};   // pack two f16 to 32-bit f16x2
mov.b32  p2, {h2, h3};   // pack two f16 to 32-bit f16x2
add.f16x2  p3, p1, p2;   // SIMD f16x2 addition

// SIMD fp16 addition
ld.global.b32   f0, [addr];     // load 32 bit which hold packed f16x2
ld.global.b32   f1, [addr + 4]; // load 32 bit which hold packed f16x2
add.f16x2       f2, f0, f1;     // SIMD f16x2 addition
9.7.4.2. Half Precision Floating Point Instructions: sub
sub
Subtract two values.
Syntax
sub{.rnd}{.ftz}{.sat}.f16   d, a, b;
sub{.rnd}{.ftz}{.sat}.f16x2 d, a, b;

.rnd = { .rn };
Description
Performs subtraction and writes the resulting value into a destination register.
For .f16x2 instruction type, forms input vectors by extracting half-word values from source operands. Half-word operands are then subtracted in parallel to produce .f16x2 result in destination.
For .f16 instruction type, operands d, a and b have .f16 or .b16 type. For .f16x2 instruction type, operands d, a and b have .b32 type.
Semantics
if (type == f16) {
    d = a - b;
} else if (type == f16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    fB[0] = b[0:15];
    fB[1] = b[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = fA[i] - fB[i];
    }
}
Notes
 .rn
 mantissa LSB rounds to nearest even
The default value of rounding modifier is .rn. Note that a sub instruction with an explicit rounding modifier is treated conservatively by the code optimizer. A sub instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/sub sequences with no rounding modifiers may be optimized to use fused multiply-add instructions on the target device.
 Subnormal numbers:
 By default, subnormal numbers are supported.
 sub.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
 Saturation modifier:
 sub.sat.{f16, f16x2} clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 4.2.
Target ISA Notes
Requires sm_53 or higher.
Examples
// scalar f16 subtractions
sub.f16        d0, a0, b0;
sub.rn.f16     d1, a1, b1;

// SIMD f16 subtraction
cvt.rn.f16.f32 h0, f0;
cvt.rn.f16.f32 h1, f1;
cvt.rn.f16.f32 h2, f2;
cvt.rn.f16.f32 h3, f3;

mov.b32  p1, {h0, h1};   // pack two f16 to 32-bit f16x2
mov.b32  p2, {h2, h3};   // pack two f16 to 32-bit f16x2
sub.f16x2  p3, p1, p2;   // SIMD f16x2 subtraction

// SIMD fp16 subtraction
ld.global.b32   f0, [addr];     // load 32 bit which hold packed f16x2
ld.global.b32   f1, [addr + 4]; // load 32 bit which hold packed f16x2
sub.f16x2       f2, f0, f1;     // SIMD f16x2 subtraction
9.7.4.3. Half Precision Floating Point Instructions: mul
mul
Multiply two values.
Syntax
mul{.rnd}{.ftz}{.sat}.f16   d, a, b;
mul{.rnd}{.ftz}{.sat}.f16x2 d, a, b;

.rnd = { .rn };
Description
Performs multiplication and writes the resulting value into a destination register.
For .f16x2 instruction type, forms input vectors by extracting half-word values from source operands. Half-word operands are then multiplied in parallel to produce .f16x2 result in destination.
For .f16 instruction type, operands d, a and b have .f16 or .b16 type. For .f16x2 instruction type, operands d, a and b have .b32 type.
Semantics
if (type == f16) {
    d = a * b;
} else if (type == f16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    fB[0] = b[0:15];
    fB[1] = b[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = fA[i] * fB[i];
    }
}
Notes
 .rn
 mantissa LSB rounds to nearest even
The default value of rounding modifier is .rn. Note that a mul instruction with an explicit rounding modifier is treated conservatively by the code optimizer. A mul instruction with no rounding modifier defaults to round-to-nearest-even and may be optimized aggressively by the code optimizer. In particular, mul/add and mul/sub sequences with no rounding modifiers may be optimized to use fused multiply-add instructions on the target device.
 Subnormal numbers:
 By default, subnormal numbers are supported.
 mul.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
 Saturation modifier:
 mul.sat.{f16, f16x2} clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
PTX ISA Notes
Introduced in PTX ISA version 4.2.
Target ISA Notes
Requires sm_53 or higher.
Examples
// scalar f16 multiplications
mul.f16        d0, a0, b0;
mul.rn.f16     d1, a1, b1;

// SIMD f16 multiplication
cvt.rn.f16.f32 h0, f0;
cvt.rn.f16.f32 h1, f1;
cvt.rn.f16.f32 h2, f2;
cvt.rn.f16.f32 h3, f3;

mov.b32  p1, {h0, h1};   // pack two f16 to 32-bit f16x2
mov.b32  p2, {h2, h3};   // pack two f16 to 32-bit f16x2
mul.f16x2  p3, p1, p2;   // SIMD f16x2 multiplication

// SIMD fp16 multiplication
ld.global.b32   f0, [addr];     // load 32 bit which hold packed f16x2
ld.global.b32   f1, [addr + 4]; // load 32 bit which hold packed f16x2
mul.f16x2       f2, f0, f1;     // SIMD f16x2 multiplication
9.7.4.4. Half Precision Floating Point Instructions: fma
fma
Fused multiply-add.
Syntax
fma.rnd{.ftz}{.sat}.f16 d, a, b, c;
fma.rnd{.ftz}{.sat}.f16x2 d, a, b, c;
fma.rnd{.ftz}.relu.f16 d, a, b, c;
fma.rnd{.ftz}.relu.f16x2 d, a, b, c;
fma.rnd{.relu}.bf16 d, a, b, c;
fma.rnd{.relu}.bf16x2 d, a, b, c;
.rnd = { .rn };
Description
Performs a fused multiply-add with no loss of precision in the intermediate product and addition.
For .f16x2 and .bf16x2 instruction type, forms input vectors by extracting half-word values from source operands. Half-word operands are then operated in parallel to produce .f16x2 or .bf16x2 result in destination.
For .f16 instruction type, operands d, a, b and c have .f16 or .b16 type. For .f16x2 instruction type, operands d, a, b and c have .b32 type. For .bf16 instruction type, operands d, a, b and c have .b16 type. For .bf16x2 instruction type, operands d, a, b and c have .b32 type.
Semantics
if (type == f16 || type == bf16) {
    d = a * b + c;
} else if (type == f16x2 || type == bf16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    fB[0] = b[0:15];
    fB[1] = b[16:31];
    fC[0] = c[0:15];
    fC[1] = c[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = fA[i] * fB[i] + fC[i];
    }
}
Notes
 .rn
 mantissa LSB rounds to nearest even
 Subnormal numbers:
 By default, subnormal numbers are supported.
 fma.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
 Saturation modifier:
 fma.sat.{f16, f16x2} clamps the result to [0.0, 1.0]. NaN results are flushed to +0.0f.
 fma.relu.{f16, f16x2, bf16, bf16x2} clamps the result to 0 if negative. NaN result is converted to canonical NaN.
PTX ISA Notes
Introduced in PTX ISA version 4.2.
fma.relu.{f16, f16x2} and fma{.relu}.{bf16, bf16x2} introduced in PTX ISA version 7.0.
Target ISA Notes
Requires sm_53 or higher.
fma.relu.{f16, f16x2} and fma{.relu}.{bf16, bf16x2} require sm_80 or higher.
Examples
// scalar f16 fused multiply-add
fma.rn.f16         d0, a0, b0, c0;
fma.rn.f16         d1, a1, b1, c1;
fma.rn.relu.f16    d1, a1, b1, c1;

// scalar bf16 fused multiply-add
fma.rn.bf16        d1, a1, b1, c1;
fma.rn.relu.bf16   d1, a1, b1, c1;

// SIMD f16 fused multiply-add
cvt.rn.f16.f32 h0, f0;
cvt.rn.f16.f32 h1, f1;
cvt.rn.f16.f32 h2, f2;
cvt.rn.f16.f32 h3, f3;

mov.b32  p1, {h0, h1};   // pack two f16 to 32-bit f16x2
mov.b32  p2, {h2, h3};   // pack two f16 to 32-bit f16x2
fma.rn.f16x2       p3, p1, p2, p2;  // SIMD f16x2 fused multiply-add
fma.rn.relu.f16x2  p3, p1, p2, p2;  // SIMD f16x2 fused multiply-add with relu saturation mode

// SIMD fp16 fused multiply-add
ld.global.b32   f0, [addr];     // load 32 bit which hold packed f16x2
ld.global.b32   f1, [addr + 4]; // load 32 bit which hold packed f16x2
fma.rn.f16x2    f2, f0, f1, f1; // SIMD f16x2 fused multiply-add

// SIMD bf16 fused multiply-add
fma.rn.bf16x2       f2, f0, f1, f1; // SIMD bf16x2 fused multiply-add
fma.rn.relu.bf16x2  f2, f0, f1, f1; // SIMD bf16x2 fused multiply-add with relu saturation mode
9.7.4.5. Half Precision Floating Point Instructions: neg
neg
Arithmetic negate.
Syntax
neg{.ftz}.f16 d, a;
neg{.ftz}.f16x2 d, a;
neg.bf16 d, a;
neg.bf16x2 d, a;
Description
Negate the sign of a and store the result in d.
For .f16x2 and .bf16x2 instruction type, forms input vector by extracting half-word values from the source operand. Half-word operands are then negated in parallel to produce .f16x2 or .bf16x2 result in destination.
For .f16 instruction type, operands d and a have .f16 or .b16 type. For .f16x2 instruction type, operands d and a have .b32 type. For .bf16 instruction type, operands d and a have .b16 type. For bf16x2 instruction type, operands d and a have .b32 type.
Semantics
if (type == f16 || type == bf16) {
    d = -a;
} else if (type == f16x2 || type == bf16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = -fA[i];
    }
}
Notes
 Subnormal numbers:
 By default, subnormal numbers are supported.
 neg.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
NaN inputs yield an unspecified NaN. Future implementations may comply with the IEEE 754 standard by preserving payload and modifying only the sign bit.
PTX ISA Notes
Introduced in PTX ISA version 6.0.
neg.bf16 and neg.bf16x2 introduced in PTX ISA 7.0.
Target ISA Notes
Requires sm_53 or higher.
neg.bf16 and neg.bf16x2 require architecture sm_80 or higher.
Examples
neg.ftz.f16 x,f0;
neg.bf16 x,b0;
neg.bf16x2 x1,b1;
9.7.4.6. Half Precision Floating Point Instructions: abs
abs
Absolute value
Syntax
abs{.ftz}.f16 d, a;
abs{.ftz}.f16x2 d, a;
abs.bf16 d, a;
abs.bf16x2 d, a;
Description
Take absolute value of a and store the result in d.
For .f16x2 and .bf16x2 instruction type, forms input vector by extracting half-word values from the source operand. Absolute values of half-word operands are then computed in parallel to produce .f16x2 or .bf16x2 result in destination.
For .f16 instruction type, operands d and a have .f16 or .b16 type. For .f16x2 instruction type, operands d and a have .f16x2 or .b32 type. For .bf16 instruction type, operands d and a have .b16 type. For .bf16x2 instruction type, operands d and a have .b32 type.
Semantics
if (type == f16 || type == bf16) {
    d = |a|;
} else if (type == f16x2 || type == bf16x2) {
    fA[0] = a[0:15];
    fA[1] = a[16:31];
    for (i = 0; i < 2; i++) {
        d[i] = |fA[i]|;
    }
}
Notes
 Subnormal numbers:
 By default, subnormal numbers are supported.
 abs.ftz.{f16, f16x2} flushes subnormal inputs and results to sign-preserving zero.
NaN inputs yield an unspecified NaN. Future implementations may comply with the IEEE 754 standard by preserving payload and modifying only the sign bit.
PTX ISA Notes
Introduced in PTX ISA version 6.5.
abs.bf16 and abs.bf16x2 introduced in PTX ISA 7.0.
Target ISA Notes
Requires sm_53 or higher.
abs.bf16 and abs.bf16x2 require architecture sm_80 or higher.
Examples
abs.ftz.f16 x,f0;
abs.bf16 x,b0;
abs.bf16x2 x1,b1;