fft

nvmath.distributed.fft.fft(
    operand,
    distribution: Slab | Sequence[Box],
    sync_symmetric_memory: bool = True,
    options: FFTOptions | None = None,
    stream: AnyStream | None = None,
)
Perform an N-D complex-to-complex (C2C) distributed FFT on the provided complex operand.
Parameters:
    operand – A tensor (ndarray-like object). The currently supported types are numpy.ndarray, cupy.ndarray, and torch.Tensor.

        Important: GPU operands must be on the symmetric heap (for example, allocated with nvmath.distributed.allocate_symmetric_memory()).

    distribution – Specifies the distribution of the input and output operands across processes, which can be either (i) a Slab distribution (see Slab), or (ii) a custom box distribution. With a Slab distribution, this indicates the distribution of the input operand (the output operand will use the complementary Slab distribution). With a box distribution, this indicates the input and output boxes.

    sync_symmetric_memory – Indicates whether to issue a symmetric memory synchronization operation on the execution stream before the FFT. Before the FFT starts executing, the input operand must be ready on all processes. A symmetric memory synchronization ensures completion and visibility by all processes of previously issued local stores to symmetric memory. Advanced users who choose to manage the synchronization on their own using the appropriate NVSHMEM API, or who know that the GPUs are already synchronized on the source operand, can set this to False.

    options – Specify options for the FFT as an FFTOptions object. Alternatively, a dict containing the parameters for the FFTOptions constructor can also be provided. If not specified, the value will be set to the default-constructed FFTOptions object.

    stream – Provide the CUDA stream to use for executing the operation. Acceptable inputs include cudaStream_t (as Python int), cupy.cuda.Stream, and torch.cuda.Stream. If a stream is not provided, the current stream from the operand package will be used.
Returns:
    A transformed operand that retains the same data type as the input. The resulting shape will depend on the choice of distribution and the reshape option. The operand remains on the same device and uses the same package as the input operand.
Examples
>>> import cupy as cp
>>> import nvmath.distributed
Get the MPI communicator used to initialize nvmath.distributed (for information on initializing nvmath.distributed, refer to the documentation or to the FFT examples in nvmath/examples/distributed/fft):
>>> comm = nvmath.distributed.get_context().communicator
>>> nranks = comm.Get_size()
Create a 3-D complex128 ndarray on GPU symmetric memory, distributed according to the Slab distribution on the Y axis (the global shape is (256, 256, 256)):
>>> shape = 256, 256 // nranks, 256
>>> dtype = cp.complex128
>>> a = nvmath.distributed.allocate_symmetric_memory(shape, cp, dtype=dtype)
>>> a[:] = cp.random.rand(*shape, dtype=cp.float64) + 1j * cp.random.rand(
...     *shape, dtype=cp.float64
... )
Perform a 3-D C2C FFT using fft(). The result r is also a CuPy complex128 ndarray:

>>> r = nvmath.distributed.fft.fft(a, distribution=nvmath.distributed.fft.Slab.Y)
See FFTOptions for the complete list of available options.

The package current stream is used by default, but a stream can be explicitly provided to the FFT operation. This can be done if the FFT operand is computed on a different stream, for example:
>>> s = cp.cuda.Stream()
>>> with s:
...     a = nvmath.distributed.allocate_symmetric_memory(shape, cp, dtype=dtype)
...     a[:] = cp.random.rand(*shape) + 1j * cp.random.rand(*shape)
>>> r = nvmath.distributed.fft.fft(a, distribution=nvmath.distributed.fft.Slab.Y, stream=s)
The operation above runs on stream s and is ordered with respect to the input computation.
Create a NumPy ndarray on the CPU:

>>> import numpy as np
>>> b = np.random.rand(*shape) + 1j * np.random.rand(*shape)
Provide the NumPy ndarray to fft(), with the result also being a NumPy ndarray:

>>> r = nvmath.distributed.fft.fft(b, nvmath.distributed.fft.Slab.Y)
Notes
This function is a convenience wrapper around FFT and is specifically meant for single use. The same computation can be performed with the stateful API by using the default direction argument in FFT.execute().
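For repeated transforms on operands with the same layout, the stateful API lets the plan be reused. The following is an illustrative sketch, assuming that FFT follows the usual nvmath stateful pattern of a context manager with plan() and execute() methods:

>>> # Sketch of the stateful API; method names assumed from the common nvmath
>>> # pattern (plan once, then execute with the default forward direction).
>>> with nvmath.distributed.fft.FFT(a, distribution=nvmath.distributed.fft.Slab.Y) as f:
...     f.plan()
...     r = f.execute()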
Further examples can be found in the nvmath/examples/distributed/fft directory.