API Reference

VideoSuperRes

class nvvfx.VideoSuperRes(quality: QualityLevel = QualityLevel.HIGH, device: int = 0)

Bases: Effect

AI-powered video super resolution effect.

Framework-agnostic: accepts any DLPack-compatible CUDA array and returns a DLPack capsule that can be consumed by any framework.

Set output dimensions via properties, then call load() before run(). Input dimensions are inferred automatically from the input array. Quality level and dimensions can be changed dynamically between calls.

Parameters:
  • quality – Quality level for enhancement (default: HIGH)

  • device – CUDA device index (default: 0)

Example with PyTorch:
>>> import torch
>>> from nvvfx import VideoSuperRes
>>> input_frame = torch.rand(3, 540, 960, device="cuda")
>>> with VideoSuperRes() as sr:
...     sr.output_width = 1920
...     sr.output_height = 1080
...     sr.load()
...     output = torch.from_dlpack(sr.run(input_frame).image).clone()
>>> output.shape
torch.Size([3, 1080, 1920])

property output_width: int | None

Output width in pixels.

property output_height: int | None

Output height in pixels.

property quality: QualityLevel

Quality level for enhancement.

run(input_array: Any, *, non_blocking: bool = False, stream_ptr: int = 0) VideoSuperResOutput

Apply super resolution to input array.

Parameters:
  • input_array – Input image array supporting the DLPack protocol. Shape: (3, H, W), RGB channels first; dtype: float32; device: CUDA; value range: [0, 1].

  • non_blocking – If True, return immediately (async). Default False.

  • stream_ptr – CUDA stream pointer as int (0 for the default stream). For PyTorch use stream.cuda_stream; for CuPy use stream.ptr.

Returns:

VideoSuperResOutput containing the upscaled image. Convert the image capsule with:

  • PyTorch: torch.from_dlpack(result.image).clone()

  • CuPy: cupy.from_dlpack(result.image).copy()

  • JAX: jax.dlpack.from_dlpack(result.image)

IMPORTANT: The returned capsule references C++ memory. You must copy/clone it immediately, before the next run() call or close().

Return type:

VideoSuperResOutput
Raises:
  • NvVFXError – If load() was not called, output dimensions not set, or inference fails.

  • TypeError – If input doesn’t support DLPack.

  • ValueError – If input is not on CUDA device.
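The TypeError and ValueError conditions above can be checked up front before calling run(). A minimal pre-flight sketch based only on the documented constraints — the check_input helper and its duck-typed shape/dtype/device attributes are illustrative, not part of the nvvfx API:

```python
def check_input(arr) -> None:
    """Validate the documented run() input constraints: DLPack support,
    shape (3, H, W), float32 dtype, CUDA device."""
    if not hasattr(arr, "__dlpack__"):
        raise TypeError("input does not support the DLPack protocol")
    if "cuda" not in str(getattr(arr, "device", "")).lower():
        raise ValueError("input must be on a CUDA device")
    shape = tuple(arr.shape)
    if len(shape) != 3 or shape[0] != 3:
        raise ValueError(f"expected shape (3, H, W), got {shape}")
    if "float32" not in str(arr.dtype):
        raise ValueError(f"expected float32 dtype, got {arr.dtype}")
```

Both torch.Tensor and cupy.ndarray expose shape, dtype, device, and __dlpack__, so a duck-typed check like this stays framework-agnostic.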

QualityLevel

class nvvfx.VideoSuperRes.QualityLevel(*values)

Bases: IntEnum

Quality levels for Video Super Resolution.

Selects the AI model and processing strategy. Modes are grouped by operation type; choose the group that matches your use case, then choose an intensity level within it (LOW → ULTRA trades speed for quality).

Values 5-7 are reserved.

Standard upscaling modes (0-4): Used when output resolution is larger than input. These models are trained on typical compressed video and will remove compression artifacts while upscaling.

  • BICUBIC: Non-AI bicubic interpolation (fastest, baseline quality)

  • LOW: AI upscaling, speed-optimized

  • MEDIUM: AI upscaling, balanced speed/quality

  • HIGH: AI upscaling, quality-favoring (default)

  • ULTRA: AI upscaling, maximum detail preservation

Denoise modes (8-11): Same-resolution processing (output == input dimensions). Removes noise and compression artifacts (macro-blocking, mosquito noise) while preserving resolution. Higher intensities remove more noise but may soften fine texture.

  • DENOISE_LOW: Light cleanup, maximum texture preservation

  • DENOISE_MEDIUM: Moderate noise/artifact removal

  • DENOISE_HIGH: Aggressive noise/artifact removal

  • DENOISE_ULTRA: Maximum denoising strength

Deblur modes (12-15): Same-resolution processing. Sharpens soft or blurry footage rather than targeting noise. Best for out-of-focus or motion-blurred sources.

  • DEBLUR_LOW: Light sharpening

  • DEBLUR_MEDIUM: Moderate sharpening

  • DEBLUR_HIGH: Aggressive sharpening

  • DEBLUR_ULTRA: Maximum sharpening strength

High-bitrate modes (16-19): Upscaling optimized for high-bitrate or lossless sources (e.g., ProRes, high-quality H.265) that have few compression artifacts. Unlike standard modes, these skip artifact suppression to avoid degrading already-clean detail.

  • HIGHBITRATE_LOW: AI upscaling, speed-optimized (clean source)

  • HIGHBITRATE_MEDIUM: AI upscaling, balanced (clean source)

  • HIGHBITRATE_HIGH: AI upscaling, quality-favoring (clean source)

  • HIGHBITRATE_ULTRA: AI upscaling, maximum detail (clean source)

BICUBIC = 0
LOW = 1
MEDIUM = 2
HIGH = 3
ULTRA = 4
DENOISE_LOW = 8
DENOISE_MEDIUM = 9
DENOISE_HIGH = 10
DENOISE_ULTRA = 11
DEBLUR_LOW = 12
DEBLUR_MEDIUM = 13
DEBLUR_HIGH = 14
DEBLUR_ULTRA = 15
HIGHBITRATE_LOW = 16
HIGHBITRATE_MEDIUM = 17
HIGHBITRATE_HIGH = 18
HIGHBITRATE_ULTRA = 19
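The grouping described above can be expressed directly from the value ranges. A minimal sketch that mirrors the documented values and maps a level to its operation group — the mode_group helper is illustrative, not part of the nvvfx API:

```python
from enum import IntEnum

class QualityLevel(IntEnum):
    # Mirrors the documented nvvfx.VideoSuperRes.QualityLevel values.
    BICUBIC = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    ULTRA = 4
    DENOISE_LOW = 8
    DENOISE_MEDIUM = 9
    DENOISE_HIGH = 10
    DENOISE_ULTRA = 11
    DEBLUR_LOW = 12
    DEBLUR_MEDIUM = 13
    DEBLUR_HIGH = 14
    DEBLUR_ULTRA = 15
    HIGHBITRATE_LOW = 16
    HIGHBITRATE_MEDIUM = 17
    HIGHBITRATE_HIGH = 18
    HIGHBITRATE_ULTRA = 19

def mode_group(level: int) -> str:
    """Return the operation group for a quality level (values 5-7 are reserved)."""
    if 0 <= level <= 4:
        return "upscale"        # output larger than input
    if 8 <= level <= 11:
        return "denoise"        # same-resolution noise/artifact removal
    if 12 <= level <= 15:
        return "deblur"         # same-resolution sharpening
    if 16 <= level <= 19:
        return "highbitrate"    # upscaling for clean sources
    raise ValueError(f"reserved or unknown quality level: {level}")
```

A helper like this is useful for validating configuration up front, e.g. rejecting denoise/deblur modes when output dimensions differ from the input.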

Effect

class nvvfx.effects.base.Effect(device: int = 0)

Bases: object

Base class for all NVVFX effects.

Provides a thin wrapper around the C++ _Effect bindings with state tracking and context manager support. Effect-specific classes inherit from this and implement load() and run().

is_loaded

Whether the effect has been loaded.

needs_reload

Whether load() must be called again due to config changes.

device

CUDA device index.

selector

Effect selector string.

property is_loaded: bool

Check if the effect is loaded.

property needs_reload: bool

Check if load() must be called again due to a configuration change.

property device: int

CUDA device index.

property selector: str

Effect selector string.

load() None

Load the effect.

Must be called after setting all configuration parameters and before calling run(). Some parameters can be changed after load without requiring a reload.

Raises:

NvVFXError – If loading fails.

run(input_array: Any, **kwargs: Any) Any

Run inference. Subclasses must implement.

Raises:

NotImplementedError – If not implemented by subclass.

close() None

Release resources. Effect should not be used after this.
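The lifecycle above (load() before run(), close() when done, context manager support) follows the standard Python context-manager pattern. The following sketch shows how such a base class might wire __enter__/__exit__ to close(); this is an assumption based on the documented behavior, not the actual nvvfx implementation:

```python
class Effect:
    """Sketch of the documented Effect lifecycle: state tracking
    plus context manager support."""

    def __init__(self, device: int = 0) -> None:
        self._device = device
        self._loaded = False
        self._closed = False

    @property
    def is_loaded(self) -> bool:
        return self._loaded

    def load(self) -> None:
        # Real implementation would initialize the C++ effect here.
        self._loaded = True

    def run(self, input_array, **kwargs):
        raise NotImplementedError  # subclasses implement inference

    def close(self) -> None:
        # Release resources; the effect must not be used afterwards.
        self._loaded = False
        self._closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # close() runs even if the with-block raised.
        self.close()
        return False
```

This pattern is why `with VideoSuperRes() as sr:` in the example above guarantees resource release even when an exception occurs inside the block.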