
PhysicsNeMo Utils

The PhysicsNeMo Utils module provides a comprehensive set of utilities that support various aspects of scientific computing, machine learning, and physics simulations. These utilities range from optimization helpers and distributed computing tools to specialized functions for weather/climate modeling and geometry processing. The module is designed to simplify common tasks while maintaining high performance and scalability.

The optimization utilities provide tools for capturing and managing training states, gradients, and optimization processes. These are particularly useful when implementing custom training loops or specialized optimization strategies.

class physicsnemo.utils.capture.StaticCaptureEvaluateNoGrad(*args, **kwargs)[source]

Bases: _StaticCapture

A performance optimization decorator for PyTorch no-grad evaluation.

This class should be initialized as a decorator on a function that runs the forward pass of the model and does not require gradient calculations. This is the recommended method to use for inference and validation.

Parameters
  • model (physicsnemo.models.Module) – PhysicsNeMo Model

  • logger (Optional[Logger], optional) – PhysicsNeMo Launch Logger, by default None

  • use_graphs (bool, optional) – Toggle CUDA graphs if supported by model, by default True

  • use_amp (bool, optional) – Toggle AMP if supported by model, by default True

  • cuda_graph_warmup (int, optional) – Number of warmup steps for cuda graphs, by default 11

  • amp_type (Union[float16, bfloat16], optional) – Auto casting type for AMP, by default torch.float16

  • label (Optional[str], optional) – Static capture checkpoint label, by default None

Raises

ValueError – If the model provided is not a physicsnemo.models.Module, i.e., has no metadata.

Example


>>> # Create model
>>> model = physicsnemo.models.mlp.FullyConnected(2, 64, 2)
>>> input = torch.rand(8, 2)
>>> # Create evaluate function with optimization wrapper
>>> @StaticCaptureEvaluateNoGrad(model=model)
... def eval_step(model, invar):
...     predvar = model(invar)
...     return predvar
...
>>> output = eval_step(model, input)
>>> output.size()
torch.Size([8, 2])

Note

Capturing multiple cuda graphs in a single program can lead to potential invalid CUDA memory access errors on some systems. Prioritize capturing training graphs when this occurs.

class physicsnemo.utils.capture.StaticCaptureTraining(*args, **kwargs)[source]

Bases: _StaticCapture

A performance optimization decorator for PyTorch training functions.

This class should be initialized as a decorator on a function that computes the forward pass of the neural network and loss function. The user should only call the defined training step function. This will apply optimizations including AMP and CUDA graphs.

Parameters
  • model (physicsnemo.models.Module) – PhysicsNeMo Model

  • optim (torch.optim) – Optimizer

  • logger (Optional[Logger], optional) – PhysicsNeMo Launch Logger, by default None

  • use_graphs (bool, optional) – Toggle CUDA graphs if supported by model, by default True

  • use_amp (bool, optional) – Toggle AMP if supported by model, by default True

  • cuda_graph_warmup (int, optional) – Number of warmup steps for cuda graphs, by default 11

  • amp_type (Union[float16, bfloat16], optional) – Auto casting type for AMP, by default torch.float16

  • gradient_clip_norm (Optional[float], optional) – Threshold for gradient clipping

  • label (Optional[str], optional) – Static capture checkpoint label, by default None

Raises

ValueError – If the model provided is not a physicsnemo.models.Module, i.e., has no metadata.

Example


>>> # Create model
>>> model = physicsnemo.models.mlp.FullyConnected(2, 64, 2)
>>> input = torch.rand(8, 2)
>>> output = torch.rand(8, 2)
>>> # Create optimizer
>>> optim = torch.optim.Adam(model.parameters(), lr=0.001)
>>> # Create training step function with optimization wrapper
>>> @StaticCaptureTraining(model=model, optim=optim)
... def training_step(model, invar, outvar):
...     predvar = model(invar)
...     loss = torch.sum(torch.pow(predvar - outvar, 2))
...     return loss
...
>>> # Sample training loop
>>> for i in range(3):
...     loss = training_step(model, input, output)
...

Note

When AMP is used with a gradient scaler, static captures must be checkpointed during training via state_dict(). By default, this requires static captures to be instantiated in the same order as when they were checkpointed. The label parameter can be used to relax or circumvent this ordering requirement.

Note

Capturing multiple cuda graphs in a single program can lead to potential invalid CUDA memory access errors on some systems. Prioritize capturing training graphs when this occurs.

A collection of utilities specifically designed for working with the GraphCast model, including data processing, graph construction, and specialized loss functions. These utilities are essential for implementing and training GraphCast-based weather prediction models.

class physicsnemo.utils.graphcast.data_utils.StaticData(static_dataset_path: str, latitudes: Tensor, longitudes: Tensor)[source]

Bases: object

Class to load static data from netCDF files. Static data includes land-sea mask, geopotential, and latitude-longitude coordinates.

Parameters
  • static_dataset_path (str) – Path to directory containing static data.

  • latitudes (Tensor) – Tensor with shape (lat,) that includes latitudes.

  • longitudes (Tensor) – Tensor with shape (lon,) that includes longitudes.

get() → Tensor[source]

Get all static data.

Returns

Tensor with shape (1, 5, lat, lon) that includes land-sea mask, geopotential, cosine of latitudes, sine and cosine of longitudes.

Return type

Tensor

get_geop(normalize: bool = True) → Tensor[source]

Get geopotential from netCDF file.

Parameters

normalize (bool, optional) – Whether to normalize the geopotential, by default True

Returns

Normalized geopotential with shape (1, 1, lat, lon).

Return type

Tensor

get_lat_lon() → Tensor[source]

Computes cosine of latitudes and sine and cosine of longitudes.

Returns

Tensor with shape (1, 3, lat, lon) that includes cosine of latitudes, sine and cosine of longitudes.

Return type

Tensor

get_lsm() → Tensor[source]

Get land-sea mask from netCDF file.

Returns

Land-sea mask with shape (1, 1, lat, lon).

Return type

Tensor
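
A minimal sketch of loading the static inputs, assuming a hypothetical static dataset directory and an illustrative 0.25-degree grid:

import torch
from physicsnemo.utils.graphcast.data_utils import StaticData

# Hypothetical static dataset path; grid resolution is illustrative
latitudes = torch.linspace(-90, 90, steps=721)
longitudes = torch.linspace(0, 359.75, steps=1440)
static = StaticData("/path/to/static_data", latitudes, longitudes)

lsm = static.get_lsm()          # (1, 1, lat, lon) land-sea mask
geop = static.get_geop()        # (1, 1, lat, lon) normalized geopotential
trig = static.get_lat_lon()     # (1, 3, lat, lon) cos(lat), sin(lon), cos(lon)
all_static = static.get()       # (1, 5, lat, lon) all of the above concatenated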

class physicsnemo.utils.graphcast.graph.Graph(lat_lon_grid: Tensor, mesh_level: int = 6, multimesh: bool = True, khop_neighbors: int = 0, dtype=torch.float32)[source]

Bases: object

Graph class for creating the graph2mesh, latent mesh, and mesh2graph graphs.

Parameters
  • lat_lon_grid (Tensor) – Tensor with shape (lat, lon, 2) that includes the latitudes and longitudes meshgrid.

  • mesh_level (int, optional) – Level of the latent mesh, by default 6

  • multimesh (bool, optional) – Whether the latent mesh is a multimesh, by default True. If True, the latent mesh includes the nodes corresponding to the specified mesh_level and incorporates the edges from all mesh levels ranging from level 0 up to and including mesh_level.

  • khop_neighbors (int, optional) – This option is used to retrieve a list of indices for the k-hop neighbors of all mesh nodes. It is applicable when a graph transformer is used as the processor. If set to 0, this list is not computed. If a message passing processor is used, it is forced to 0. By default 0.

  • dtype (torch.dtype, optional) – Data type of the graph, by default torch.float

create_g2m_graph(verbose: bool = True) → Tensor[source]

Create the graph2mesh graph.

Parameters

verbose (bool, optional) – verbosity, by default True

Returns

Graph2mesh graph.

Return type

DGLGraph

create_m2g_graph(verbose: bool = True) → Tensor[source]

Create the mesh2grid graph.

Parameters

verbose (bool, optional) – verbosity, by default True

Returns

Mesh2grid graph.

Return type

DGLGraph

create_mesh_graph(verbose: bool = True) → Tensor[source]

Create the multimesh graph.

Parameters

verbose (bool, optional) – verbosity, by default True

Returns

Multimesh graph

Return type

DGLGraph
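
A minimal sketch of building the three GraphCast graphs from a coarse grid (the resolution and mesh level are illustrative; DGL must be installed):

import torch
from physicsnemo.utils.graphcast.graph import Graph

# Illustrative 1-degree lat-lon meshgrid with shape (lat, lon, 2)
lats = torch.linspace(-90, 90, steps=181)
lons = torch.linspace(0, 359, steps=360)
lat_lon_grid = torch.stack(torch.meshgrid(lats, lons, indexing="ij"), dim=-1)

graph = Graph(lat_lon_grid=lat_lon_grid, mesh_level=3, multimesh=True)
g2m = graph.create_g2m_graph(verbose=False)    # grid-to-mesh DGLGraph
mesh = graph.create_mesh_graph(verbose=False)  # latent multimesh DGLGraph
m2g = graph.create_m2g_graph(verbose=False)    # mesh-to-grid DGLGraph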

physicsnemo.utils.graphcast.graph_utils.add_edge_features(graph: DGLGraph, pos: Tensor, normalize: bool = True) → DGLGraph[source]

Adds edge features to the graph.

Parameters
  • graph (DGLGraph) – The graph to add edge features to.

  • pos (Tensor) – The node positions.

  • normalize (bool, optional) – Whether to normalize the edge features, by default True

Returns

The graph with edge features.

Return type

DGLGraph

physicsnemo.utils.graphcast.graph_utils.add_node_features(graph: DGLGraph, pos: Tensor) → DGLGraph[source]

Adds cosine of latitude, sine and cosine of longitude as the node features to the graph.

Parameters
  • graph (DGLGraph) – The graph to add node features to.

  • pos (Tensor) – The node positions.

Returns

graph – The graph with node features.

Return type

DGLGraph

physicsnemo.utils.graphcast.graph_utils.azimuthal_angle(lon: Tensor) → Tensor[source]

Gives the azimuthal angle of a point on the sphere

Parameters

lon (Tensor) – Tensor of shape (N, ) containing the longitude of the point

Returns

Tensor of shape (N, ) containing the azimuthal angle

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.cell_to_adj(cells: List[List[int]])[source]

Creates an adjacency matrix in COO format from mesh cells.

Parameters

cells (List[List[int]]) – List of cells, each cell is a list of 3 vertices

Returns

src, dst – List of source and destination vertices

Return type

List[int], List[int]

physicsnemo.utils.graphcast.graph_utils.create_graph(src: List, dst: List, to_bidirected: bool = True, add_self_loop: bool = False, dtype: dtype = torch.int32) → DGLGraph[source]

Creates a DGL graph from an adj matrix in COO format.

Parameters
  • src (List) – List of source nodes

  • dst (List) – List of destination nodes

  • to_bidirected (bool, optional) – Whether to make the graph bidirectional, by default True

  • add_self_loop (bool, optional) – Whether to add self loop to the graph, by default False

  • dtype (torch.dtype, optional) – Graph index data type, by default torch.int32

Returns

The dgl Graph.

Return type

DGLGraph

physicsnemo.utils.graphcast.graph_utils.create_heterograph(src: List, dst: List, labels: str, dtype: dtype = torch.int32, num_nodes_dict: dict = None) → DGLGraph[source]

Creates a heterogeneous DGL graph from an adj matrix in COO format.

Parameters
  • src (List) – List of source nodes

  • dst (List) – List of destination nodes

  • labels (str) – Label of the edge type

  • dtype (torch.dtype, optional) – Graph index data type, by default torch.int32

  • num_nodes_dict (dict, optional) – number of nodes for some node types, see dgl.heterograph for more information

Returns

The dgl Graph.

Return type

DGLGraph

physicsnemo.utils.graphcast.graph_utils.deg2rad(deg: Tensor) → Tensor[source]

Converts degrees to radians

Parameters

deg – Tensor of shape (N, ) containing the degrees

Returns

Tensor of shape (N, ) containing the radians

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.geospatial_rotation(invar: Tensor, theta: Tensor, axis: str, unit: str = 'rad') → Tensor[source]

Rotation using right hand rule

Parameters
  • invar (Tensor) – Tensor of shape (N, 3) containing x, y, z coordinates

  • theta (Tensor) – Tensor of shape (N, ) containing the rotation angle

  • axis (str) – Axis of rotation

  • unit (str, optional) – Unit of the theta, by default “rad”

Returns

Tensor of shape (N, 3) containing the rotated x, y, z coordinates

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.get_face_centroids(vertices: List[Tuple[float, float, float]], faces: List[List[int]]) → List[Tuple[float, float, float]][source]

Compute the centroids of triangular faces in a graph.

Parameters
  • vertices (List[Tuple[float, float, float]]) – A list of tuples representing the coordinates of the vertices.

  • faces (List[List[int]]) – A list of lists, where each inner list contains three indices representing a triangular face.

Returns

A list of tuples representing the centroids of the faces.

Return type

List[Tuple[float, float, float]]

physicsnemo.utils.graphcast.graph_utils.latlon2xyz(latlon: Tensor, radius: float = 1, unit: str = 'deg') → Tensor[source]

Converts lat-lon (in degrees) to xyz coordinates. Based on: https://stackoverflow.com/questions/1185408. The x-axis goes through (lon, lat) = (0, 0); the y-axis goes through (0, 90); the z-axis goes through the poles.

Parameters
  • latlon (Tensor) – Tensor of shape (N, 2) containing latitudes and longitudes

  • radius (float, optional) – Radius of the sphere, by default 1

  • unit (str, optional) – Unit of the latlon, by default “deg”

Returns

Tensor of shape (N, 3) containing x, y, z coordinates

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.max_edge_length(vertices: List[List[float]], source_nodes: List[int], destination_nodes: List[int]) → float[source]

Compute the maximum edge length in a graph.

Parameters
  • vertices (List[List[float]]) – A list of tuples representing the coordinates of the vertices.

  • source_nodes (List[int]) – A list of indices representing the source nodes of the edges.

  • destination_nodes (List[int]) – A list of indices representing the destination nodes of the edges.

Returns

The maximum edge length in the graph.

Return type

float

physicsnemo.utils.graphcast.graph_utils.polar_angle(lat: Tensor) → Tensor[source]

Gives the polar angle of a point on the sphere

Parameters

lat (Tensor) – Tensor of shape (N, ) containing the latitude of the point

Returns

Tensor of shape (N, ) containing the polar angle

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.rad2deg(rad)[source]

Converts radians to degrees

Parameters

rad – Tensor of shape (N, ) containing the radians

Returns

Tensor of shape (N, ) containing the degrees

Return type

Tensor

physicsnemo.utils.graphcast.graph_utils.xyz2latlon(xyz: Tensor, radius: float = 1, unit: str = 'deg') → Tensor[source]

Converts xyz coordinates to lat-lon in degrees. Based on: https://stackoverflow.com/questions/1185408. The x-axis goes through (lon, lat) = (0, 0); the y-axis goes through (0, 90); the z-axis goes through the poles.

Parameters
  • xyz (Tensor) – Tensor of shape (N, 3) containing x, y, z coordinates

  • radius (float, optional) – Radius of the sphere, by default 1

  • unit (str, optional) – Unit of the latlon, by default “deg”

Returns

Tensor of shape (N, 2) containing latitudes and longitudes

Return type

Tensor
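
A quick round-trip sketch of the coordinate conversions above (the sample points are illustrative):

import torch
from physicsnemo.utils.graphcast.graph_utils import latlon2xyz, xyz2latlon

# A few (lat, lon) pairs in degrees
latlon = torch.tensor([[0.0, 10.0], [45.0, 90.0], [-30.0, 150.0]])
xyz = latlon2xyz(latlon)      # (3, 3) Cartesian points on the unit sphere
recovered = xyz2latlon(xyz)   # (3, 2) back to degrees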

class physicsnemo.utils.graphcast.loss.CellAreaWeightedLossFunction(area)[source]

Bases: Module

Loss function with cell area weighting.

Parameters

area (torch.Tensor) – Cell area with shape [H, W].

forward(invar, outvar)[source]

Implicit forward function which computes the loss given a prediction and the corresponding targets.

Parameters
  • invar (torch.Tensor) – prediction of shape [T, C, H, W].

  • outvar (torch.Tensor) – target values of shape [T, C, H, W].
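
A minimal sketch of using the area-weighted loss, with cosine-of-latitude weights standing in for true cell areas (an assumption; shapes are illustrative):

import torch
from physicsnemo.utils.graphcast.loss import CellAreaWeightedLossFunction

H, W = 181, 360
# Cosine-of-latitude weights as a stand-in for true cell areas
lat = torch.deg2rad(torch.linspace(-90, 90, steps=H))
area = torch.cos(lat).clamp(min=0.0).unsqueeze(1).expand(H, W)

loss_fn = CellAreaWeightedLossFunction(area)
pred = torch.randn(2, 4, H, W)    # prediction, shape [T, C, H, W]
target = torch.randn(2, 4, H, W)  # target, shape [T, C, H, W]
loss = loss_fn(pred, target)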

class physicsnemo.utils.graphcast.loss.CustomCellAreaWeightedLossAutogradFunction(*args, **kwargs)[source]

Bases: Function

Autograd function for custom loss with cell area weighting.

static backward(ctx, grad_loss: Tensor)[source]

Backward method of custom loss function with cell area weighting.

static forward(ctx, invar: Tensor, outvar: Tensor, area: Tensor)[source]

Forward of custom loss function with cell area weighting.

class physicsnemo.utils.graphcast.loss.CustomCellAreaWeightedLossFunction(area: Tensor)[source]

Bases: CellAreaWeightedLossFunction

Custom loss function with cell area weighting.

Parameters

area (torch.Tensor) – Cell area with shape [H, W].

forward(invar: Tensor, outvar: Tensor) → Tensor[source]

Implicit forward function which computes the loss given a prediction and the corresponding targets.

Parameters
  • invar (torch.Tensor) – prediction of shape [T, C, H, W].

  • outvar (torch.Tensor) – target values of shape [T, C, H, W].

class physicsnemo.utils.graphcast.loss.GraphCastLossFunction(area, channels_list, dataset_metadata_path, time_diff_std_path)[source]

Bases: Module

Loss function as specified in GraphCast.

Parameters

area (torch.Tensor) – Cell area with shape [H, W].

assign_atmosphere_weights()[source]

Assigns weights to atmospheric variables

assign_surface_weights()[source]

Assigns weights to surface variables

assign_variable_weights()[source]

Assigns per-variable, per-pressure-level weights

calculate_linear_weights(variables)[source]

Calculate weights for each variable group.

forward(invar, outvar)[source]

Implicit forward function which computes the loss given a prediction and the corresponding targets.

Parameters
  • invar (torch.Tensor) – prediction of shape [T, C, H, W].

  • outvar (torch.Tensor) – target values of shape [T, C, H, W].

get_channel_dict(dataset_metadata_path, channels_list)[source]

Gets lists of surface and atmospheric channels

get_time_diff_std(time_diff_std_path, channels_list)[source]

Gets the time difference standard deviation

parse_variable(variable_list)[source]

Parse variable into its letter and numeric parts.

Utilities for handling file operations, caching, and data management across different storage systems. These utilities abstract away the complexity of dealing with different filesystem types and provide consistent interfaces for data access.

class physicsnemo.utils.filesystem.Package(root: str, seperator: str = '/')[source]

Bases: object

A generic file system abstraction. Can be used to represent local and remote file systems. Remote files are automatically fetched and stored in the $LOCAL_CACHE or $HOME/.cache/physicsnemo folder. The get method can then be used to access files present.

Presently one can use Package with the following roots:
  • Package("/path/to/local/directory") – local file system

  • Package("s3://bucket/path/to/directory") – object store file system

  • Package("http://url/path/to/directory") – http file system

  • Package("ngc://model/<org_id/team_id/model_id>@<version>") – NGC model file system

Parameters
  • root (str) – Root directory for file system

  • seperator (str, optional) – Directory separator. Defaults to "/".

get(path: str, recursive: bool = False) → str[source]

Get a local path to the item at path

path might be a remote file, in which case it is downloaded to a local cache at $LOCAL_CACHE or $HOME/.cache/physicsnemo first.
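
A short sketch of the Package abstraction; the directory and file names below are hypothetical:

from physicsnemo.utils.filesystem import Package

# Local directory; remote roots (s3://, http://, ngc://) behave the same way,
# with downloads cached under $LOCAL_CACHE or $HOME/.cache/physicsnemo.
package = Package("/path/to/local/directory")
local_path = package.get("config.yaml")  # hypothetical file inside the package

# remote = Package("s3://my-bucket/checkpoints")  # hypothetical bucket
# checkpoint = remote.get("model.mdlus")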

Tools for working with generative models, including deterministic and stochastic sampling utilities. These are particularly useful when implementing diffusion models or other generative approaches.

physicsnemo.utils.generative.deterministic_sampler.deterministic_sampler(net: ~torch.nn.modules.module.Module, latents: ~torch.Tensor, img_lr: ~torch.Tensor, class_labels: ~typing.Optional[~torch.Tensor] = None, randn_like: ~typing.Callable = <built-in method randn_like of type object>, num_steps: int = 18, sigma_min: ~typing.Optional[float] = None, sigma_max: ~typing.Optional[float] = None, rho: float = 7.0, solver: ~typing.Literal['heun', 'euler'] = 'heun', discretization: ~typing.Literal['vp', 've', 'iddpm', 'edm'] = 'edm', schedule: ~typing.Literal['vp', 've', 'linear'] = 'linear', scaling: ~typing.Literal['vp', 'none'] = 'none', epsilon_s: float = 0.001, C_1: float = 0.001, C_2: float = 0.008, M: int = 1000, alpha: float = 1.0, S_churn: int = 0, S_min: float = 0.0, S_max: float = inf, S_noise: float = 1.0) → Tensor[source]

Generalized sampler, representing the superset of all sampling methods discussed in the paper "Elucidating the Design Space of Diffusion-Based Generative Models" (EDM): https://arxiv.org/abs/2206.00364

This function integrates an ODE (probability flow) or SDE over multiple time-steps to generate samples from the diffusion model provided by the argument 'net'. It can be used to combine multiple design choices into a custom sampler, including the integration solver, discretization method, noise schedule, and so on.

Parameters
  • net (torch.nn.Module) – The diffusion model to use in the sampling process.

  • latents (torch.Tensor) – The latent random noise used as the initial condition for the stochastic ODE.

  • img_lr (torch.Tensor) – Low-resolution input image for conditioning the diffusion process. Passed as a keyword argument to the model 'net'.

  • class_labels (Optional[torch.Tensor]) – Labels of the classes used as input to a class-conditioned diffusion model. Passed as a keyword argument to the model 'net'. If provided, it must be a tensor containing integer values. Defaults to None, in which case it is ignored.

  • randn_like (Callable) – Random number generator used to generate the random noise added during stochastic sampling. Must have the same signature as torch.randn_like and return a torch.Tensor. Defaults to torch.randn_like.

  • num_steps (Optional[int]) – Number of time-steps for the stochastic ODE integration. Defaults to 18.

  • sigma_min (Optional[float]) – Minimum noise level for the diffusion process. 'sigma_min', 'sigma_max', and 'rho' are used to compute the time-step discretization, based on the choice of discretization. For the default choice (discretization='heun'), the noise level schedule is computed as \(\sigma_i = \left(\sigma_{max}^{1/\rho} + \frac{i}{num\_steps - 1}\left(\sigma_{min}^{1/\rho} - \sigma_{max}^{1/\rho}\right)\right)^{\rho}\). For other choices of 'discretization', see details in the EDM paper. Defaults to None, in which case default values depending on the specified discretization are used.

  • sigma_max (Optional[float]) – Maximum noise level for the diffusion process. See sigma_min for details. Defaults to None, in which case default values depending on the specified discretization are used.

  • rho (float, optional) – Exponent used in the noise schedule. See sigma_min for details. Only used when 'discretization' is 'heun'. Values in the range [5, 10] produce better images; lower values lead to truncation errors equalized over all time steps. Defaults to 7.

  • solver (Literal["heun", "euler"]) – The numerical method used to integrate the stochastic ODE. "euler" is a 1st order solver, which is faster but produces lower-quality images. "heun" is a 2nd order solver, more expensive, but produces higher-quality images. Defaults to "heun".

  • discretization (Literal["vp", "ve", "iddpm", "edm"]) – The method used to discretize time-steps \(t_i\) in the diffusion process. See the EDM paper for details. Defaults to "edm".

  • schedule (Literal["vp", "ve", "linear"]) – The type of noise level schedule. If schedule='ve', then \(\sigma(t) = \sqrt{t}\). If schedule='linear', then \(\sigma(t) = t\). If schedule='vp', see the EDM paper for details. Defaults to "linear".

  • scaling (Literal["vp", "none"]) – The type of time-dependent signal scaling \(s(t)\), such that \(x = s(t) \hat{x}\). See the EDM paper for details on the 'vp' scaling. Defaults to "none", in which case \(s(t)=1\).

  • epsilon_s (float, optional) – Parameter used to compute both the noise level schedule and the time-step discretization. Only used when discretization='vp' or schedule='vp'. Ignored in other cases. Defaults to 1e-3.

  • C_1 (float, optional) – Parameter used to compute the time-step discretization. Only used when discretization='iddpm'. Defaults to 0.001.

  • C_2 (float, optional) – Same as for C_1. Only used when discretization='iddpm'. Defaults to 0.008.

  • M (int, optional) – Same as for C_1 and C_2. Only used when discretization='iddpm'. Defaults to 1000.

  • alpha (float, optional) – Controls (i.e. multiplies) the step size \(t_{i+1} - \hat{t}_i\) in the stochastic sampler, where \(\hat{t}_i\) is the temporarily increased noise level. Defaults to 1.0, which is the recommended value.

  • S_churn (int, optional) – Controls the amount of stochasticity injected into the SDE in the stochastic sampler. Larger values of S_churn lead to larger values of \(\hat{t}_i\), which in turn inject more stochasticity into the SDE. Defaults to 0, which means no stochasticity is injected.

  • S_min (float, optional) – S_min and S_max control the time-step range over which stochasticity is injected into the SDE. Stochasticity is injected through \(\hat{t}_i\) for time-steps \(t_i\) such that \(S_{min} \leq t_i \leq S_{max}\). Defaults to 0.0.

  • S_max (float, optional) – See S_min. Defaults to float("inf").

  • S_noise (float, optional) – Controls the amount of stochasticity injected into the SDE in the stochastic sampler. Added signal noise is proportional to \(\epsilon_i\), where \(\epsilon_i \sim \mathcal{N}(0, S_{noise}^2)\). Defaults to 1.0.

Returns

Generated batch of samples. Same shape as the input 'latents'.

Return type

torch.Tensor
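
A hedged sketch of calling the sampler; `net` is a placeholder for an EDM-style preconditioned diffusion model, and all shapes below are illustrative:

import torch
from physicsnemo.utils.generative.deterministic_sampler import deterministic_sampler

net = ...  # placeholder: an EDM-style preconditioned diffusion model on the target device
latents = torch.randn(1, 3, 64, 64)   # initial noise, same shape as the output
img_lr = torch.randn(1, 3, 64, 64)    # low-resolution conditioning image

samples = deterministic_sampler(
    net=net,
    latents=latents,
    img_lr=img_lr,
    num_steps=18,
    solver="heun",
    discretization="edm",
    schedule="linear",
)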

physicsnemo.utils.generative.stochastic_sampler.stochastic_sampler(net: ~torch.nn.modules.module.Module, latents: ~torch.Tensor, img_lr: ~torch.Tensor, class_labels: ~typing.Optional[~torch.Tensor] = None, randn_like: ~typing.Callable[[~torch.Tensor], ~torch.Tensor] = <built-in method randn_like of type object>, patching: ~typing.Optional[~physicsnemo.utils.patching.GridPatching2D] = None, mean_hr: ~typing.Optional[~torch.Tensor] = None, lead_time_label: ~typing.Optional[~torch.Tensor] = None, num_steps: int = 18, sigma_min: float = 0.002, sigma_max: float = 800, rho: float = 7, S_churn: float = 0, S_min: float = 0, S_max: float = inf, S_noise: float = 1) → Tensor[source]

Proposed EDM sampler (Algorithm 2) with minor changes to enable super-resolution and patch-based diffusion.

Parameters
  • net (torch.nn.Module) –

    The neural network model that generates denoised images from noisy inputs. Expected signature: net(x, x_lr, t_hat, class_labels, lead_time_label=lead_time_label, embedding_selector=embedding_selector), where:

    x (torch.Tensor): Noisy input of shape (batch_size, C_out, H, W)

    x_lr (torch.Tensor): Conditioning input of shape (batch_size, C_cond, H, W)

    t_hat (torch.Tensor): Noise level of shape (batch_size, 1, 1, 1) or scalar

    class_labels (torch.Tensor, optional): Optional class labels

    lead_time_label (torch.Tensor, optional): Optional lead time labels

    embedding_selector (callable, optional): Function to select positional embeddings. Used for patch-based diffusion.

    Returns:

    torch.Tensor: Denoised prediction of shape (batch_size, C_out, H, W)

    Required attributes:

    sigma_min (float): Minimum supported noise level for the model

    sigma_max (float): Maximum supported noise level for the model

    round_sigma (callable): Method to convert sigma values to tensor representation

  • latents (Tensor) – The latent variables (e.g., noise) used as the initial input for the sampler. Has shape (batch_size, C_out, img_shape_y, img_shape_x).

  • img_lr (Tensor) – Low-resolution input image for conditioning the super-resolution process. Must have shape (batch_size, C_lr, img_lr_shape_y, img_lr_shape_x).

  • class_labels (Optional[Tensor], optional) – Class labels for conditional generation, if required by the model. By default None.

  • randn_like (Callable[[Tensor], Tensor]) – Function to generate random noise with the same shape as the input tensor. By default torch.randn_like.

  • patching (Optional[GridPatching2D], optional) –

    A patching utility for patch-based diffusion. Implements methods to extract patches from an image and batch the patches along dim=0. Should also implement a fuse method to reconstruct the original image from a batch of patches. See physicsnemo.utils.patching.GridPatching2D for details. By default None, in which case non-patched diffusion is used.

  • mean_hr (Optional[Tensor], optional) – Optional tensor containing mean high-resolution images for conditioning. Must have the same height and width as img_lr, with shape (B_hr, C_hr, img_lr_shape_y, img_lr_shape_x), where the batch dimension B_hr can be 1, equal to batch_size, or omitted. If B_hr = 1 or is omitted, mean_hr will be expanded to match the shape of img_lr. By default None.

  • lead_time_label (Optional[Tensor], optional) – Optional lead time labels. By default None.

  • num_steps (int) – Number of time steps for the sampler. By default 18.

  • sigma_min (float) – Minimum noise level. By default 0.002.

  • sigma_max (float) – Maximum noise level. By default 800.

  • rho (float) – Exponent used in the time step discretization. By default 7.

  • S_churn (float) – Churn parameter controlling the level of noise added in each step. By default 0.

  • S_min (float) – Minimum time step for applying churn. By default 0.

  • S_max (float) – Maximum time step for applying churn. By default float(“inf”).

  • S_noise (float) – Noise scaling factor applied during the churn step. By default 1.

Returns

The final denoised image produced by the sampler. Same shape as latents: (batch_size, C_out, img_shape_y, img_shape_x).

Return type

Tensor

See also
physicsnemo.models.diffusion.EDMPrecondSuperResolution

A model wrapper that provides preconditioning for super-resolution diffusion models and implements the required interface for this sampler.
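
A hedged sketch of patch-based sampling; `net` is again a placeholder model, and all shapes and patching parameters are illustrative:

import torch
from physicsnemo.utils.generative.stochastic_sampler import stochastic_sampler
from physicsnemo.utils.patching import GridPatching2D

net = ...  # placeholder: an EDM-style preconditioned super-resolution model
latents = torch.randn(2, 3, 448, 448)   # (batch_size, C_out, H, W)
img_lr = torch.randn(2, 12, 448, 448)   # (batch_size, C_lr, H, W)

# Optional patch-based diffusion over 128x128 tiles
patching = GridPatching2D(img_shape=(448, 448), patch_shape=(128, 128), overlap_pix=4)

samples = stochastic_sampler(
    net=net,
    latents=latents,
    img_lr=img_lr,
    patching=patching,
    num_steps=18,
)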

Miscellaneous utility classes and functions.

class physicsnemo.utils.generative.utils.EasyDict[source]

Bases: dict

Convenience class that behaves like a dict but allows access with the attribute syntax.
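
For example, a minimal illustration:

from physicsnemo.utils.generative.utils import EasyDict

cfg = EasyDict(lr=1e-3, batch_size=8)
cfg.epochs = 10                          # attribute-style assignment
print(cfg.lr, cfg["batch_size"], cfg.epochs)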

class physicsnemo.utils.generative.utils.InfiniteSampler(dataset: Dataset, rank: int = 0, num_replicas: int = 1, shuffle: bool = True, seed: int = 0, window_size: float = 0.5)[source]

Bases: Sampler[int]

Sampler for torch.utils.data.DataLoader that loops over the dataset indefinitely.

This sampler yields indices indefinitely, optionally shuffling items as it goes. It can also perform distributed sampling when rank and num_replicas are specified.

Parameters
  • dataset (torch.utils.data.Dataset) – The dataset to sample from

  • rank (int, default=0) – The rank of the current process within num_replicas processes

  • num_replicas (int, default=1) – The number of processes participating in distributed sampling

  • shuffle (bool, default=True) – Whether to shuffle the indices

  • seed (int, default=0) – Random seed for reproducibility when shuffling

  • window_size (float, default=0.5) – Fraction of dataset to use as window for shuffling. Must be between 0 and 1. A larger window means more thorough shuffling but slower iteration.
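
A minimal sketch of wiring the sampler into a DataLoader (the dataset below is a toy example):

import torch
from torch.utils.data import DataLoader, TensorDataset
from physicsnemo.utils.generative.utils import InfiniteSampler

dataset = TensorDataset(torch.randn(100, 2), torch.randn(100, 1))
sampler = InfiniteSampler(dataset, rank=0, num_replicas=1, shuffle=True, seed=0)
loader = iter(DataLoader(dataset, batch_size=8, sampler=sampler))

# The iterator never terminates; draw as many batches as the training loop needs.
for _ in range(5):
    x, y = next(loader)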

class physicsnemo.utils.generative.utils.StackedRandomGenerator(device, seeds)[source]

Bases: object

Wrapper for torch.Generator that allows specifying a different random seed for each sample in a minibatch.

physicsnemo.utils.generative.utils.assert_shape(tensor, ref_shape)[source]

Assert that the shape of a tensor matches the given list of integers. None indicates that the size of a dimension is allowed to vary. Performs symbolic assertion when used in torch.jit.trace().

physicsnemo.utils.generative.utils.call_func_by_name(*args, func_name: str = None, **kwargs) → Any[source]

Finds the python object with the given name and calls it as a function.

physicsnemo.utils.generative.utils.check_ddp_consistency(module, ignore_regex=None)[source]

Check DistributedDataParallel consistency across processes.

physicsnemo.utils.generative.utils.constant(value, shape=None, dtype=None, device=None, memory_format=None)[source]

Cached construction of constant tensors

physicsnemo.utils.generative.utils.construct_class_by_name(*args, class_name: str = None, **kwargs) → Any[source]

Finds the python class with the given name and constructs it with the given arguments.

physicsnemo.utils.generative.utils.convert_datetime_to_cftime(time: ~datetime.datetime, cls=<class 'cftime._cftime.DatetimeGregorian'>) → DatetimeGregorian[source]

Convert a Python datetime object to a cftime DatetimeGregorian object.

physicsnemo.utils.generative.utils.copy_files_and_create_dirs(files: List[Tuple[str, str]]) → None[source]

Takes in a list of tuples of (src, dst) paths and copies files. Will create all necessary directories.

physicsnemo.utils.generative.utils.copy_params_and_buffers(src_module, dst_module, require_all=False)[source]

Copy parameters and buffers from a source module to target module

physicsnemo.utils.generative.utils.ddp_sync(module, sync)[source]

Context manager for easily enabling/disabling DistributedDataParallel synchronization.

physicsnemo.utils.generative.utils.format_time(seconds: Union[int, float]) → str[source]

Convert the seconds to human readable string with days, hours, minutes and seconds.

physicsnemo.utils.generative.utils.format_time_brief(seconds: Union[int, float]) → str[source]

Convert the seconds to human readable string with days, hours, minutes and seconds.

physicsnemo.utils.generative.utils.get_dtype_and_ctype(type_obj: Any) → Tuple[dtype, Any][source]

Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.

physicsnemo.utils.generative.utils.get_module_dir_by_obj_name(obj_name: str) → str[source]

Get the directory path of the module containing the given object name.

physicsnemo.utils.generative.utils.get_module_from_obj_name(obj_name: str) → Tuple[module, str][source]

Searches for the underlying module behind the name to some python object. Returns the module and the object name (original name with module part removed).

physicsnemo.utils.generative.utils.get_obj_by_name(name: str) → Any[source]

Finds the python object with the given name.

physicsnemo.utils.generative.utils.get_obj_from_module(module: module, obj_name: str) → Any[source]

Traverses the object name and returns the last (rightmost) python object.

physicsnemo.utils.generative.utils.get_top_level_function_name(obj: Any) → str[source]

Return the fully-qualified name of a top-level function.

physicsnemo.utils.generative.utils.is_top_level_function(obj: Any) → bool[source]

Determine whether the given object is a top-level function, i.e., defined at module scope using ‘def’.

physicsnemo.utils.generative.utils.list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) → List[Tuple[str, str]][source]

List all files recursively in a given directory while ignoring given file and directory names. Returns list of tuples containing both absolute and relative paths.

physicsnemo.utils.generative.utils.named_params_and_buffers(module)[source]

Get named parameters and buffers of a nn.Module

physicsnemo.utils.generative.utils.params_and_buffers(module)[source]

Get parameters and buffers of a nn.Module

physicsnemo.utils.generative.utils.parse_int_list(s)[source]

Parse a comma separated list of numbers or ranges and return a list of ints. Example: ‘1,2,5-10’ returns [1, 2, 5, 6, 7, 8, 9, 10]

physicsnemo.utils.generative.utils.print_module_summary(module, inputs, max_nesting=3, skip_redundant=True)[source]

Print summary table of module hierarchy.

physicsnemo.utils.generative.utils.profiled_function(fn)[source]

Function decorator that calls torch.autograd.profiler.record_function().

physicsnemo.utils.generative.utils.suppress_tracer_warnings()[source]

Context manager to temporarily suppress known warnings in torch.jit.trace(). Note: Cannot use catch_warnings because of https://bugs.python.org/issue29672

physicsnemo.utils.generative.utils.time_range(start_time: datetime, end_time: datetime, step: timedelta, inclusive: bool = False)[source]

Like the Python range iterator, but with datetimes.

physicsnemo.utils.generative.utils.tuple_product(t: Tuple) → Any[source]

Calculate the product of the tuple elements.

Utilities for geometric operations, including neighbor search and signed distance field calculations. These are essential for physics simulations and geometric deep learning applications.

Performs a radius search for each query point within a specified radius, using a hash grid for efficient spatial querying.

Parameters
  • points – An array of points in space.

  • queries – An array of query points.

  • radius – The search radius around each query point.

  • grid_dim – The dimensions of the hash grid, either as an integer or a tuple of three integers.

  • device – The device (e.g., ‘cuda’ or ‘cpu’) on which computations are performed.

Returns

A tuple containing the indices of neighboring points, their distances to the query points, and an offset array for result indexing.

Specialized utilities for weather and climate modeling, including calculations for solar radiation and atmospheric parameters. These utilities are used extensively in weather prediction models.

physicsnemo.utils.insolation.insolation(dates, lat, lon, scale=1.0, daily=False, enforce_2d=False, clip_zero=True)[source]

Calculate the approximate solar insolation for given dates.

For an example reference, see: https://brian-rose.github.io/ClimateLaboratoryBook/courseware/insolation.html

Parameters
  • dates (np.ndarray) – 1d array of datetime or Timestamp objects.

  • lat (np.ndarray) – 1d or 2d array of latitudes

  • lon (np.ndarray) – 1d or 2d array of longitudes (0-360deg). If 2d, must match the shape of lat.

  • scale (float, optional) – scaling factor (solar constant)

  • daily (bool, optional) – if True, return the daily max solar radiation (lat and day of year dependent only)

  • enforce_2d (bool, optional) – if True and lat/lon are 1-d arrays, turns them into 2d meshes.

  • clip_zero (bool, optional) – if True, set values below 0 to 0

Returns

insolation – Array of insolation values with dimensions (date, lat, lon).

Return type

np.ndarray
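
A small usage sketch (the grid resolution and dates are illustrative):

import numpy as np
from datetime import datetime
from physicsnemo.utils.insolation import insolation

dates = np.array([datetime(2024, 6, 21, h) for h in (0, 6, 12, 18)])
lat = np.linspace(-90, 90, 181)
lon = np.linspace(0, 359, 360)

# With enforce_2d=True the 1-d lat/lon vectors are expanded to a meshgrid,
# so the result is expected to have shape (len(dates), 181, 360).
rad = insolation(dates, lat, lon, scale=1.0, enforce_2d=True, clip_zero=True)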

Utilities for handling data patching operations, particularly useful in image-based deep learning models where processing needs to be done on patches of the input data.

class physicsnemo.utils.patching.BasePatching2D(img_shape: Tuple[int, int], patch_shape: Tuple[int, int])[source]

Bases: ABC

Abstract base class for 2D image patching operations.

This class provides a foundation for implementing various image patching strategies. It handles basic validation and provides abstract methods that must be implemented by subclasses.

Parameters
  • img_shape (Tuple[int, int]) – The height and width of the input images (img_shape_y, img_shape_x).

  • patch_shape (Tuple[int, int]) – The height and width of the patches (patch_shape_y, patch_shape_x) to extract.

abstract apply(input: Tensor, **kwargs) → Tensor[source]

Apply the patching operation to the input tensor.

Parameters
  • input (Tensor) – Input tensor of shape (batch_size, channels, img_shape_y, img_shape_x).

  • **kwargs (dict) – Additional keyword arguments specific to the patching implementation.

Returns

Patched tensor, shape depends on specific implementation.

Return type

Tensor

fuse(input: Tensor, **kwargs) → Tensor[source]

Fuse patches back into a complete image.

Parameters
  • input (Tensor) – Input tensor containing patches.

  • **kwargs (dict) – Additional keyword arguments specific to the fusion implementation.

Returns

Fused tensor, shape depends on specific implementation.

Return type

Tensor

Raises

NotImplementedError – If the subclass does not implement this method.

global_index(batch_size: int, device: Union[device, str] = 'cpu') → Tensor[source]

Returns a tensor containing the global indices for each patch.

Global indices correspond to (y, x) global grid coordinates of each element within the original image (before patching). It is typically used to keep track of the original position of each patch in the original image.

Parameters
  • batch_size (int) – The size of the batch of images to patch.

  • device (Union[torch.device, str]) – Device on which to initialize global_index. Defaults to "cpu".

Returns

A tensor of shape (self.patch_num, 2, patch_shape_y, patch_shape_x). global_index[:, 0, :, :] contains the y-coordinate (height), and global_index[:, 1, :, :] contains the x-coordinate (width).

Return type

Tensor

class physicsnemo.utils.patching.GridPatching2D(img_shape: Tuple[int, int], patch_shape: Tuple[int, int], overlap_pix: int = 0, boundary_pix: int = 0)[source]

Bases: BasePatching2D

Class for deterministically extracting patches from 2D images in a grid pattern.

This class provides utilities to extract patches from images in a deterministic manner, with configurable overlap and boundary pixels. The patches are extracted in a grid-like pattern covering the entire image.

Parameters
  • img_shape (Tuple[int, int]) – The height and width of the input images (img_shape_y, img_shape_x).

  • patch_shape (Tuple[int, int]) – The height and width of the patches (patch_shape_y, patch_shape_x) to extract.

  • overlap_pix (int, optional) – Number of pixels to overlap between adjacent patches, by default 0.

  • boundary_pix (int, optional) – Number of pixels to crop as boundary from each patch, by default 0.

patch_num

Total number of patches that will be extracted from the image, calculated as patch_num_x * patch_num_y.

Type

int

See also
physicsnemo.utils.patching.BasePatching2D

The base class providing the patching interface.

physicsnemo.utils.patching.RandomPatching2D

Alternative patching strategy using random patch locations.

apply(input: Tensor, additional_input: Optional[Tensor] = None) → Tensor[source]

Apply deterministic patching to the input tensor.

Splits the input tensor into patches in a grid-like pattern. Can optionally concatenate additional interpolated data to each patch. Extracted patches are batched along the first dimension of the output. The layout of the output assumes that for any i, out[B * i: B * (i + 1)] corresponds to the same patch extracted from each batch element of input. The patches can be reconstructed back into the original image using the fuse method.

Parameters
  • input (Tensor) – Input tensor of shape (batch_size, channels, img_shape_y, img_shape_x).

  • additional_input (Optional[Tensor], optional) – Additional data to concatenate to each patch. Will be interpolated to match patch dimensions. Shape must be (batch_size, additional_channels, H, W), by default None.

Returns

Tensor containing patches with shape (batch_size * patch_num, channels [+ additional_channels], patch_shape_y, patch_shape_x). If additional_input is provided, its channels are concatenated along the channel dimension.

Return type

Tensor

See also
physicsnemo.utils.patching.image_batching()

The underlying function used to perform the patching operation.

fuse(input: Tensor, batch_size: int) → Tensor[source]

Fuse patches back into a complete image.

Reconstructs the original image by stitching together patches, accounting for overlapping regions and boundary pixels. In overlapping regions, values are averaged.

Parameters
  • input (Tensor) – Input tensor containing patches with shape (batch_size * patch_num, channels, patch_shape_y, patch_shape_x).

  • batch_size (int) – The original batch size before patching.

Returns

Reconstructed image tensor with shape (batch_size, channels, img_shape_y, img_shape_x).

Return type

Tensor

See also
physicsnemo.utils.patching.image_fuse()

The underlying function used to perform the fusion operation.
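
A round-trip sketch of grid patching and fusion (shapes and overlap values are illustrative):

import torch
from physicsnemo.utils.patching import GridPatching2D

patching = GridPatching2D(img_shape=(64, 64), patch_shape=(16, 16),
                          overlap_pix=4, boundary_pix=2)

x = torch.randn(2, 3, 64, 64)
patches = patching.apply(x)                    # (2 * patch_num, 3, 16, 16)
recon = patching.fuse(patches, batch_size=2)   # (2, 3, 64, 64), overlaps averaged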

class physicsnemo.utils.patching.RandomPatching2D(img_shape: Tuple[int, int], patch_shape: Tuple[int, int], patch_num: int)[source]

Bases: BasePatching2D

Class for randomly extracting patches from 2D images.

This class provides utilities to randomly extract patches from images represented as 4D tensors. It maintains a list of random patch indices that can be reset as needed.

Parameters
  • img_shape (Tuple[int, int]) – The height and width of the input images (img_shape_y, img_shape_x).

  • patch_shape (Tuple[int, int]) – The height and width of the patches (patch_shape_y, patch_shape_x) to extract.

  • patch_num (int) – The number of patches to extract.

patch_indices

The indices of the patches to extract from the images. These indices correspond to the (y, x) coordinates of the lower left corner of each patch.

Type

List[Tuple[int, int]]

See also
physicsnemo.utils.patching.BasePatching2D

The base class providing the patching interface.

physicsnemo.utils.patching.GridPatching2D

Alternative patching strategy using deterministic patch locations.

apply(input: Tensor, additional_input: Optional[Tensor] = None) → Tensor[source]

Applies the patching operation by extracting patches specified by self.patch_indices from the input Tensor. Extracted patches are batched along the first dimension of the output. The layout of the output assumes that for any i, out[B * i: B * (i + 1)] corresponds to the same patch extracted from each batch element of input.

Parameters
  • input (Tensor) – The input tensor representing the full image with shape (batch_size, channels_in, img_shape_y, img_shape_x).

  • additional_input (Optional[Tensor], optional) – If provided, it is concatenated to each patch along dim=1. Must have same batch size as input. Bilinear interpolation is used to interpolate additional_input onto a 2D grid of shape (patch_shape_y, patch_shape_x).

Returns

A tensor of shape (batch_size * self.patch_num, channels [+ additional_channels], patch_shape_y, patch_shape_x). If additional_input is provided, its channels are concatenated along the channel dimension.

Return type

Tensor

get_patch_indices() → List[Tuple[int, int]][source]

Get the current list of patch starting indices.

These are the upper-left coordinates of each extracted patch from the full image.

Returns

A list of (row, column) tuples representing patch starting positions.

Return type

List[Tuple[int, int]]

property patch_num: int

Get the number of patches to extract.

Returns

The number of patches to extract.

Return type

int

reset_patch_indices() → None[source]

Generate new random indices for the patches to extract. These are the starting indices of the patches to extract (upper left corner).

Return type

None

set_patch_num(value: int) → None[source]

Set the number of patches to extract and reset patch indices. This is the only way to modify the patch_num value.

Parameters

value (int) – The new number of patches to extract.

physicsnemo.utils.patching.image_batching(input: Tensor, patch_shape_y: int, patch_shape_x: int, overlap_pix: int, boundary_pix: int, input_interp: Optional[Tensor] = None) → Tensor[source]

Splits a full image into a batch of patched images.

This function takes a full image and splits it into patches, adding padding where necessary. It can also concatenate additional interpolated data to each patch if provided.

Parameters
  • input (Tensor) – The input tensor representing the full image with shape (batch_size, channels, img_shape_y, img_shape_x).

  • patch_shape_y (int) – The height (y-dimension) of each image patch.

  • patch_shape_x (int) – The width (x-dimension) of each image patch.

  • overlap_pix (int) – The number of overlapping pixels between adjacent patches.

  • boundary_pix (int) – The number of pixels to crop as a boundary from each patch.

  • input_interp (Optional[Tensor], optional) – Optional additional data to concatenate to each patch with shape (batch_size, interp_channels, patch_shape_y, patch_shape_x). By default None.

Returns

A tensor containing the image patches, with shape (total_patches * batch_size, channels [+ interp_channels], patch_shape_x, patch_shape_y).

Return type

Tensor

physicsnemo.utils.patching.image_fuse(input: Tensor, img_shape_y: int, img_shape_x: int, batch_size: int, overlap_pix: int, boundary_pix: int) → Tensor[source]

Reconstructs a full image from a batch of patched images. Reverts the patching operation performed by image_batching().

This function takes a batch of image patches and reconstructs the full image by stitching the patches together. The function accounts for overlapping and boundary pixels, ensuring that overlapping areas are averaged.

Parameters
  • input (Tensor) – The input tensor containing the image patches with shape (patch_num * batch_size, channels, patch_shape_y, patch_shape_x).

  • img_shape_y (int) – The height (y-dimension) of the original full image.

  • img_shape_x (int) – The width (x-dimension) of the original full image.

  • batch_size (int) – The original batch size before patching.

  • overlap_pix (int) – The number of overlapping pixels between adjacent patches.

  • boundary_pix (int) – The number of pixels to crop as a boundary from each patch.

Returns

The reconstructed full image tensor with shape (batch_size, channels, img_shape_y, img_shape_x).

Return type

Tensor

See also
physicsnemo.utils.patching.image_batching()

The function this reverses, which splits images into patches.
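
The functional equivalents can be called directly; a sketch mirroring the class-based example above:

import torch
from physicsnemo.utils.patching import image_batching, image_fuse

x = torch.randn(2, 3, 64, 64)
patches = image_batching(x, patch_shape_y=16, patch_shape_x=16,
                         overlap_pix=4, boundary_pix=2)
recon = image_fuse(patches, img_shape_y=64, img_shape_x=64,
                   batch_size=2, overlap_pix=4, boundary_pix=2)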

Utilities for working with the Domino model, including data processing and grid construction. These utilities are essential for implementing and training Domino-based models.

Important utilities for data processing, training, and testing of DoMINO.

physicsnemo.utils.domino.utils.area_weighted_shuffle_array(arr: Union[ndarray, ndarray], npoin: int, area: Union[ndarray, ndarray]) → Tuple[Union[ndarray, ndarray], Union[ndarray, ndarray]][source]

Function for area weighted shuffling

physicsnemo.utils.domino.utils.array_type(arr: Union[ndarray, ndarray])[source]

Function to return the array type, leveraging CuPy if available with NumPy as the fallback.

physicsnemo.utils.domino.utils.calculate_center_of_mass(stl_centers: Union[ndarray, ndarray], stl_sizes: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to calculate center of mass

physicsnemo.utils.domino.utils.calculate_normal_positional_encoding(coordinates_a: Union[ndarray, ndarray], coordinates_b: Optional[Union[ndarray, ndarray]] = None, cell_length: Sequence[float] = []) → Union[ndarray, ndarray][source]

Function to get normal positional encoding

physicsnemo.utils.domino.utils.calculate_pos_encoding(nx: Union[ndarray, ndarray], d: int = 8) → Union[ndarray, ndarray][source]

Function for calculating positional encoding

physicsnemo.utils.domino.utils.combine_dict(old_dict, new_dict)[source]

Function to combine dictionaries

physicsnemo.utils.domino.utils.convert_to_tet_mesh(polydata: Any) → Any[source]

Function to convert polydata to a tetrahedral mesh

physicsnemo.utils.domino.utils.create_directory(filepath: str) → None[source]

Function to create directories

physicsnemo.utils.domino.utils.create_grid(mx: Union[ndarray, ndarray], mn: Union[ndarray, ndarray], nres: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to create grid

physicsnemo.utils.domino.utils.dict_to_device(state_dict, device, exclude_keys=['filename'])[source]

Function to load dictionary to device

physicsnemo.utils.domino.utils.extract_surface_triangles(tet_mesh: Any) → List[int][source]

Extracts the surface triangles from a triangular mesh.

physicsnemo.utils.domino.utils.get_fields(data, variables)[source]

Function to get fields from VTP/VTU

physicsnemo.utils.domino.utils.get_fields_from_cell(ptdata, var_list)[source]

Function to get fields from elem

physicsnemo.utils.domino.utils.get_filenames(filepath: str, exclude_dirs: bool = False) → List[str][source]

Function to get filenames from a directory

physicsnemo.utils.domino.utils.get_node_to_elem(polydata: Any) → Any[source]

Function to convert node to elem

physicsnemo.utils.domino.utils.get_surface_data(polydata, variables)[source]

Function to get surface data

physicsnemo.utils.domino.utils.get_vertices(polydata)[source]

Function to get vertices

physicsnemo.utils.domino.utils.get_volume_data(polydata, variables)[source]

Function to get volume data

physicsnemo.utils.domino.utils.mean_std_sampling(field: Union[ndarray, ndarray], mean: Union[ndarray, ndarray], std: Union[ndarray, ndarray], tolerance: float = 3.0) → Union[ndarray, ndarray][source]

Function for mean/std based sampling

physicsnemo.utils.domino.utils.merge(*lists)[source]

Function to merge lists

physicsnemo.utils.domino.utils.nd_interpolator(coodinates, field, grid)[source]

Function for n-d interpolation

physicsnemo.utils.domino.utils.normalize(field: Union[ndarray, ndarray], mx: Union[ndarray, ndarray], mn: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to normalize fields

physicsnemo.utils.domino.utils.pad(arr: Union[ndarray, ndarray], npoin: int, pad_value: float = 0.0) → Union[ndarray, ndarray][source]

Function for padding

physicsnemo.utils.domino.utils.pad_inp(arr: Union[ndarray, ndarray], npoin: int, pad_value: float = 0.0) → Union[ndarray, ndarray][source]

Function for padding arrays

physicsnemo.utils.domino.utils.shuffle_array(arr: Union[ndarray, ndarray], npoin: int) → Tuple[Union[ndarray, ndarray], Union[ndarray, ndarray]][source]

Function for shuffling arrays

physicsnemo.utils.domino.utils.shuffle_array_without_sampling(arr: Union[ndarray, ndarray]) → Tuple[Union[ndarray, ndarray], Union[ndarray, ndarray]][source]

Function for shuffling arrays without sampling.

physicsnemo.utils.domino.utils.standardize(field: Union[ndarray, ndarray], mean: Union[ndarray, ndarray], std: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to standardize fields

physicsnemo.utils.domino.utils.unnormalize(field: Union[ndarray, ndarray], mx: Union[ndarray, ndarray], mn: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to unnormalize fields

physicsnemo.utils.domino.utils.unstandardize(field: Union[ndarray, ndarray], mean: Union[ndarray, ndarray], std: Union[ndarray, ndarray]) → Union[ndarray, ndarray][source]

Function to unstandardize fields

physicsnemo.utils.domino.utils.write_to_vtp(polydata: Any, filename: str)[source]

Function to write polydata to vtp

physicsnemo.utils.domino.utils.write_to_vtu(polydata: Any, filename: str)[source]

Function to write polydata to vtu
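
A small sketch combining a few of the array helpers above (the data and resolution are illustrative; the inverse calls are assumed to undo the forward ones per the docstrings):

import numpy as np
from physicsnemo.utils.domino.utils import create_grid, normalize, unnormalize

field = np.random.rand(1000, 3).astype(np.float32)
mx, mn = field.max(axis=0), field.min(axis=0)

scaled = normalize(field, mx, mn)       # map fields into a normalized range
restored = unnormalize(scaled, mx, mn)  # expected to recover the original values

# Bounding-box grid spanning the field extents at a 32^3 resolution
grid = create_grid(mx, mn, np.array([32, 32, 32]))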

Utilities for working with the CorrDiff model, particularly for the diffusion and regression steps.

class physicsnemo.utils.corrdiff.utils.NetCDFWriter(f, lat, lon, input_channels, output_channels, has_lead_time=False)[source]

Bases: object

NetCDF Writer

write_input(channel_name, time_index, val)[source]

Write input data to NetCDF file.

write_prediction(channel_name, time_index, ensemble_index, val)[source]

Write prediction data to NetCDF file.

write_time(time_index, time)[source]

Write time information to NetCDF file.

write_truth(channel_name, time_index, val)[source]

Write ground truth data to NetCDF file.

physicsnemo.utils.corrdiff.utils.diffusion_step(net: Module, sampler_fn: callable, img_shape: tuple, img_out_channels: int, rank_batches: list, img_lr: Tensor, rank: int, device: device, mean_hr: Tensor = None, lead_time_label: Tensor = None) → Tensor[source]

Generate images using diffusion techniques as described in the relevant paper.

This function applies a diffusion model to generate high-resolution images based on low-resolution inputs. It supports optional conditioning on high-resolution mean predictions and lead time labels.

For each low-resolution sample in img_lr, the function generates multiple high-resolution samples, with different random seeds, specified in rank_batches. The function then concatenates these high-resolution samples across the batch dimension.

Parameters
  • net (torch.nn.Module) – The diffusion model network.

  • sampler_fn (callable) – Function used to sample images from the diffusion model.

  • img_shape (tuple) – Shape of the images, (height, width).

  • img_out_channels (int) – Number of output channels for the image.

  • rank_batches (list) – List of batches of seeds to process.

  • img_lr (torch.Tensor) – Low-resolution input image with shape (seed_batch_size, channels_lr, height, width).

  • rank (int, optional) – Rank of the current process for distributed processing.

  • device (torch.device, optional) – Device to perform computations.

  • mean_hr (torch.Tensor, optional) – High-resolution mean tensor to be used as an additional input, with shape (1, channels_hr, height, width). Default is None.

  • lead_time_label (torch.Tensor, optional) – Lead time label tensor for temporal conditioning, with shape (batch_size, lead_time_dims). Default is None.

Returns

Generated images concatenated across batches with shape (seed_batch_size * len(rank_batches), out_channels, height, width).

Return type

torch.Tensor

physicsnemo.utils.corrdiff.utils.get_time_from_range(times_range, time_format='%Y-%m-%dT%H:%M:%S')[source]

Generates a list of times within a given range.

Parameters
  • times_range – A list containing start time, end time, and optional interval (hours).

  • time_format – The format of the input times (default: “%Y-%m-%dT%H:%M:%S”).

Returns

A list of times within the specified range.
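
For example (the optional third list element specifies the interval in hours, per the description above):

from physicsnemo.utils.corrdiff.utils import get_time_from_range

# Times between the two endpoints at the default interval
times = get_time_from_range(["2023-01-01T00:00:00", "2023-01-02T00:00:00"])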

physicsnemo.utils.corrdiff.utils.regression_step(net: Module, img_lr: Tensor, latents_shape: Size, lead_time_label: Optional[Tensor] = None) → Tensor[source]

Perform a regression step to produce ensemble mean prediction.

This function takes a low-resolution input and performs a regression step to produce an ensemble mean prediction. It processes a single instance and then replicates the results across the batch dimension if needed.

Parameters
  • net (torch.nn.Module) – U-Net model for regression.

  • img_lr (torch.Tensor) – Low-resolution input to the network with shape (1, channels, height, width). Must have a batch dimension of 1.

  • latents_shape (torch.Size) – Shape of the latent representation with format (batch_size, out_channels, image_shape_y, image_shape_x).

  • lead_time_label (Optional[torch.Tensor], optional) – Lead time label tensor for lead time conditioning, with shape (1, lead_time_dims). Default is None.

Returns

Predicted ensemble mean at the next time step with shape matching latents_shape.

Return type

torch.Tensor

Raises

ValueError – If img_lr has a batch size greater than 1.

Utilities for profiling the performance of a model.
