Modulus Models
- class modulus.models.mlp.fully_connected.FullyConnected(in_features: int = 512, layer_size: int = 512, out_features: int = 512, num_layers: int = 6, activation_fn: Union[Module, List[Module]] = SiLU(), skip_connections: bool = False, adaptive_activations: bool = False, weight_norm: bool = False)[source]
Bases: Module
A densely-connected MLP architecture
- Parameters
in_features (int, optional) – Size of input features, by default 512
layer_size (int, optional) – Size of every hidden layer, by default 512
out_features (int, optional) – Size of output features, by default 512
num_layers (int, optional) – Number of hidden layers, by default 6
activation_fn (Union[nn.Module, List[nn.Module]], optional) – Activation function to use, by default nn.SiLU
skip_connections (bool, optional) – Add skip connections every 2 hidden layers, by default False
adaptive_activations (bool, optional) – Use an adaptive activation function, by default False
weight_norm (bool, optional) – Use weight norm on fully connected layers, by default False
Example
>>> model = modulus.models.mlp.FullyConnected(in_features=32, out_features=64)
>>> input = torch.randn(128, 32)
>>> output = model(input)
>>> output.size()
torch.Size([128, 64])
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- class modulus.models.mlp.fully_connected.MetaData(name: str = 'FullyConnected', jit: bool = True, cuda_graphs: bool = True, amp: bool = True, amp_cpu: bool = None, amp_gpu: bool = None, torch_fx: bool = True, onnx: bool = True, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = True, trt: bool = False, var_dim: int = -1, func_torch: bool = True, auto_grad: bool = True)[source]
Bases: ModelMetaData
- class modulus.models.fno.fno.FNO(decoder_net: Module, in_channels: int, dimension: int, latent_channels: int = 32, num_fno_layers: int = 4, num_fno_modes: Union[int, List[int]] = 16, padding: int = 8, padding_type: str = 'constant', activation_fn: Module = GELU(approximate='none'), coord_features: bool = True)[source]
Bases: Module
Fourier neural operator (FNO) model.
Note: The FNO architecture supports options for 1D, 2D and 3D fields, which can be controlled using the dimension parameter.
- Parameters
decoder_net (modulus.Module) – Pointwise decoder network, input feature size should match latent_channels
in_channels (int) – Number of input channels
dimension (int) – Model dimensionality (supports 1, 2, 3).
latent_channels (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (int, optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
Example
>>> # define the decoder net
>>> decoder = modulus.models.mlp.FullyConnected(
...     in_features=32,
...     out_features=3,
...     num_layers=2,
...     layer_size=16,
... )
>>> # define the 2d FNO model
>>> model = modulus.models.fno.FNO(
...     decoder_net=decoder,
...     in_channels=4,
...     dimension=2,
...     latent_channels=32,
...     num_fno_layers=2,
...     padding=0,
... )
>>> input = torch.randn(32, 4, 32, 32)  # (N, C, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([32, 3, 32, 32])
Note: Reference: Li, Zongyi, et al. “Fourier neural operator for parametric partial differential equations.” arXiv preprint arXiv:2010.08895 (2020).
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- class modulus.models.fno.fno.FNO1DEncoder(in_channels: int = 1, num_fno_layers: int = 4, fno_layer_size: int = 32, num_fno_modes: Union[int, List[int]] = 16, padding: Union[int, List[int]] = 8, padding_type: str = 'constant', activation_fn: Module = GELU(approximate='none'), coord_features: bool = True)[source]
Bases: Module
1D Spectral encoder for FNO
- Parameters
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- meshgrid(shape: List[int], device: device) → Tensor[source]
Creates 1D meshgrid feature
- Parameters
shape (List[int]) – Tensor shape
device (torch.device) – Device model is on
- Returns
Meshgrid tensor
- Return type
Tensor
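Example (an illustrative sketch added here, not from the upstream docs; parameter names follow the signature above, and the channel-first (N, C, L) input layout is assumed by analogy with the parent FNO example):
>>> encoder = modulus.models.fno.fno.FNO1DEncoder(
...     in_channels=2,
...     num_fno_layers=2,
...     fno_layer_size=8,
...     num_fno_modes=4,
... )
>>> input = torch.randn(4, 2, 32)  # (N, C, L)
>>> output = encoder(input)  # latent encoding with fno_layer_size channels (assumed)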
- class modulus.models.fno.fno.FNO2DEncoder(in_channels: int = 1, num_fno_layers: int = 4, fno_layer_size: int = 32, num_fno_modes: Union[int, List[int]] = 16, padding: Union[int, List[int]] = 8, padding_type: str = 'constant', activation_fn: Module = GELU(approximate='none'), coord_features: bool = True)[source]
Bases: Module
2D Spectral encoder for FNO
- Parameters
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- meshgrid(shape: List[int], device: device) → Tensor[source]
Creates 2D meshgrid feature
- Parameters
shape (List[int]) – Tensor shape
device (torch.device) – Device model is on
- Returns
Meshgrid tensor
- Return type
Tensor
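Example (an illustrative sketch, not from the upstream docs; the returned layout of one normalized coordinate channel per spatial dimension is an assumption based on the coord_features description):
>>> encoder = modulus.models.fno.fno.FNO2DEncoder(in_channels=1)
>>> grid = encoder.meshgrid([4, 1, 16, 16], torch.device("cpu"))  # pass the input tensor shape
>>> # expected shape (4, 2, 16, 16): one coordinate channel per spatial dimension (assumed)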
- class modulus.models.fno.fno.FNO3DEncoder(in_channels: int = 1, num_fno_layers: int = 4, fno_layer_size: int = 32, num_fno_modes: Union[int, List[int]] = 16, padding: Union[int, List[int]] = 8, padding_type: str = 'constant', activation_fn: Module = GELU(approximate='none'), coord_features: bool = True)[source]
Bases: Module
3D Spectral encoder for FNO
- Parameters
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- meshgrid(shape: List[int], device: device) → Tensor[source]
Creates 3D meshgrid feature
- Parameters
shape (List[int]) – Tensor shape
device (torch.device) – Device model is on
- Returns
Meshgrid tensor
- Return type
Tensor
- class modulus.models.fno.fno.MetaData(name: str = 'FourierNeuralOperator', jit: bool = True, cuda_graphs: bool = True, amp: bool = False, amp_cpu: bool = None, amp_gpu: bool = None, torch_fx: bool = False, onnx: bool = False, onnx_gpu: bool = False, onnx_cpu: bool = False, onnx_runtime: bool = False, trt: bool = False, var_dim: int = 1, func_torch: bool = False, auto_grad: bool = False)[source]
Bases: ModelMetaData
- class modulus.models.afno.afno.AFNO(img_size: Tuple[int, int], in_channels: int, out_channels: int, patch_size: Tuple[int, int] = (16, 16), embed_dim: int = 256, depth: int = 4, mlp_ratio: float = 4.0, drop_rate: float = 0.0, num_blocks: int = 16, sparsity_threshold: float = 0.01, hard_thresholding_fraction: float = 1.0)[source]
Bases: Module
Adaptive Fourier neural operator (AFNO) model.
Note: AFNO is designed for 2D images only.
- Parameters
img_size (Tuple[int, int]) – Input image dimensions (height, width)
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
patch_size (Tuple[int, int], optional) – Size of image patches, by default (16, 16)
embed_dim (int, optional) – Embedded channel size, by default 256
depth (int, optional) – Number of AFNO layers, by default 4
mlp_ratio (float, optional) – Ratio of layer MLP latent variable size to input feature size, by default 4.0
drop_rate (float, optional) – Drop out rate in layer MLPs, by default 0.0
num_blocks (int, optional) – Number of blocks in the block-diag frequency weight matrices, by default 16
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Threshold for limiting number of modes used [0,1], by default 1
Example
>>> model = modulus.models.afno.AFNO(
...     img_size=(32, 32),
...     in_channels=2,
...     out_channels=1,
...     patch_size=(8, 8),
...     embed_dim=16,
...     depth=2,
...     num_blocks=2,
... )
>>> input = torch.randn(32, 2, 32, 32)  # (N, C, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([32, 1, 32, 32])
Note: Reference: Guibas, John, et al. “Adaptive fourier neural operators: Efficient token mixers for transformers.” arXiv preprint arXiv:2111.13587 (2021).
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- forward_features(x: Tensor) → Tensor[source]
Forward pass of core AFNO
- class modulus.models.afno.afno.AFNO2DLayer(hidden_size: int, num_blocks: int = 8, sparsity_threshold: float = 0.01, hard_thresholding_fraction: float = 1, hidden_size_factor: int = 1)[source]
Bases: Module
AFNO spectral convolution layer
- Parameters
hidden_size (int) – Feature dimensionality
num_blocks (int, optional) – Number of blocks used in the block diagonal weight matrix, by default 8
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Threshold for limiting number of modes used [0,1], by default 1
hidden_size_factor (int, optional) – Factor to increase spectral features by after weight multiplication, by default 1
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
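Example (an illustrative sketch, not from the upstream docs; the channel-last (B, H, W, C) input layout is an assumption based on how this layer is used inside AFNO, and hidden_size must be divisible by num_blocks):
>>> layer = modulus.models.afno.afno.AFNO2DLayer(hidden_size=16, num_blocks=2)
>>> input = torch.randn(2, 8, 8, 16)  # (B, H, W, C), assumed channel-last
>>> output = layer(input)  # output shape assumed to match the input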
- class modulus.models.afno.afno.AFNOMlp(in_features: int, latent_features: int, out_features: int, activation_fn: Module = GELU(approximate='none'), drop: float = 0.0)[source]
Bases: Module
Fully-connected multi-layer perceptron used inside AFNO
- Parameters
in_features (int) – Input feature size
latent_features (int) – Latent feature size
out_features (int) – Output feature size
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
drop (float, optional) – Drop out rate, by default 0.0
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
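Example (an illustrative sketch, not from the upstream docs; the MLP is assumed to act over the last, feature, dimension):
>>> mlp = modulus.models.afno.afno.AFNOMlp(in_features=8, latent_features=16, out_features=8)
>>> tokens = torch.randn(4, 64, 8)  # (batch, tokens, features)
>>> output = mlp(tokens)  # expected shape (4, 64, 8)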
- class modulus.models.afno.afno.Block(embed_dim: int, num_blocks: int = 8, mlp_ratio: float = 4.0, drop: float = 0.0, activation_fn: ~torch.nn.modules.module.Module = GELU(approximate='none'), norm_layer: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.normalization.LayerNorm'>, double_skip: bool = True, sparsity_threshold: float = 0.01, hard_thresholding_fraction: float = 1.0)[source]
Bases: Module
AFNO block, spectral convolution and MLP
- Parameters
embed_dim (int) – Embedded feature dimensionality
num_blocks (int, optional) – Number of blocks used in the block diagonal weight matrix, by default 8
mlp_ratio (float, optional) – Ratio of MLP latent variable size to input feature size, by default 4.0
drop (float, optional) – Drop out rate in MLP, by default 0.0
activation_fn (nn.Module, optional) – Activation function used in MLP, by default nn.GELU
norm_layer (nn.Module, optional) – Normalization function, by default nn.LayerNorm
double_skip (bool, optional) – Whether to use a second residual (skip) connection after the spectral filter, in addition to the block’s final skip connection, by default True
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Threshold for limiting number of modes used [0,1], by default 1
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
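Example (an illustrative sketch, not from the upstream docs; the channel-last (B, H, W, embed_dim) layout is an assumption based on how blocks are called inside AFNO’s forward_features):
>>> block = modulus.models.afno.afno.Block(embed_dim=16, num_blocks=2)
>>> input = torch.randn(2, 8, 8, 16)  # (B, H, W, embed_dim), assumed channel-last
>>> output = block(input)  # output shape assumed to match the input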
- class modulus.models.afno.afno.MetaData(name: str = 'AFNO', jit: bool = False, cuda_graphs: bool = True, amp: bool = True, amp_cpu: bool = None, amp_gpu: bool = None, torch_fx: bool = False, onnx: bool = False, onnx_gpu: bool = True, onnx_cpu: bool = False, onnx_runtime: bool = True, trt: bool = False, var_dim: int = 1, func_torch: bool = False, auto_grad: bool = False)[source]
Bases: ModelMetaData
- class modulus.models.afno.afno.PatchEmbed(img_size: Tuple[int, int], in_channels: int, patch_size: Tuple[int, int] = (16, 16), embed_dim: int = 256)[source]
Bases: Module
Patch embedding layer
Converts 2D patch into a 1D vector for input to AFNO
- Parameters
img_size (Tuple[int, int]) – Input image dimensions (height, width)
in_channels (int) – Number of input channels
patch_size (Tuple[int, int], optional) – Size of image patches, by default (16, 16)
embed_dim (int, optional) – Embedded channel size, by default 256
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
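Example (an illustrative sketch, not from the upstream docs; the output layout (N, num_patches, embed_dim) is an assumption based on the class description):
>>> embed = modulus.models.afno.afno.PatchEmbed(
...     img_size=(32, 32),
...     in_channels=2,
...     patch_size=(8, 8),
...     embed_dim=16,
... )
>>> input = torch.randn(4, 2, 32, 32)  # (N, C, H, W); H, W must match img_size
>>> patches = embed(input)  # expected (4, 16, 16): (32/8)*(32/8) patches of size embed_dim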
- class modulus.models.meshgraphnet.meshgraphnet.MeshGraphNet(input_dim_nodes: int, input_dim_edges: int, output_dim: int, processor_size: int = 15, num_layers_node_processor: int = 2, num_layers_edge_processor: int = 2, hidden_dim_node_encoder: int = 128, num_layers_node_encoder: int = 2, hidden_dim_edge_encoder: int = 128, num_layers_edge_encoder: int = 2, hidden_dim_node_decoder: int = 128, num_layers_node_decoder: int = 2)[source]
Bases: Module
MeshGraphNet network architecture
- Parameters
input_dim_nodes (int) – Number of node features
input_dim_edges (int) – Number of edge features
output_dim (int) – Number of outputs
processor_size (int, optional) – Number of message passing blocks, by default 15
num_layers_node_processor (int, optional) – Number of MLP layers for processing nodes in each message passing block, by default 2
num_layers_edge_processor (int, optional) – Number of MLP layers for processing edge features in each message passing block, by default 2
hidden_dim_node_encoder (int, optional) – Hidden layer size for the node feature encoder, by default 128
num_layers_node_encoder (int, optional) – Number of MLP layers for the node feature encoder, by default 2
hidden_dim_edge_encoder (int, optional) – Hidden layer size for the edge feature encoder, by default 128
num_layers_edge_encoder (int, optional) – Number of MLP layers for the edge feature encoder, by default 2
hidden_dim_node_decoder (int, optional) – Hidden layer size for the node feature decoder, by default 128
num_layers_node_decoder (int, optional) – Number of MLP layers for the node feature decoder, by default 2
Example
>>> model = modulus.models.meshgraphnet.MeshGraphNet(
...     input_dim_nodes=4,
...     input_dim_edges=3,
...     output_dim=2,
... )
>>> graph = dgl.rand_graph(10, 5)
>>> node_features = torch.randn(10, 4)
>>> edge_features = torch.randn(5, 3)
>>> output = model(graph, node_features, edge_features)
>>> output.size()
torch.Size([10, 2])
Note: Reference: Pfaff, Tobias, et al. “Learning mesh-based simulation with graph networks.” arXiv preprint arXiv:2010.03409 (2020).
- forward(graph: Union[DGLGraph, List[DGLGraph]], node_features: Tensor, edge_features: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- class modulus.models.meshgraphnet.meshgraphnet.MetaData(name: str = 'MeshGraphNet', jit: bool = True, cuda_graphs: bool = False, amp: bool = False, amp_cpu: bool = False, amp_gpu: bool = True, torch_fx: bool = False, onnx: bool = False, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = -1, func_torch: bool = True, auto_grad: bool = True)[source]
Bases: ModelMetaData
- class modulus.models.graphcast.graph_cast_net.GraphCastNet(meshgraph_path: str, static_dataset_path: str, input_res: tuple = (721, 1440), input_dim_grid_nodes: int = 474, input_dim_mesh_nodes: int = 3, input_dim_edges: int = 4, output_dim_grid_nodes: int = 227, processor_layers: int = 16, hidden_layers: int = 1, hidden_dim: int = 512, aggregation: str = 'sum', activation_fn: Module = SiLU(), norm_type: str = 'LayerNorm', use_cugraphops_encoder: bool = False, use_cugraphops_processor: bool = False, use_cugraphops_decoder: bool = False, do_concat_trick: bool = False, recompute_activation: bool = False)[source]
Bases: Module
GraphCast network architecture
- Parameters
meshgraph_path (str) – Path to the meshgraph file. If not provided, the meshgraph will be created using PyMesh.
static_dataset_path (str) – Path to the static dataset file.
input_res (Tuple[int, int]) – Input resolution of the latitude-longitude grid
input_dim_grid_nodes (int, optional) – Input dimensionality of the grid node features, by default 474
input_dim_mesh_nodes (int, optional) – Input dimensionality of the mesh node features, by default 3
input_dim_edges (int, optional) – Input dimensionality of the edge features, by default 4
output_dim_grid_nodes (int, optional) – Final output dimensionality of the grid node features, by default 227
processor_layers (int, optional) – Number of processor layers, by default 16
hidden_layers (int, optional) – Number of hidden layers, by default 1
hidden_dim (int, optional) – Number of neurons in each hidden layer, by default 512
aggregation (str, optional) – Message passing aggregation method (“sum”, “mean”), by default “sum”
activation_fn (nn.Module, optional) – Type of activation function, by default nn.SiLU()
norm_type (str, optional) – Normalization type, by default “LayerNorm”
use_cugraphops_encoder (bool, default=False) – Flag to select cugraphops kernels in encoder
use_cugraphops_processor (bool, default=False) – Flag to select cugraphops kernels in the processor
use_cugraphops_decoder (bool, default=False) – Flag to select cugraphops kernels in the decoder
do_concat_trick (bool, default=False) – Whether to replace concat+MLP with MLP+idx+sum
recompute_activation (bool, optional) – Flag for recomputing activation in backward to save memory, by default False. Currently, only SiLU is supported.
Note: Based on these papers:
- “GraphCast: Learning skillful medium-range global weather forecasting”
- “Forecasting Global Weather with Graph Neural Networks”
- “Learning Mesh-Based Simulation with Graph Networks”
- “MultiScale MeshGraphNets”
- custom_forward(grid_nfeat: Tensor) → Tensor[source]
GraphCast forward method with support for gradient checkpointing.
- Parameters
grid_nfeat (Tensor) – Node features of the latitude-longitude graph.
- Returns
grid_nfeat_finale – Predicted node features of the latitude-longitude graph.
- Return type
Tensor
- decoder_forward(mesh_efeat_processed: Tensor, mesh_nfeat_processed: Tensor, grid_nfeat_encoded: Tensor) → Tensor[source]
Forward method for the last layer of the processor, the decoder, and the final MLP.
- Parameters
mesh_efeat_processed (Tensor) – Multimesh edge features processed by the processor.
mesh_nfeat_processed (Tensor) – Multimesh node features processed by the processor.
grid_nfeat_encoded (Tensor) – The encoded node features for the latitude-longitude grid.
- Returns
grid_nfeat_finale – The final node features for the latitude-longitude grid.
- Return type
Tensor
- encoder_forward(grid_nfeat: Tensor) → Tensor[source]
Forward method for the embedder, encoder, and the first of the processor.
- Parameters
grid_nfeat (Tensor) – Node features for the latitude-longitude grid.
- Returns
mesh_efeat_processed (Tensor) – Processed edge features for the multimesh.
mesh_nfeat_processed (Tensor) – Processed node features for the multimesh.
grid_nfeat_encoded (Tensor) – Encoded node features for the latitude-longitude grid.
- forward(grid_nfeat: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- prepare_input(invar: Tensor) → Tensor[source]
Prepares the input to the model in the required shape.
- Parameters
invar (Tensor) – Input in the shape [N, C, H, W].
- Returns
Reshaped input.
- Return type
Tensor
- prepare_output(outvar: Tensor) → Tensor[source]
Prepares the output of the model in the shape [N, C, H, W].
- Parameters
outvar (Tensor) – Output of the final MLP of the model.
- Returns
The reshaped output of the model.
- Return type
Tensor
- set_checkpoint_decoder(checkpoint_flag: bool)[source]
Sets checkpoint function for the last layer of the processor, the decoder, and the final MLP.
This function returns the appropriate checkpoint function based on the provided checkpoint_flag flag. If checkpoint_flag is True, the function returns the checkpoint function from PyTorch’s torch.utils.checkpoint. Otherwise, it returns an identity function that simply passes the inputs through the given layer.
- Parameters
checkpoint_flag (bool) – Whether to use checkpointing for gradient computation. Checkpointing can reduce memory usage during backpropagation at the cost of increased computation time.
- Returns
The selected checkpoint function to use for gradient computation.
- Return type
Callable
- set_checkpoint_encoder(checkpoint_flag: bool)[source]
Sets checkpoint function for the embedder, encoder, and the first of the processor.
This function returns the appropriate checkpoint function based on the provided checkpoint_flag flag. If checkpoint_flag is True, the function returns the checkpoint function from PyTorch’s torch.utils.checkpoint. Otherwise, it returns an identity function that simply passes the inputs through the given layer.
- Parameters
checkpoint_flag (bool) – Whether to use checkpointing for gradient computation. Checkpointing can reduce memory usage during backpropagation at the cost of increased computation time.
- Returns
The selected checkpoint function to use for gradient computation.
- Return type
Callable
- set_checkpoint_model(checkpoint_flag: bool)[source]
Sets checkpoint function for the entire model.
This function returns the appropriate checkpoint function based on the provided checkpoint_flag flag. If checkpoint_flag is True, the function returns the checkpoint function from PyTorch’s torch.utils.checkpoint; in this case, all other gradient checkpointing settings are disabled. Otherwise, it returns an identity function that simply passes the inputs through the given layer.
- Parameters
checkpoint_flag (bool) – Whether to use checkpointing for gradient computation. Checkpointing can reduce memory usage during backpropagation at the cost of increased computation time.
- Returns
The selected checkpoint function to use for gradient computation.
- Return type
Callable
- set_checkpoint_processor(checkpoint_segments: int)[source]
Sets checkpoint function for the processor excluding the first and last layers.
This function returns the appropriate checkpoint function based on the provided checkpoint_segments flag. If checkpoint_segments is positive, the function returns the checkpoint function from PyTorch’s torch.utils.checkpoint, with the number of checkpointing segments equal to checkpoint_segments. Otherwise, it returns an identity function that simply passes the inputs through the given layer.
- Parameters
checkpoint_segments (int) – Number of checkpointing segments for gradient computation. Checkpointing can reduce memory usage during backpropagation at the cost of increased computation time.
- Returns
The selected checkpoint function to use for gradient computation.
- Return type
Callable
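Example (an illustrative configuration sketch, not from the upstream docs; it assumes a GraphCastNet instance named model has already been constructed with valid meshgraph and static dataset files):
>>> model.set_checkpoint_model(False)    # disable whole-model checkpointing so finer-grained settings apply
>>> model.set_checkpoint_encoder(True)   # checkpoint the embedder, encoder, and first processor layer
>>> model.set_checkpoint_processor(3)    # checkpoint the middle processor layers in 3 segments
>>> model.set_checkpoint_decoder(True)   # checkpoint the last processor layer, decoder, and final MLP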
- to(*args: Any, **kwargs: Any) → GraphCastNet[source]
Moves the object to the specified device, dtype, or format. This method moves the object and its underlying graph and graph features to the specified device, dtype, or format, and returns the updated object.
- Parameters
*args (Any) – Positional arguments to be passed to the torch._C._nn._parse_to function.
**kwargs (Any) – Keyword arguments to be passed to the torch._C._nn._parse_to function.
- Returns
The updated object after moving to the specified device, dtype, or format.
- Return type
GraphCastNet
- class modulus.models.graphcast.graph_cast_net.MetaData(name: str = 'GraphCastNet', jit: bool = False, cuda_graphs: bool = False, amp: bool = False, amp_cpu: bool = False, amp_gpu: bool = True, torch_fx: bool = False, onnx: bool = False, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = -1, func_torch: bool = False, auto_grad: bool = False)[source]
Bases: ModelMetaData
- class modulus.models.pix2pix.pix2pix.MetaData(name: str = 'Pix2Pix', jit: bool = True, cuda_graphs: bool = True, amp: bool = False, amp_cpu: bool = False, amp_gpu: bool = True, torch_fx: bool = False, onnx: bool = True, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = 1, func_torch: bool = True, auto_grad: bool = True)[source]
Bases: ModelMetaData
- class modulus.models.pix2pix.pix2pix.Pix2Pix(in_channels: int, out_channels: int, dimension: int, conv_layer_size: int = 64, n_downsampling: int = 3, n_upsampling: int = 3, n_blocks: int = 3, activation_fn: Union[Module, List[Module]] = ReLU(), batch_norm: bool = False, padding_type: str = 'reflect')[source]
Bases: Module
Convolutional encoder-decoder based on pix2pix generator models.
Note: The pix2pix architecture supports options for 1D, 2D and 3D fields, which can be controlled using the dimension parameter.
- Parameters
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
dimension (int) – Model dimensionality (supports 1, 2, 3).
conv_layer_size (int, optional) – Latent channel size after first convolution, by default 64
n_downsampling (int, optional) – Number of downsampling blocks, by default 3
n_upsampling (int, optional) – Number of upsampling blocks, by default 3
n_blocks (int, optional) – Number of residual blocks in middle of model, by default 3
activation_fn (Activation, optional) – Activation function, by default ReLU
batch_norm (bool, optional) – Batch normalization, by default False
padding_type (str, optional) – Padding type (‘reflect’, ‘replicate’ or ‘zero’), by default “reflect”
Example
>>> # 2D convolutional encoder decoder
>>> model = modulus.models.pix2pix.Pix2Pix(
...     in_channels=1,
...     out_channels=2,
...     dimension=2,
...     conv_layer_size=4)
>>> input = torch.randn(4, 1, 32, 32)  # (N, C, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([4, 2, 32, 32])
Note: Reference: Isola, Phillip, et al. “Image-To-Image translation with conditional adversarial networks” Conference on Computer Vision and Pattern Recognition, 2017. https://arxiv.org/abs/1611.07004
Reference: Wang, Ting-Chun, et al. “High-Resolution image synthesis and semantic manipulation with conditional GANs” Conference on Computer Vision and Pattern Recognition, 2018. https://arxiv.org/abs/1711.11585
Note: Based on the implementation: https://github.com/NVIDIA/pix2pixHD
- forward(input: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- class modulus.models.pix2pix.pix2pix.ResnetBlock(dimension: int, channels: int, padding_type: str = 'reflect', activation: Module = ReLU(), use_batch_norm: bool = False, use_dropout: bool = False)[source]
Bases: Module
A simple ResNet block
- Parameters
dimension (int) – Model dimensionality (supports 1, 2, 3).
channels (int) – Number of feature channels
padding_type (str, optional) – Padding type (‘reflect’, ‘replicate’ or ‘zero’), by default “reflect”
activation (nn.Module, optional) – Activation function, by default nn.ReLU()
use_batch_norm (bool, optional) – Batch normalization, by default False
- forward(x: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
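Example (an illustrative sketch, not from the upstream docs; a shape-preserving residual block is assumed, as in the pix2pixHD reference implementation):
>>> block = modulus.models.pix2pix.pix2pix.ResnetBlock(dimension=2, channels=8)
>>> input = torch.randn(4, 8, 32, 32)  # (N, C, H, W); C must match channels
>>> output = block(input)  # output shape assumed to match the input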
- class modulus.models.rnn.rnn_one2many.MetaData(name: str = 'One2ManyRNN', jit: bool = False, cuda_graphs: bool = False, amp: bool = True, amp_cpu: bool = None, amp_gpu: bool = None, torch_fx: bool = True, onnx: bool = False, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = -1, func_torch: bool = False, auto_grad: bool = False)[source]
Bases: ModelMetaData
- class modulus.models.rnn.rnn_one2many.One2ManyRNN(input_channels: int, dimension: int = 2, nr_latent_channels: int = 512, nr_residual_blocks: int = 2, activation_fn: Union[Module, List[Module]] = ReLU(), nr_downsamples: int = 2, nr_tsteps: int = 32)[source]
Bases: Module
An RNN model with encoder/decoder for 2D/3D problems that provides predictions from a single initial condition.
- Parameters
input_channels (int) – Number of channels in the input
dimension (int, optional) – Spatial dimension of the input. Only 2d and 3d are supported, by default 2
nr_latent_channels (int, optional) – Channels for encoding/decoding, by default 512
nr_residual_blocks (int, optional) – Number of residual blocks, by default 2
activation_fn (Union[nn.Module, List[nn.Module]], optional) – Activation function to use, by default nn.ReLU()
nr_downsamples (int, optional) – Number of downsamples, by default 2
nr_tsteps (int, optional) – Time steps to predict, by default 32
Example
>>> model = modulus.models.rnn.One2ManyRNN(
...     input_channels=6,
...     dimension=2,
...     nr_latent_channels=32,
...     activation_fn=torch.nn.ReLU(),
...     nr_downsamples=2,
...     nr_tsteps=16,
... )
>>> input = torch.randn(4, 6, 1, 16, 16)  # [N, C, T, H, W] with T=1 input step
>>> output = model(input)
>>> output.size()
torch.Size([4, 6, 16, 16, 16])
- forward(x: Tensor) → Tensor[source]
Forward pass
- Parameters
x (Tensor) – Expects a tensor of size [N, C, 1, H, W] for 2D or [N, C, 1, D, H, W] for 3D, where N is the batch size, C is the number of channels, 1 is the number of input timesteps, and D, H, W are spatial dimensions.
- Returns
Size [N, C, T, H, W] for 2D or [N, C, T, D, H, W] for 3D, where T is the number of timesteps being predicted.
- Return type
Tensor
- class modulus.models.rnn.rnn_seq2seq.MetaData(name: str = 'Seq2SeqRNN', jit: bool = False, cuda_graphs: bool = False, amp: bool = True, amp_cpu: bool = None, amp_gpu: bool = None, torch_fx: bool = True, onnx: bool = False, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = -1, func_torch: bool = False, auto_grad: bool = False)[source]
Bases: ModelMetaData
- class modulus.models.rnn.rnn_seq2seq.Seq2SeqRNN(input_channels: int, dimension: int = 2, nr_latent_channels: int = 512, nr_residual_blocks: int = 2, activation_fn: Union[Module, List[Module]] = ReLU(), nr_downsamples: int = 2, nr_tsteps: int = 32)[source]
Bases: Module
An RNN model with encoder/decoder for 2D/3D problems. Given inputs at time steps 0 to t-1, predicts the signal at time steps t to t + nr_tsteps.
- Parameters
input_channels (int) – Number of channels in the input
dimension (int, optional) – Spatial dimension of the input. Only 2d and 3d are supported, by default 2
nr_latent_channels (int, optional) – Channels for encoding/decoding, by default 512
nr_residual_blocks (int, optional) – Number of residual blocks, by default 2
activation_fn (Union[nn.Module, List[nn.Module]], optional) – Activation function to use, by default nn.ReLU()
nr_downsamples (int, optional) – Number of downsamples, by default 2
nr_tsteps (int, optional) – Time steps to predict, by default 32
Example
>>> model = modulus.models.rnn.Seq2SeqRNN(
...     input_channels=6,
...     dimension=2,
...     nr_latent_channels=32,
...     activation_fn=torch.nn.ReLU(),
...     nr_downsamples=2,
...     nr_tsteps=16,
... )
>>> input = torch.randn(4, 6, 16, 16, 16)  # [N, C, T, H, W]
>>> output = model(input)
>>> output.size()
torch.Size([4, 6, 16, 16, 16])
- forward(x: Tensor) → Tensor[source]
Forward pass
- Parameters
x (Tensor) – Expects a tensor of size [N, C, T, H, W] for 2D or [N, C, T, D, H, W] for 3D, where N is the batch size, C is the number of channels, T is the number of input timesteps, and D, H, W are spatial dimensions. Currently, the number of input time steps must equal the number of predicted time steps.
- Returns
Size [N, C, T, H, W] for 2D or [N, C, T, D, H, W] for 3D, where T is the number of timesteps being predicted.
- Return type
Tensor
- class modulus.models.srrn.super_res_net.ConvolutionalBlock3d(in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, batch_norm: bool = False, activation_fn: Module = Identity())[source]
Bases: Module
3D convolutional block
- Parameters
in_channels (int) – Input channels
out_channels (int) – Output channels
kernel_size (int) – Kernel size
stride (int, optional) – Convolutional stride, by default 1
batch_norm (bool, optional) – Use batchnorm, by default False
- forward(input: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
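Example (an illustrative sketch, not from the upstream docs; same-size padding at stride 1 is assumed, following the referenced super-resolution tutorial implementation):
>>> block = modulus.models.srrn.super_res_net.ConvolutionalBlock3d(
...     in_channels=1,
...     out_channels=4,
...     kernel_size=3,
... )
>>> input = torch.randn(2, 1, 8, 8, 8)  # (N, C, D, H, W)
>>> output = block(input)  # expected (2, 4, 8, 8, 8) under the same-size padding assumption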
- class modulus.models.srrn.super_res_net.MetaData(name: str = 'SuperResolution', jit: bool = True, cuda_graphs: bool = False, amp: bool = False, amp_cpu: bool = False, amp_gpu: bool = False, torch_fx: bool = False, onnx: bool = True, onnx_gpu: bool = None, onnx_cpu: bool = None, onnx_runtime: bool = False, trt: bool = False, var_dim: int = 1, func_torch: bool = True, auto_grad: bool = True)[source]
Bases: ModelMetaData
- class modulus.models.srrn.super_res_net.PixelShuffle3d(scale: int)[source]
Bases: Module
3D pixel-shuffle operation
- Parameters
scale (int) – Factor to downscale channel count by
- forward(input: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
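Example (an illustrative sketch, not from the upstream docs; standard 3D pixel-shuffle semantics are assumed: channel count divided by scale**3, each spatial dimension multiplied by scale):
>>> shuffle = modulus.models.srrn.super_res_net.PixelShuffle3d(scale=2)
>>> input = torch.randn(2, 16, 4, 4, 4)  # channel count must be divisible by scale**3
>>> output = shuffle(input)  # expected (2, 2, 8, 8, 8)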
- class modulus.models.srrn.super_res_net.ResidualConvBlock3d(n_layers: int = 1, kernel_size: int = 3, conv_layer_size: int = 64, activation_fn: Module = Identity())[source]
Bases: Module
3D ResNet block
- Parameters
n_layers (int, optional) – Number of convolutional layers, by default 1
kernel_size (int, optional) – Kernel size, by default 3
conv_layer_size (int, optional) – Latent channel size, by default 64
activation_fn (nn.Module, optional) – Activation function, by default nn.Identity()
- forward(input: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
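Example (an illustrative sketch, not from the upstream docs; a shape-preserving residual block whose input channel count matches conv_layer_size is assumed):
>>> block = modulus.models.srrn.super_res_net.ResidualConvBlock3d(
...     n_layers=2,
...     kernel_size=3,
...     conv_layer_size=8,
... )
>>> input = torch.randn(2, 8, 8, 8, 8)  # (N, C, D, H, W) with C = conv_layer_size
>>> output = block(input)  # output shape assumed to match the input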
- class modulus.models.srrn.super_res_net.SRResNet(in_channels: int, out_channels: int, large_kernel_size: int = 7, small_kernel_size: int = 3, conv_layer_size: int = 32, n_resid_blocks: int = 8, scaling_factor: int = 8, activation_fn: Module = PReLU(num_parameters=1))[source]
Bases: Module
3D convolutional super-resolution network
- Parameters
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
large_kernel_size (int, optional) – convolutional kernel size for first and last convolution, by default 7
small_kernel_size (int, optional) – convolutional kernel size for internal convolutions, by default 3
conv_layer_size (int, optional) – Latent channel size, by default 32
n_resid_blocks (int, optional) – Number of residual blocks, by default 8
scaling_factor (int, optional) – Scaling factor to increase the output feature size compared to the input (2, 4, or 8), by default 8
activation_fn (Activation, optional) – Activation function, by default Activation.PRELU
Example
>>> # 3D convolutional encoder decoder
>>> model = modulus.models.srrn.SRResNet(
...     in_channels=1,
...     out_channels=2,
...     conv_layer_size=4,
...     scaling_factor=2)
>>> input = torch.randn(4, 1, 8, 8, 8)  # (N, C, D, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([4, 2, 16, 16, 16])
Note: Based on the implementation: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution
- forward(in_vars: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- class modulus.models.srrn.super_res_net.SubPixel_ConvolutionalBlock3d(kernel_size: int = 3, conv_layer_size: int = 64, scaling_factor: int = 2)[source]
Bases: Module
Convolutional block with Pixel Shuffle operation
- Parameters
kernel_size (int, optional) – Kernel size, by default 3
conv_layer_size (int, optional) – Latent channel size, by default 64
scaling_factor (int, optional) – Pixel shuffle scaling factor, by default 2
- forward(input: Tensor) → Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
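Example (an illustrative sketch, not from the upstream docs; it assumes the SRGAN-style pattern of a convolution expanding channels by scaling_factor**3 followed by the 3D pixel shuffle, so the channel count is preserved while spatial dimensions grow by scaling_factor):
>>> block = modulus.models.srrn.super_res_net.SubPixel_ConvolutionalBlock3d(
...     kernel_size=3,
...     conv_layer_size=8,
...     scaling_factor=2,
... )
>>> input = torch.randn(2, 8, 4, 4, 4)  # (N, C, D, H, W) with C = conv_layer_size
>>> output = block(input)  # expected (2, 8, 8, 8, 8)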