Graph Neural Networks#
- class physicsnemo.models.meshgraphnet.meshgraphnet.MeshGraphNet(*args, **kwargs)[source]#
Bases: Module
MeshGraphNet network architecture
- Parameters:
input_dim_nodes (int) – Number of node features
input_dim_edges (int) – Number of edge features
output_dim (int) – Number of outputs
processor_size (int, optional) – Number of message passing blocks, by default 15
mlp_activation_fn (Union[str, List[str]], optional) – Activation function to use, by default ‘relu’
num_layers_node_processor (int, optional) – Number of MLP layers for processing nodes in each message passing block, by default 2
num_layers_edge_processor (int, optional) – Number of MLP layers for processing edge features in each message passing block, by default 2
hidden_dim_processor (int, optional) – Hidden layer size for the message passing blocks, by default 128
hidden_dim_node_encoder (int, optional) – Hidden layer size for the node feature encoder, by default 128
num_layers_node_encoder (Union[int, None], optional) – Number of MLP layers for the node feature encoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no node encoder
hidden_dim_edge_encoder (int, optional) – Hidden layer size for the edge feature encoder, by default 128
num_layers_edge_encoder (Union[int, None], optional) – Number of MLP layers for the edge feature encoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no edge encoder
hidden_dim_node_decoder (int, optional) – Hidden layer size for the node feature decoder, by default 128
num_layers_node_decoder (Union[int, None], optional) – Number of MLP layers for the node feature decoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no decoder
aggregation (str, optional) – Message aggregation type, by default “sum”
do_concat_trick (bool, optional) – Whether to replace concat+MLP with MLP+idx+sum, by default False
num_processor_checkpoint_segments (int, optional) – Number of processor segments for gradient checkpointing, by default 0 (checkpointing disabled)
checkpoint_offloading (bool, optional) – Whether to offload the checkpointing to the CPU, by default False
Example
>>> # `norm_type` in MeshGraphNet is deprecated.
>>> # TE will be automatically used if possible unless told otherwise
>>> # (you don't have to set this variable; it's faster to use TE!).
>>> # Example of how to disable:
>>> import os
>>> os.environ['PHYSICSNEMO_FORCE_TE'] = 'False'
>>>
>>> model = physicsnemo.models.meshgraphnet.MeshGraphNet(
...     input_dim_nodes=4,
...     input_dim_edges=3,
...     output_dim=2,
... )
>>> graph = dgl.rand_graph(10, 5)
>>> node_features = torch.randn(10, 4)
>>> edge_features = torch.randn(5, 3)
>>> output = model(node_features, edge_features, graph)
>>> output.size()
torch.Size([10, 2])
Note
Reference: Pfaff, Tobias, et al. “Learning mesh-based simulation with graph networks.” arXiv preprint arXiv:2010.03409 (2020).
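The aggregation parameter above controls how per-edge messages are combined at their destination nodes inside each message passing block. A minimal pure-Python sketch of what "sum" versus "mean" aggregation means (a hypothetical illustration, not the library's tensorized implementation):

```python
# Hypothetical sketch of message aggregation in a message passing block.
# Not PhysicsNeMo code: it only illustrates aggregation="sum" vs. "mean".

def aggregate(messages, dst, num_nodes, mode="sum"):
    """Combine per-edge scalar messages at their destination nodes.

    messages: one message value per edge
    dst:      destination node index per edge
    """
    sums = [0.0] * num_nodes
    counts = [0] * num_nodes
    for msg, node in zip(messages, dst):
        sums[node] += msg
        counts[node] += 1
    if mode == "sum":
        return sums
    if mode == "mean":
        return [s / c if c else 0.0 for s, c in zip(sums, counts)]
    raise ValueError(f"unknown aggregation: {mode}")

messages = [1.0, 2.0, 3.0]  # one message per edge
dst = [0, 0, 1]             # edges 0 and 1 both point at node 0
print(aggregate(messages, dst, num_nodes=2, mode="sum"))   # [3.0, 3.0]
print(aggregate(messages, dst, num_nodes=2, mode="mean"))  # [1.5, 3.0]
```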
- class physicsnemo.models.meshgraphnet.meshgraphnet.MeshGraphNetProcessor(
- processor_size: int = 15,
- input_dim_node: int = 128,
- input_dim_edge: int = 128,
- num_layers_node: int = 2,
- num_layers_edge: int = 2,
- aggregation: str = 'sum',
- norm_type: str = 'LayerNorm',
- activation_fn: Module = ReLU(),
- do_concat_trick: bool = False,
- num_processor_checkpoint_segments: int = 0,
- checkpoint_offloading: bool = False,
Bases: Module
MeshGraphNet processor block
- run_function(
- segment_start: int,
- segment_end: int,
Custom forward for gradient checkpointing
- Parameters:
segment_start (int) – Layer index as start of the segment
segment_end (int) – Layer index as end of the segment
- Returns:
Custom forward function
- Return type:
Callable
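run_function returns a callable that executes only the processor layers in [segment_start, segment_end), which is the shape gradient checkpointing expects. A pure-Python sketch of that segment-runner pattern (hypothetical; the real processor hands such closures to torch's checkpointing utilities):

```python
# Hypothetical sketch of the segment-runner pattern behind run_function.
# Not PhysicsNeMo code: layers here are plain Python callables standing in
# for message passing blocks.

def make_run_function(layers, segment_start, segment_end):
    """Return a forward function over layers[segment_start:segment_end]."""
    def run(x):
        for layer in layers[segment_start:segment_end]:
            x = layer(x)
        return x
    return run

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
first_half = make_run_function(layers, 0, 2)   # runs layers 0 and 1
second_half = make_run_function(layers, 2, 3)  # runs layer 2
print(second_half(first_half(5)))  # ((5 + 1) * 2) - 3 = 9
```

Splitting the forward pass into such segments lets each segment recompute its activations during the backward pass instead of storing them.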
- class physicsnemo.models.mesh_reduced.mesh_reduced.Mesh_Reduced(
- input_dim_nodes: int,
- input_dim_edges: int,
- output_decode_dim: int,
- output_encode_dim: int = 3,
- processor_size: int = 15,
- num_layers_node_processor: int = 2,
- num_layers_edge_processor: int = 2,
- hidden_dim_processor: int = 128,
- hidden_dim_node_encoder: int = 128,
- num_layers_node_encoder: int = 2,
- hidden_dim_edge_encoder: int = 128,
- num_layers_edge_encoder: int = 2,
- hidden_dim_node_decoder: int = 128,
- num_layers_node_decoder: int = 2,
- k: int = 3,
- aggregation: str = 'mean',
Bases: Module
PbGMR-GMUS architecture.
A mesh-reduced architecture that combines encoding and decoding processors for physics prediction in reduced mesh space.
- Parameters:
input_dim_nodes (int) – Number of node features.
input_dim_edges (int) – Number of edge features.
output_decode_dim (int) – Number of decoding outputs (per node).
output_encode_dim (int, optional) – Number of encoding outputs (per pivotal position), by default 3.
processor_size (int, optional) – Number of message passing blocks, by default 15.
num_layers_node_processor (int, optional) – Number of MLP layers for processing nodes in each message passing block, by default 2.
num_layers_edge_processor (int, optional) – Number of MLP layers for processing edge features in each message passing block, by default 2.
hidden_dim_processor (int, optional) – Hidden layer size for the message passing blocks, by default 128.
hidden_dim_node_encoder (int, optional) – Hidden layer size for the node feature encoder, by default 128.
num_layers_node_encoder (int, optional) – Number of MLP layers for the node feature encoder, by default 2.
hidden_dim_edge_encoder (int, optional) – Hidden layer size for the edge feature encoder, by default 128.
num_layers_edge_encoder (int, optional) – Number of MLP layers for the edge feature encoder, by default 2.
hidden_dim_node_decoder (int, optional) – Hidden layer size for the node feature decoder, by default 128.
num_layers_node_decoder (int, optional) – Number of MLP layers for the node feature decoder, by default 2.
k (int, optional) – Number of nodes considered per pivotal position, by default 3.
aggregation (str, optional) – Message aggregation type, by default “mean”.
Notes
Reference: Han, Xu, et al. “Predicting physics in mesh-reduced space with temporal attention.” arXiv preprint arXiv:2201.09113 (2022).
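A shape-only sketch of how features flow through this pipeline: node features on the full mesh are encoded down to output_encode_dim features per pivotal position, and decoding maps them back to output_decode_dim features per mesh node. The helper below is hypothetical bookkeeping, not part of the library:

```python
# Hypothetical shape bookkeeping for the PbGMR-GMUS pipeline.
# Not PhysicsNeMo code: it only tracks feature-tensor shapes, assuming
# encode produces output_encode_dim features per pivotal position and
# decode produces output_decode_dim features per mesh node.

def pipeline_shapes(num_mesh_nodes, num_pivotal, input_dim_nodes,
                    output_encode_dim=3, output_decode_dim=4):
    mesh_in = (num_mesh_nodes, input_dim_nodes)     # full-mesh input
    encoded = (num_pivotal, output_encode_dim)      # reduced (pivotal) space
    decoded = (num_mesh_nodes, output_decode_dim)   # back on the full mesh
    return mesh_in, encoded, decoded

print(pipeline_shapes(num_mesh_nodes=1000, num_pivotal=64, input_dim_nodes=5))
# ((1000, 5), (64, 3), (1000, 4))
```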
- decode(
- x,
- edge_features,
- graph,
- position_mesh,
- position_pivotal,
Decode pivotal features back to mesh space.
- Parameters:
x (torch.Tensor) – Input features in pivotal space.
edge_features (torch.Tensor) – Edge features.
graph (Union[DGLGraph, pyg.data.Data]) – Input graph.
position_mesh (torch.Tensor) – Mesh positions.
position_pivotal (torch.Tensor) – Pivotal positions.
- Returns:
Decoded features in mesh space.
- Return type:
torch.Tensor
- encode(
- x,
- edge_features,
- graph,
- position_mesh,
- position_pivotal,
Encode mesh features to pivotal space.
- Parameters:
x (torch.Tensor) – Input node features.
edge_features (torch.Tensor) – Edge features.
graph (Union[DGLGraph, pyg.data.Data]) – Input graph.
position_mesh (torch.Tensor) – Mesh positions.
position_pivotal (torch.Tensor) – Pivotal positions.
- Returns:
Encoded features in pivotal space.
- Return type:
torch.Tensor
- knn_interpolate(
- x: Tensor,
- pos_x: Tensor,
- pos_y: Tensor,
- batch_x: Tensor = None,
- batch_y: Tensor = None,
- k: int = 3,
- num_workers: int = 1,
Perform k-nearest neighbor interpolation.
- Parameters:
x (torch.Tensor) – Input features to interpolate.
pos_x (torch.Tensor) – Source positions.
pos_y (torch.Tensor) – Target positions.
batch_x (torch.Tensor, optional) – Batch indices for source positions, by default None.
batch_y (torch.Tensor, optional) – Batch indices for target positions, by default None.
k (int, optional) – Number of nearest neighbors to consider, by default 3.
num_workers (int, optional) – Number of workers for parallel processing, by default 1.
- Returns:
torch.Tensor – Interpolated features.
torch.Tensor – Source indices.
torch.Tensor – Target indices.
torch.Tensor – Interpolation weights.
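The interpolation follows the common k-NN scheme: each target position receives an inverse-squared-distance weighted average of the features at its k nearest source positions. A minimal pure-Python sketch for scalar features (a hypothetical illustration, not the batched, tensorized library routine):

```python
# Minimal sketch of k-NN inverse-distance interpolation for scalar features.
# Not the PhysicsNeMo routine: no batching, plain lists instead of tensors.

def knn_interpolate_sketch(x, pos_x, pos_y, k=3, eps=1e-16):
    """Interpolate features x defined at pos_x onto target positions pos_y."""
    out = []
    for py in pos_y:
        # squared distances from this target to every source point
        d2 = [sum((a - b) ** 2 for a, b in zip(px, py)) for px in pos_x]
        nearest = sorted(range(len(pos_x)), key=lambda i: d2[i])[:k]
        weights = [1.0 / (d2[i] + eps) for i in nearest]  # closer => heavier
        total = sum(weights)
        out.append(sum(w * x[i] for w, i in zip(weights, nearest)) / total)
    return out

features = [0.0, 10.0]          # one scalar feature per source point
src = [(0.0, 0.0), (1.0, 0.0)]  # source positions
tgt = [(0.5, 0.0)]              # target equidistant from both sources
print(knn_interpolate_sketch(features, src, tgt, k=2))  # [5.0]
```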
- class physicsnemo.models.meshgraphnet.bsms_mgn.BiStrideMeshGraphNet(*args, **kwargs)[source]#
Bases: MeshGraphNet
Bi-stride MeshGraphNet network architecture
- Parameters:
input_dim_nodes (int) – Number of node features
input_dim_edges (int) – Number of edge features
output_dim (int) – Number of outputs
processor_size (int, optional) – Number of message passing blocks, by default 15
mlp_activation_fn (Union[str, List[str]], optional) – Activation function to use, by default ‘relu’
num_layers_node_processor (int, optional) – Number of MLP layers for processing nodes in each message passing block, by default 2
num_layers_edge_processor (int, optional) – Number of MLP layers for processing edge features in each message passing block, by default 2
hidden_dim_processor (int, optional) – Hidden layer size for the message passing blocks, by default 128
hidden_dim_node_encoder (int, optional) – Hidden layer size for the node feature encoder, by default 128
num_layers_node_encoder (Union[int, None], optional) – Number of MLP layers for the node feature encoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no node encoder
hidden_dim_edge_encoder (int, optional) – Hidden layer size for the edge feature encoder, by default 128
num_layers_edge_encoder (Union[int, None], optional) – Number of MLP layers for the edge feature encoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no edge encoder
hidden_dim_node_decoder (int, optional) – Hidden layer size for the node feature decoder, by default 128
num_layers_node_decoder (Union[int, None], optional) – Number of MLP layers for the node feature decoder, by default 2. If None is provided, the MLP will collapse to an Identity function, i.e. no decoder
aggregation (str, optional) – Message aggregation type, by default “sum”
do_concat_trick (bool, optional) – Whether to replace concat+MLP with MLP+idx+sum, by default False
num_processor_checkpoint_segments (int, optional) – Number of processor segments for gradient checkpointing, by default 0 (checkpointing disabled). The number of segments should be a factor of 2 * processor_size, for example, if processor_size is 15, then num_processor_checkpoint_segments can be 10 since it’s a factor of 15 * 2 = 30. It is recommended to start with a smaller number of segments until the model fits into memory since each segment will affect model training speed.
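Per the constraint above, valid values of num_processor_checkpoint_segments are the factors of 2 * processor_size. A small helper to enumerate them (hypothetical, not part of the library):

```python
# Hypothetical helper: enumerate valid checkpoint segment counts for a
# given processor_size, i.e. the factors of 2 * processor_size, following
# the constraint stated in the docs above.

def valid_checkpoint_segments(processor_size):
    total = 2 * processor_size
    return [n for n in range(1, total + 1) if total % n == 0]

print(valid_checkpoint_segments(15))
# [1, 2, 3, 5, 6, 10, 15, 30]
```

As the docs suggest, start from the small end of this list and increase only until the model fits into memory, since more segments means more recomputation per training step.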