Recurrent Neural Networks#

class physicsnemo.models.rnn.rnn_one2many.One2ManyRNN(*args, **kwargs)[source]#

Bases: Module

An RNN model with an encoder/decoder for 2d/3d problems that produces predictions from a single initial condition.

Parameters:
  • input_channels (int) – Number of channels in the input

  • dimension (int, optional) – Spatial dimension of the input. Only 2d and 3d are supported, by default 2

  • nr_latent_channels (int, optional) – Channels for encoding/decoding, by default 512

  • nr_residual_blocks (int, optional) – Number of residual blocks, by default 2

  • activation_fn (str, optional) – Activation function to use, by default “relu”

  • nr_downsamples (int, optional) – Number of downsamples, by default 2

  • nr_tsteps (int, optional) – Time steps to predict, by default 32

Example

>>> import torch
>>> import physicsnemo.models.rnn
>>> model = physicsnemo.models.rnn.One2ManyRNN(
...     input_channels=6,
...     dimension=2,
...     nr_latent_channels=32,
...     activation_fn="relu",
...     nr_downsamples=2,
...     nr_tsteps=16,
... )
>>> invar = torch.randn(4, 6, 1, 16, 16)  # [N, C, T, H, W]; T=1, the single initial condition
>>> output = model(invar)
>>> output.size()
torch.Size([4, 6, 16, 16, 16])
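
The same class handles volumetric inputs via dimension=3. The sketch below assumes the 3d call mirrors the 2d shape convention with one extra depth axis, i.e. an assumed [N, C, T, D, H, W] layout; the commented output shape is extrapolated from the 2d example above, not documented on this page.

>>> model3d = physicsnemo.models.rnn.One2ManyRNN(
...     input_channels=6,
...     dimension=3,  # volumetric variant
...     nr_latent_channels=32,
...     nr_tsteps=16,
... )
>>> invar3d = torch.randn(2, 6, 1, 8, 8, 8)  # assumed [N, C, T, D, H, W] layout
>>> output3d = model3d(invar3d)  # expected [2, 6, 16, 8, 8, 8] under that assumption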
class physicsnemo.models.rnn.rnn_seq2seq.Seq2SeqRNN(*args, **kwargs)[source]#

Bases: Module

An RNN model with an encoder/decoder for 2d/3d problems. Given the input sequence for time steps 0 to t-1, predicts the signal for the next nr_tsteps time steps, i.e. t through t + nr_tsteps - 1.

Parameters:
  • input_channels (int) – Number of channels in the input

  • dimension (int, optional) – Spatial dimension of the input. Only 2d and 3d are supported, by default 2

  • nr_latent_channels (int, optional) – Channels for encoding/decoding, by default 512

  • nr_residual_blocks (int, optional) – Number of residual blocks, by default 2

  • activation_fn (str, optional) – Activation function to use, by default “relu”

  • nr_downsamples (int, optional) – Number of downsamples, by default 2

  • nr_tsteps (int, optional) – Time steps to predict, by default 32

Example

>>> import torch
>>> import physicsnemo.models.rnn
>>> model = physicsnemo.models.rnn.Seq2SeqRNN(
...     input_channels=6,
...     dimension=2,
...     nr_latent_channels=32,
...     activation_fn="relu",
...     nr_downsamples=2,
...     nr_tsteps=16,
... )
>>> invar = torch.randn(4, 6, 16, 16, 16)  # [N, C, T, H, W]; a 16-step input sequence
>>> output = model(invar)
>>> output.size()
torch.Size([4, 6, 16, 16, 16])
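
Reusing the model constructed above, here is a minimal single-step training sketch, assuming paired input/target windows of shape [N, C, T, H, W]. The Adam optimizer, learning rate, MSE loss, and the random stand-in tensors are illustrative choices, not part of the PhysicsNeMo API.

>>> optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # illustrative optimizer/lr
>>> loss_fn = torch.nn.MSELoss()  # illustrative loss choice
>>> invar = torch.randn(4, 6, 16, 16, 16)   # input window  [N, C, T, H, W]
>>> target = torch.randn(4, 6, 16, 16, 16)  # target window (random stand-in for real data)
>>> optimizer.zero_grad()
>>> loss = loss_fn(model(invar), target)
>>> loss.backward()
>>> optimizer.step()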