Weather / Climate Models#
- class physicsnemo.models.dlwp.dlwp.DLWP(*args, **kwargs)[source]#
Bases: Module
Convolutional U-Net for Deep Learning Weather Prediction on cubed-sphere grids.
This model operates on cubed-sphere data with six faces and applies face-aware padding so that convolutions respect cubed-sphere connectivity.
Based on Weyn et al. (2021).
- Parameters:
nr_input_channels (int) – Number of input channels \(C_{in}\).
nr_output_channels (int) – Number of output channels \(C_{out}\).
nr_initial_channels (int, optional, default=64) – Number of channels in the first convolution block \(C_{init}\). Defaults to 64.
activation_fn (str, optional, default="leaky_relu") – Activation name resolved with get_activation(). Defaults to "leaky_relu".
depth (int, optional, default=2) – Depth of the U-Net encoder/decoder stacks. Defaults to 2.
clamp_activation (Tuple[float | int | None, float | int | None], optional, default=(None, 10.0)) – Minimum and maximum bounds applied via torch.clamp after activation. Defaults to (None, 10.0).
- Forward:
cubed_sphere_input (torch.Tensor) – Input tensor of shape \((B, C_{in}, F, H, W)\) with \(F=6\) faces.
- Outputs:
torch.Tensor – Output tensor of shape \((B, C_{out}, F, H, W)\).
Examples
>>> import torch
>>> from physicsnemo.models import DLWP
>>> model = DLWP(nr_input_channels=2, nr_output_channels=4)
>>> inputs = torch.randn(4, 2, 6, 64, 64)
>>> outputs = model(inputs)
>>> outputs.shape
torch.Size([4, 4, 6, 64, 64])
- activation(
- x: Float[Tensor, 'batch channels height width'],
Apply activation and optional clamping to a face tensor.
- Parameters:
x (torch.Tensor) – Input face tensor of shape \((B, C, H, W)\).
- Returns:
Activated face tensor of shape \((B, C, H, W)\).
- Return type:
torch.Tensor
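The default clamp_activation=(None, 10.0) leaves activations unbounded below and caps them at 10. A minimal sketch of the documented activation-then-clamp behavior, shown here with torch.nn.functional.leaky_relu standing in for the activation resolved by name:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[-5.0, 0.5, 20.0]])
>>> y = torch.clamp(F.leaky_relu(x), max=10.0)  # no lower bound, upper bound 10.0
>>> y.max()
tensor(10.)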
- class physicsnemo.models.dlwp_healpix.HEALPixRecUNet.HEALPixRecUNet(*args, **kwargs)[source]#
Bases: Module
Deep Learning Weather Prediction (DLWP) recurrent UNet on the HEALPix mesh.
- Parameters:
encoder (DictConfig) – Instantiable configuration for the U-Net encoder block.
decoder (DictConfig) – Instantiable configuration for the U-Net decoder block.
input_channels (int) – Number of prognostic input channels per time step.
output_channels (int) – Number of prognostic output channels per time step.
n_constants (int) – Number of constant channels provided for all faces.
decoder_input_channels (int) – Number of prescribed decoder input channels per time step.
input_time_dim (int) – Number of input time steps \(T_{in}\).
output_time_dim (int) – Number of output time steps \(T_{out}\).
delta_time (str, optional) – Time difference between samples, e.g., "6h". Defaults to "6h".
reset_cycle (str, optional) – Period for recurrent state reset, e.g., "24h". Defaults to "24h".
presteps (int, optional) – Number of warm-up steps used to initialize recurrent states.
enable_nhwc (bool, optional) – If True, use channels-last tensors.
enable_healpixpad (bool, optional) – Enable CUDA HEALPix padding when available.
couplings (list, optional) – Optional coupling specifications appended to the input feature channels.
- Forward:
inputs (Sequence[torch.Tensor]) – Inputs shaped \((B, F, T_{in}, C_{in}, H, W)\) plus decoder inputs, constants, and optional coupling tensors.
output_only_last (bool, optional) – If True, return only the final forecast step.
- Outputs:
torch.Tensor – Predictions shaped \((B, F, T_{out}, C_{out}, H, W)\).
- forward(
- inputs: Sequence,
- output_only_last: bool = False,
Forward pass of the recurrent HEALPix UNet.
- Parameters:
inputs (Sequence) – List [prognostics, decoder_inputs, constants] or [prognostics, decoder_inputs, constants, couplings] with shapes consistent with \((B, F, T, C, H, W)\).
output_only_last (bool, optional) – If True, return only the final forecast step.
- Returns:
Model outputs shaped \((B, F, T_{out}, C_{out}, H, W)\).
- Return type:
torch.Tensor
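The inputs list follows the ordering documented above. The sketch below only illustrates that packing; the channel counts, the time extent of the decoder inputs, and the constants layout are illustrative assumptions that depend on the encoder/decoder configuration, presteps, and couplings.
>>> import torch
>>> B, F, T_in, C_in, H, W = 1, 12, 2, 7, 32, 32  # 12 HEALPix base faces
>>> prognostics = torch.randn(B, F, T_in, C_in, H, W)
>>> decoder_inputs = torch.randn(B, F, T_in, 1, H, W)  # assumed shape (e.g. insolation)
>>> constants = torch.randn(F, 2, H, W)                # assumed per-face constants
>>> inputs = [prognostics, decoder_inputs, constants]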
- property integration_steps#
Number of implicit forward integration steps.
- Returns:
Integration horizon \(T_{out} / T_{in}\) (minimum 1).
- Return type:
int
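For example, with input_time_dim=2 and output_time_dim=4, the documented ratio gives two implicit integration steps:
>>> input_time_dim, output_time_dim = 2, 4
>>> max(output_time_dim // input_time_dim, 1)
2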
- class physicsnemo.models.fengwu.fengwu.Fengwu(*args, **kwargs)[source]#
Bases: Module
FengWu weather forecasting model.
This implementation follows FengWu: Pushing the Skillful Global Medium-range Weather Forecast beyond 10 Days Lead.
- Parameters:
img_size (tuple[int, int], optional, default=(721, 1440)) – Spatial resolution \((H, W)\) of all input and output fields.
pressure_level (int, optional, default=37) – Number of pressure levels \(L\).
embed_dim (int, optional, default=192) – Embedding channel size used in encoder/decoder/fuser blocks.
patch_size (tuple[int, int], optional, default=(4, 4)) – Patch size \((p_h, p_w)\) used by the hierarchical encoder/decoder.
num_heads (tuple[int, int, int, int], optional, default=(6, 12, 12, 6)) – Number of attention heads used at each stage.
window_size (tuple[int, int, int], optional, default=(2, 6, 12)) – Window size used by the transformer blocks.
- Forward:
x (torch.Tensor) – Input tensor of shape \((B, C_{in}, H, W)\) with \(C_{in} = 4 + 5L\).
- Outputs:
tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor] – Tuple (surface, z, r, u, v, t) where surface has shape \((B, 4, H, W)\) and z, r, u, v, t each have shape \((B, L, H, W)\).
- forward(
- x: Float[Tensor, 'batch channels lat lon'],
Run Fengwu forward prediction.
- Parameters:
x (torch.Tensor) – Concatenated input tensor of shape \((B, 4 + 5L, H, W)\).
- Returns:
Output tuple (surface, z, r, u, v, t) where surface has shape \((B, 4, H, W)\) and the other outputs have shape \((B, L, H, W)\).
- Return type:
tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
- prepare_input(
- surface: Float[Tensor, 'batch c_surface lat lon'],
- z: Float[Tensor, 'batch c_pressure lat lon'],
- r: Float[Tensor, 'batch c_pressure lat lon'],
- u: Float[Tensor, 'batch c_pressure lat lon'],
- v: Float[Tensor, 'batch c_pressure lat lon'],
- t: Float[Tensor, 'batch c_pressure lat lon'],
Prepare input fields by concatenating all variables along channels.
- Parameters:
surface (torch.Tensor) – Surface tensor of shape \((B, 4, H, W)\).
z (torch.Tensor) – Geopotential tensor of shape \((B, L, H, W)\).
r (torch.Tensor) – Relative humidity tensor of shape \((B, L, H, W)\).
u (torch.Tensor) – U-wind tensor of shape \((B, L, H, W)\).
v (torch.Tensor) – V-wind tensor of shape \((B, L, H, W)\).
t (torch.Tensor) – Temperature tensor of shape \((B, L, H, W)\).
- Returns:
Concatenated tensor of shape \((B, 4 + 5L, H, W)\).
- Return type:
torch.Tensor
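A minimal usage sketch combining prepare_input and forward at the documented defaults (img_size=(721, 1440), pressure_level=37). The full-resolution tensors are large, so treat this primarily as a shape reference.
>>> import torch
>>> from physicsnemo.models.fengwu.fengwu import Fengwu
>>> model = Fengwu()
>>> B, L, H, W = 1, 37, 721, 1440
>>> surface = torch.randn(B, 4, H, W)
>>> z, r, u, v, t = (torch.randn(B, L, H, W) for _ in range(5))
>>> x = model.prepare_input(surface, z, r, u, v, t)
>>> x.shape
torch.Size([1, 189, 721, 1440])
>>> surface_out, z_out, r_out, u_out, v_out, t_out = model(x)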
- class physicsnemo.models.pangu.pangu.Pangu(*args, **kwargs)[source]#
Bases: Module
Pangu weather forecasting model.
This implementation follows Pangu-Weather: A 3D High-Resolution Model for Fast and Accurate Global Weather Forecast.
- Parameters:
img_size (tuple[int, int], optional, default=(721, 1440)) – Spatial resolution \((H, W)\) of the latitude-longitude grid.
patch_size (tuple[int, int, int], optional, default=(2, 4, 4)) – Patch size \((p_l, p_h, p_w)\) for pressure-level and spatial axes.
embed_dim (int, optional, default=192) – Embedding channel size used throughout the transformer hierarchy.
num_heads (tuple[int, int, int, int], optional, default=(6, 12, 12, 6)) – Number of attention heads used at each stage.
window_size (tuple[int, int, int], optional, default=(2, 6, 12)) – Window size used by the transformer blocks.
- Forward:
x (torch.Tensor) – Input tensor of shape \((B, 72, H, W)\) where channels are arranged as surface (7) + upper_air (5 * 13).
- Outputs:
tuple[torch.Tensor, torch.Tensor] – Tuple (surface, upper_air) where surface has shape \((B, 4, H, W)\) and upper_air has shape \((B, 5, 13, H, W)\).
- forward(
- x: Float[Tensor, 'batch channels lat lon'],
Run Pangu forward prediction.
- Parameters:
x (torch.Tensor) – Concatenated input tensor of shape \((B, 72, H, W)\).
- Returns:
Output tuple (surface, upper_air) with shapes \((B, 4, H, W)\) and \((B, 5, 13, H, W)\).
- Return type:
tuple[torch.Tensor, torch.Tensor]
- prepare_input(
- surface: Float[Tensor, 'batch c_surface lat lon'],
- surface_mask: Float[Tensor, 'c_mask lat lon'] | Float[Tensor, 'batch c_mask lat lon'],
- upper_air: Float[Tensor, 'batch c_upper levels lat lon'],
Prepare input by combining surface, static masks, and upper-air fields.
- Parameters:
surface (torch.Tensor) – Surface tensor of shape \((B, 4, H, W)\).
surface_mask (torch.Tensor) – Static mask tensor of shape \((3, H, W)\) or \((B, 3, H, W)\).
upper_air (torch.Tensor) – Upper-air tensor of shape \((B, 5, 13, H, W)\).
- Returns:
Concatenated tensor of shape \((B, 72, H, W)\).
- Return type:
torch.Tensor
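A minimal usage sketch combining prepare_input and forward at the documented default resolution. The channel layout follows the shapes above (4 surface + 3 mask + 5 × 13 upper-air = 72); the full-resolution tensors are large, so treat this as a shape reference.
>>> import torch
>>> from physicsnemo.models.pangu.pangu import Pangu
>>> model = Pangu()
>>> B, H, W = 1, 721, 1440
>>> surface = torch.randn(B, 4, H, W)
>>> surface_mask = torch.randn(3, H, W)  # static masks, shared across the batch
>>> upper_air = torch.randn(B, 5, 13, H, W)
>>> x = model.prepare_input(surface, surface_mask, upper_air)
>>> x.shape
torch.Size([1, 72, 721, 1440])
>>> surface_out, upper_air_out = model(x)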
- class physicsnemo.models.swinvrnn.swinvrnn.SwinRNN(*args, **kwargs)[source]#
Bases: Module
SwinRNN weather forecasting model.
This implementation follows the SwinRNN architecture from SwinVRNN: A Data-Driven Ensemble Forecasting Model via Learned Distribution Perturbation.
- Parameters:
img_size (tuple[int, int, int], optional, default=(2, 721, 1440)) – Input size as \((T, H, W)\), where \(T\) is the number of input timesteps.
patch_size (tuple[int, int, int], optional, default=(2, 4, 4)) – Patch size as \((p_t, p_h, p_w)\) for cube embedding.
in_chans (int, optional, default=70) – Number of input channels.
out_chans (int, optional, default=70) – Number of output channels.
embed_dim (int, optional, default=1536) – Embedding channel size used by Swin blocks.
num_groups (int, optional, default=32) – Number of channel groups for convolutional blocks.
num_heads (int, optional, default=8) – Number of attention heads.
window_size (int, optional, default=7) – Local window size of Swin transformer blocks.
- Forward:
x (torch.Tensor) – Input tensor of shape \((B, C_{in}, T, H, W)\).
- Outputs:
torch.Tensor – Predicted tensor of shape \((B, C_{out}, H, W)\).
- forward(
- x: Float[Tensor, 'batch in_chans time lat lon'],
Run SwinRNN forward prediction.
- Parameters:
x (torch.Tensor) – Input tensor of shape \((B, C_{in}, T, H, W)\).
- Returns:
Prediction tensor of shape \((B, C_{out}, H, W)\).
- Return type:
torch.Tensor
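A minimal usage sketch at the documented defaults (img_size=(2, 721, 1440), in_chans=out_chans=70). The default embedding width makes this memory-intensive, so treat it as a shape reference.
>>> import torch
>>> from physicsnemo.models.swinvrnn.swinvrnn import SwinRNN
>>> model = SwinRNN()
>>> x = torch.randn(1, 70, 2, 721, 1440)  # (B, C_in, T, H, W)
>>> y = model(x)
>>> y.shape
torch.Size([1, 70, 721, 1440])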