bridge.diffusion.models.common.dit_embeddings#
Module Contents#
Classes#
ParallelTimestepEmbedding – A subclass of TimestepEmbedding that initializes the embedding layers with an optional random seed for synchronization. |
Data#
API#
- bridge.diffusion.models.common.dit_embeddings.log#
'getLogger(…)'
- class bridge.diffusion.models.common.dit_embeddings.ParallelTimestepEmbedding(
- in_channels: int,
- time_embed_dim: int,
- seed=None,
- )#
Bases: diffusers.models.embeddings.TimestepEmbedding

ParallelTimestepEmbedding is a subclass of TimestepEmbedding that initializes the embedding layers with an optional random seed for synchronization.
- Parameters:
in_channels (int) – Number of input channels.
time_embed_dim (int) – Dimension of the time embedding.
seed (int, optional) – Random seed for initializing the embedding layers. If None, no specific seed is set.
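The `seed` parameter exists so that every process builds byte-identical embedding weights. A minimal sketch of the idea, assuming a hypothetical `seeded_linear` helper (not part of this module) that seeds the global RNG before constructing a layer:

```python
import torch
import torch.nn as nn

def seeded_linear(in_features: int, out_features: int, seed=None) -> nn.Linear:
    """Hypothetical helper: seed the global RNG before layer construction,
    so independent processes draw identical initial parameters."""
    if seed is not None:
        torch.manual_seed(seed)
    return nn.Linear(in_features, out_features)

# Two "ranks" constructing the layer with the same seed get the same weights.
a = seeded_linear(256, 1024, seed=42)
b = seeded_linear(256, 1024, seed=42)
assert torch.equal(a.weight, b.weight)
```

With `seed=None`, each process would draw from its own RNG state and the initializations would diverge, which is why the class exposes the seed at construction time.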
- linear_1#
First linear layer for the embedding.
- Type:
nn.Module
- linear_2#
Second linear layer for the embedding.
- Type:
nn.Module
- __init__(in_channels, time_embed_dim, seed=None)#
Initializes the embedding layers.
- forward(x: torch.Tensor) → torch.Tensor#
Computes the positional embeddings for the input tensor.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, H, W, C).
- Returns:
Positional embeddings of shape (B, T, H, W, C).
- Return type:
torch.Tensor
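Per the docstring, `forward` maps a `(B, T, H, W, C)` tensor to embeddings of the same shape: the linear layers act on the trailing channel dimension while the batch, time, and spatial dimensions pass through. A shape-only sketch with a stand-in two-layer MLP (hypothetical shapes, not this module's actual layers):

```python
import torch
import torch.nn as nn

# Hypothetical shapes: B=2 clips, T=4 frames, an 8x8 grid, C=32 channels.
B, T, H, W, C = 2, 4, 8, 8, 32

# Stand-in for linear_1 -> activation -> linear_2; nn.Linear operates on the
# last dimension, so all leading dimensions are preserved.
embed = nn.Sequential(nn.Linear(C, C), nn.SiLU(), nn.Linear(C, C))

x = torch.randn(B, T, H, W, C)
out = embed(x)
assert out.shape == (B, T, H, W, C)
```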