Lightning
DataStep = Callable[[Iterator[DataT]], DataT]
module-attribute
Batches together an iterator of individual examples.
Necessary for compatibility with Megatron. This function type is similar to PyTorch's collate function.
A `DataStep` function takes an iterator over individual examples. Each example may be a tensor, a sequence of tensors, or a set of named tensors (provided as a `dict` mapping `str` names to each `Tensor`). Each iteration must yield the same type.
The output of this function mirrors the structure of each yielded example: it is a concatenation of all of the examples in the iterator.
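To make the contract concrete, below is a minimal sketch of a `DataStep`-style function for dict-of-tensor examples; the function name and the choice of `torch.stack` over concatenation are illustrative assumptions, not part of this module.

```python
from typing import Dict, Iterator

import torch
from torch import Tensor


def stack_examples(examples: Iterator[Dict[str, Tensor]]) -> Dict[str, Tensor]:
    """Hypothetical DataStep: combine same-keyed tensors from each example into one batch."""
    items = list(examples)
    if not items:
        raise ValueError("data_step received an empty iterator")
    # Every yielded example must share the same structure (here: the same dict keys).
    return {key: torch.stack([ex[key] for ex in items], dim=0) for key in items[0]}
```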
ForwardStep = Callable[[MegatronModelType, DataT], DataT]
module-attribute
Megatron-compatible forward pass function.
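A `ForwardStep` receives the Megatron model and the batch produced by the data step. The sketch below assumes a dict batch with `input_ids` and `attention_mask` keys; those names are placeholders, not a fixed contract of this module.

```python
from typing import Dict

from torch import Tensor


def simple_forward_step(model, batch: Dict[str, Tensor]) -> Tensor:
    """Hypothetical ForwardStep: unpack the batch and call the Megatron model."""
    # The keyword arguments must match the underlying model's forward signature.
    return model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
```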
BionemoLightningModule
Bases: `Generic[MegatronModelType, MegatronLossType]`, `LightningModule`, `IOMixin`, `ConnectorMixin`, `LightningPassthroughPredictionMixin`
Reusable PyTorch Lightning module for Megatron models that is compatible with NeMo's conventions.
Source code in bionemo/llm/lightning.py (lines 184-296).
__init__(config, forward_step, data_step, optimizer, model_transform=None, **model_construct_args)
Constructor.
Parameters:

Name | Type | Description | Default
---|---|---|---
`config` | `BionemoTrainableModelConfig[MegatronModelType, MegatronLossType]` | Serializable configuration object that allows one to construct a new model instance and loss function. Necessary for Megatron-based training, as the model itself cannot be serialized and distributed to nodes. Instead, we serialize the procedure for making the model and distribute that. | required
`forward_step` | `ForwardStep` | Performs the forward pass using the model and a batch of data. | required
`data_step` | `DataStep` | Custom batch-creating function for the model. | required
`optimizer` | `MegatronOptimizerModule` | Megatron-compatible distributed optimizer instance. Defaults to Adam with a 1e-4 learning rate. | required
`model_transform` | `Optional[Callable[[MegatronModelType], MegatronModelType]]` | Optional. The model transform function. | `None`
`**model_construct_args` | | Optional. Any arguments necessary to construct the model in the supplied model configuration's `configure_model` method. | `{}`
Source code in bionemo/llm/lightning.py (lines 193-230).
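For orientation, here is a hedged construction sketch; `MyConfig`, `my_forward_step`, and `my_data_step` are placeholders for a concrete `BionemoTrainableModelConfig` subclass and user-supplied callables, not names defined in this module.

```python
# Hypothetical wiring of the pieces documented above.
module = BionemoLightningModule(
    config=MyConfig(),                       # serializable recipe for building the model
    forward_step=my_forward_step,            # ForwardStep: (model, batch) -> output
    data_step=my_data_step,                  # DataStep: iterator of examples -> batch
    optimizer=default_megatron_optimizer(),  # Adam with lr=1e-4 (see below)
)
# The Megatron model itself is built lazily, on the first call to configure_model().
```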
configure_model()
Updates internal state: instantiates the model from the object's config and assigns it to the `model` attribute.
NOTE: this method is idempotent; successive calls have no effect. The model is only initialized once.
Source code in bionemo/llm/lightning.py (lines 232-248).
forward(*args, **kwargs)
Call the forward method of the underlying model, and return whatever it outputs.
Source code in bionemo/llm/lightning.py (lines 250-256).
forward_step(batch)
Megatron-required: the training forward step for the model, which must produce the loss.
Normally, the forward pass of a model means its inference: loss is computed by comparing the predictions from the forward pass against labels. Megatron unfortunately conflates these two concepts and instead has a model's `forward` method produce the loss. See the Megatron docs for details: https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/pipeline_parallel/schedules.py#L170
To get actual predictions, use the `forward` method instead.
Source code in bionemo/llm/lightning.py (lines 261-274).
predict_step(batch, batch_idx=None)
Alias for forward_step.
Source code in bionemo/llm/lightning.py (lines 284-286).
training_loss_reduction()
This is the function that takes batch['loss_mask'] and the logits output by the model and reduces the loss.
Source code in bionemo/llm/lightning.py (lines 288-290).
training_step(batch, batch_idx=None)
In mcore, the loss function is part of the forward pass when labels are provided.
Source code in bionemo/llm/lightning.py (lines 276-278).
validation_step(batch, batch_idx=None)
In mcore, the loss function is part of the forward pass when labels are provided.
Source code in bionemo/llm/lightning.py (lines 280-282).
LightningPassthroughPredictionMixin
A mixin that allows your model to do inference on the predict step by hijacking NeMo's loss reduction mechanism.
Source code in bionemo/llm/lightning.py (lines 158-163).
predict_loss_reduction()
For the predict step, pass through the forward pass output.
Source code in bionemo/llm/lightning.py (lines 161-163).
PassthroughLossReduction
Bases: `MegatronLossReduction`, `Generic[DataT]`
A workaround for NeMo/Megatron to perform inference.
Internally in NeMo 2.0, the forward step is always expected to return a loss-reduction class, and `forward` is expected to return a loss. This class hijacks that mechanism to instead pass through the forward output unperturbed as the loss (to enable inference in the predict step); the `reduce` method is then used to collate the batch of forward outputs into a single batch. This supports the model forward output being a tensor, dict, tuple, or list of tensors. The inner type must always be a Tensor.
Source code in bionemo/llm/lightning.py (lines 130-155).
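To make the hijack concrete, the following sketch shows the passthrough pattern in isolation; the zero-valued dummy loss and the use of `batch_collator` (documented below) are assumptions about the general approach, not the module's verbatim implementation.

```python
import torch


class PassthroughSketch:
    """Illustrative only: report a dummy loss while keeping the raw forward output."""

    def forward(self, batch, forward_out):
        # The trainer sees a "loss" tensor; the untouched predictions ride along
        # as the second tuple element for later collection.
        return torch.zeros(1), forward_out

    def reduce(self, forward_outs):
        # Collate the per-microbatch outputs into a single batch (see batch_collator below).
        return batch_collator(forward_outs)
```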
forward(batch, forward_out)
Passes through the `forward_out` value as the second tuple element.

Parameters:

Name | Type | Description | Default
---|---|---|---
`batch` | `DataT` | The batch of data that was passed through the model to generate output. NOTE: this value is ignored. | required
`forward_out` | `DataT` | The output from your model's forward pass. | required

Returns:

Type | Description
---|---
`Tuple[Tensor, DataT]` | A tuple containing the loss tensor (dummy in this case) and the forward output (unmodified).
Source code in bionemo/llm/lightning.py (lines 140-151).
reduce(forward_out)
Collates a list of the model's outputs into a single output.
Source code in bionemo/llm/lightning.py (lines 153-155).
PerplexityLoggingCallback
Bases: `Callback`, `CallbackMethods`
Megatron Callback to log perplexity in validation and optionally training.
NeMo 2.0 checks whether a callback is an instance of {LightningModule, LightningDataModule, Callback}, but only the megatron_hooks are useful here.
Source code in bionemo/llm/lightning.py (lines 306-392).
__init__(log_train=False, log_val=True)
Initialize PerplexityLoggingCallback.
Parameters:

Name | Type | Description | Default
---|---|---|---
`log_train` | `bool` | Whether to log train perplexity. Defaults to False. | `False`
`log_val` | `bool` | Whether to log validation perplexity. Defaults to True. | `True`
Source code in bionemo/llm/lightning.py (lines 312-321).
on_megatron_reduce_microbatches_end(step, microbatch_outputs, loss_reduction, reduced)
Log after MegatronReductionLoss.reduce is called.
Expected microbatch_outputs to be a list of dicts with the following keys:
- batch: dict of tensors with the following keys:
  - labels: [b s]
  - loss_mask: [b s]; 1 means included, 0 means ignored
- forward_out: dict of tensors with the following keys:
  - token_logits: [b s vocab]
Source code in bionemo/llm/lightning.py (lines 346-392).
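As a hedged illustration of what such a hook can compute from those tensors, the sketch below derives masked perplexity from `token_logits`, `labels`, and `loss_mask`; the tensor layouts follow the description above, and everything else (names, reduction choices) is an assumption rather than this callback's exact implementation.

```python
import torch
import torch.nn.functional as F


def masked_perplexity(token_logits: torch.Tensor, labels: torch.Tensor, loss_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: exponentiated mean cross-entropy over unmasked tokens."""
    # token_logits: [b, s, vocab]; labels and loss_mask: [b, s], with 1 = include.
    nll = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    ).reshape(labels.shape)
    mean_nll = (nll * loss_mask).sum() / loss_mask.sum()
    return torch.exp(mean_nll)
```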
batch_collator(batches)
Takes a sequence of batches and collates them into a single batch. This is distinct from the standard PyTorch `default_collate`, since it does not add a batch dimension; the batch dimension is assumed to already be present in the input, as would be the case when parallelizing across minibatches.
IMPORTANT: The underlying data primitive must be a torch Tensor. The input to this function is a recursive type; there can be any amount of nesting between dictionaries, tuples, and lists, as long as the inner type is an n-d Tensor.
Examples:

Outer container = Dict: [{'a': Tensor([1]), 'b': Tensor([2])}, {'a': Tensor([2]), 'b': Tensor([3])}] -> {'a': Tensor([1, 2]), 'b': Tensor([2, 3])}
Outer container = List: [[Tensor([1]), Tensor([2])], [Tensor([2]), Tensor([3])]] -> [Tensor([1, 2]), Tensor([2, 3])]
Outer container = Tuple: ([Tensor([1]), Tensor([2])], [Tensor([2]), Tensor([3])]) -> (Tensor([1, 2]), Tensor([2, 3]))

Parameters:

Name | Type | Description | Default
---|---|---|---
`batches` | `Optional[Sequence[ReductionT]]` | Sequence of batches to collate into a single batch. | required

Returns:

Type | Description
---|---
`Optional[ReductionT]` | A single batch of the same type as the elements of your input sequence.
Source code in bionemo/llm/lightning.py (lines 87-125).
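A hedged usage sketch follows, assuming dict-of-tensor microbatch outputs that already carry a batch dimension; the `token_logits` key and shapes are illustrative.

```python
import torch

# Two microbatch outputs, each already carrying a batch dimension of size 2.
outs = [
    {"token_logits": torch.randn(2, 8, 128)},
    {"token_logits": torch.randn(2, 8, 128)},
]
merged = batch_collator(outs)
# merged["token_logits"].shape == (4, 8, 128): concatenated along dim 0, no new dimension added.
```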
default_megatron_optimizer()
Default distributed optimizer uses Adam with a 1e-4 learning rate.
Source code in bionemo/llm/lightning.py (lines 299-303).
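A hedged sketch of an equivalent default is shown below, assuming NeMo's `MegatronOptimizerModule` and Megatron-core's `OptimizerConfig` are importable from these paths; treat the paths and fields as assumptions rather than this function's verbatim body.

```python
from megatron.core.optimizer import OptimizerConfig
from nemo.lightning.pytorch.optim import MegatronOptimizerModule


def default_optimizer_sketch() -> MegatronOptimizerModule:
    # Adam at lr=1e-4, backed by Megatron's distributed optimizer.
    return MegatronOptimizerModule(
        config=OptimizerConfig(lr=1e-4, optimizer="adam", use_distributed_optimizer=True),
    )
```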
some_first(seq)
Returns the first non-None value from the sequence, or fails.
Source code in bionemo/llm/lightning.py (lines 54-59).
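For illustration, a minimal equivalent might look like the sketch below; the specific exception type is an assumption.

```python
def some_first_sketch(seq):
    """Return the first non-None element of seq, raising if there is none (illustrative)."""
    for item in seq:
        if item is not None:
            return item
    raise ValueError("Expected at least one non-None value")
```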