morpheus.models.dfencoder.autoencoder.AutoEncoder
- class AutoEncoder(*, encoder_layers=None, decoder_layers=None, encoder_dropout=None, decoder_dropout=None, encoder_activations=None, decoder_activations=None, activation='relu', min_cats=10, swap_p=0.15, lr=0.01, batch_size=256, eval_batch_size=1024, optimizer='adam', amsgrad=False, momentum=0, betas=(0.9, 0.999), dampening=0, weight_decay=0, lr_decay=None, nesterov=False, verbose=False, device=None, distributed_training=False, logger='basic', logdir='logdir/', project_embeddings=True, run=None, progress_bar=True, n_megabatches=1, scaler='standard', patience=5, preset_cats=None, preset_numerical_scaler_params=None, binary_feature_list=None, loss_scaler='standard', **kwargs)[source]
Bases: torch.nn.modules.module.Module
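Example (a minimal end-to-end sketch; the column names, values, and layer sizes below are illustrative assumptions, not part of the API):
>>> import pandas as pd
>>> from morpheus.models.dfencoder.autoencoder import AutoEncoder
>>> df = pd.DataFrame({
...     "bytes_sent": [1024, 2048, 512, 4096],       # numerical feature
...     "is_admin": [False, True, False, False],     # binary feature
...     "app_name": ["ssh", "http", "ssh", "http"],  # categorical feature
... })
>>> model = AutoEncoder(
...     encoder_layers=[64, 32],  # assumed sizes; tune per dataset
...     decoder_layers=[32, 64],
...     lr=0.01,
...     batch_size=256,
...     progress_bar=False,
... )
>>> model.fit(df, epochs=5)
>>> scores = model.get_anomaly_score(df)  # one loss value per input row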
Methods

- add_module(name, module): Adds a child module to the current module.
- apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Returns an iterator over module buffers.
- children(): Returns an iterator over immediate children modules.
- compute_baseline_performance(in_, out_): Computes baseline performance by generating a strong prediction for the identity function.
- compute_loss_from_targets(num, bin, cat, ...): Computes the loss from targets.
- cpu(): Moves all model parameters and buffers to the CPU.
- cuda([device]): Moves all model parameters and buffers to the GPU.
- decode_outputs_to_df(num, bin, cat): Converts the model outputs of the numerical, binary, and categorical features back into a pandas dataframe.
- df_predict(df): Runs the end-to-end model and outputs a dataframe of model predictions with the same shape as the input.
- double(): Casts all floating point parameters and buffers to double datatype.
- encode_input(df): Handles raw df inputs.
- eval(): Sets the module in evaluation mode.
- extra_repr(): Sets the extra representation of the module.
- fit(train_data[, epochs, val_data, ...]): Does training in the specified mode (indicated by self.distributed_training).
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(*input): Defines the computation performed at every call.
- get_anomaly_score(df): Returns a per-row loss of the input dataframe.
- get_anomaly_score_losses(df): Runs the input dataframe df through the autoencoder to get the recovery losses by feature type (numerical/boolean/categorical).
- get_buffer(target): Returns the buffer given by target if it exists, otherwise throws an error.
- get_deep_stack_features(df): Records and outputs all internal representations of the input df as row-wise vectors.
- get_extra_state(): Returns any extra state to include in the module's state_dict.
- get_parameter(target): Returns the parameter given by target if it exists, otherwise throws an error.
- get_representation(df[, layer]): Computes a latent feature vector from a hidden layer.
- get_results_from_dataset(dataset, preloaded_df): Returns a pandas dataframe of inference results and losses for a given dataset.
- get_submodule(target): Returns the submodule given by target if it exists, otherwise throws an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Moves all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
- modules(): Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Returns an iterator over module parameters.
- prepare_df(df): Does data preparation on a copy of the input dataframe.
- preprocess_data(df, shuffle_rows_in_batch, ...): Preprocesses a pandas dataframe df for input into the autoencoder model.
- preprocess_train_data(df[, ...]): Wrapper around self.preprocess_data feeding in the args suitable for a training set.
- preprocess_validation_data(df[, ...]): Wrapper around self.preprocess_data feeding in the args suitable for a validation set.
- register_backward_hook(hook): Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Adds a buffer to the module.
- register_forward_hook(hook): Registers a forward hook on the module.
- register_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_full_backward_hook(hook): Registers a backward hook on the module.
- register_load_state_dict_post_hook(hook): Registers a post hook to be run after the module's load_state_dict is called.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Adds a parameter to the module.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- set_extra_state(state): Handles any extra state found within the state_dict when load_state_dict() is called.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]): Returns a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Moves and/or casts the parameters and buffers.
- to_empty(*, device): Moves the parameters and buffers to the specified device without copying storage.
- train([mode]): Sets the module in training mode.
- train_epoch(n_updates, input_df, df[, pbar]): Run regular epoch.
- train_megabatch_epoch(n_updates, df): Run epoch doing 'megabatch' updates, preprocessing data in large chunks.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Sets gradients of all model parameters to zero.

__call__
build_input_tensor
compute_loss
compute_targets
create_binary_col_max
create_categorical_col_max
create_numerical_col_max
do_backward
get_anomaly_score_with_losses
get_results
get_scaler
get_variable_importance
return_feature_names
scale_losses
- add_module(name, module)[source]
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
- Args:
  - name (str): name of the child module. The child module can be accessed from this module using the given name.
  - module (Module): child module to be added to the module.
- apply(fn)[source]
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also nn-init-doc).
- Args:
  - fn (Module -> None): function to be applied to each submodule
- Returns:
  Module: self
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- bfloat16()[source]
Casts all floating point parameters and buffers to bfloat16 datatype.
Note: This method modifies the module in-place.
- Returns:
Module: self
- buffers(recurse=True)[source]
Returns an iterator over module buffers.
- Args:
- recurse (bool): if True, then yields buffers of this module
and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor: module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- children()[source]
Returns an iterator over immediate children modules.
- Yields:
Module: a child module
- compute_baseline_performance(in_, out_)[source]
Baseline performance is computed by generating a strong prediction for the identity function (predicting input==output) with a swapped (noisy) input, and computing the loss against the unaltered original data.
This should be roughly the loss we expect when the encoder degenerates into the identity function solution.
Returns net loss on baseline performance computation (sum of all losses).
- compute_loss_from_targets(num, bin, cat, num_target, bin_target, cat_target, should_log=True, _id=False)[source]
Computes the loss from targets.
- Parameters
  - num : torch.Tensor
    numerical data tensor
  - bin : torch.Tensor
    binary data tensor
  - cat : List[torch.Tensor]
    list of categorical data tensors
  - num_target : torch.Tensor
    target numerical data tensor
  - bin_target : torch.Tensor
    target binary data tensor
  - cat_target : List[torch.Tensor]
    list of target categorical data tensors
  - should_log : bool, optional
    whether to log the loss in self.logger, by default True
  - _id : bool, optional
    whether the current step is an id validation step (for logging), by default False
- Returns
- Tuple[Union[float, List[float]]]
A tuple containing the mean mse/bce losses, list of mean cce losses, and mean net loss
- cpu()[source]
Moves all model parameters and buffers to the CPU.
Note: This method modifies the module in-place.
- Returns:
Module: self
- cuda(device=None)[source]
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note: This method modifies the module in-place.
- Args:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- decode_outputs_to_df(num, bin, cat)[source]
Converts the model outputs of the numerical, binary, and categorical features back into a pandas dataframe.
- df_predict(df)[source]
Runs the end-to-end model: interprets the output and creates a dataframe with the same shape as the input, containing the model predictions.
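Example (a sketch; assumes the model has already been fit on data with the same schema as df):
>>> preds = model.df_predict(df)
>>> preds.shape == df.shape  # predictions mirror the input layout
True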
- double()[source]
Casts all floating point parameters and buffers to double datatype.
Note: This method modifies the module in-place.
- Returns:
Module: self
- encode_input(df)[source]
Handles raw df inputs. Passes categories through embedding layers.
- eval()[source]
Sets the module in evaluation mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See locally-disable-grad-doc for a comparison between eval() and several similar mechanisms that may be confused with it.
- Returns:
Module: self
- extra_repr()[source]
Sets the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- fit(train_data, epochs=1, val_data=None, run_validation=False, use_val_for_loss_stats=False, rank=None, world_size=None)[source]
Does training in the specified mode (indicated by self.distributed_training).
- Parameters
  - train_data : pandas.DataFrame (centralized) or torch.utils.data.DataLoader (distributed)
    Data for training.
  - epochs : int, optional
    Number of epochs to run training, by default 1.
  - val_data : pandas.DataFrame (centralized) or torch.utils.data.DataLoader (distributed), optional
    Data for validation and computing loss stats, by default None.
  - run_validation : bool, optional
    Whether to collect validation loss for each epoch during training, by default False.
  - use_val_for_loss_stats : bool, optional
    Whether to use the validation set for loss statistics collection (for z-score calculation), by default False.
  - rank : int, optional
    The rank of the current process, by default None. Required for distributed training.
  - world_size : int, optional
    The total number of processes, by default None. Required for distributed training.
- Raises
  - TypeError
    If train_data is not a pandas dataframe in centralized training mode.
  - ValueError
    If rank and world_size are not provided in distributed training mode.
  - TypeError
    If train_data is not a pandas dataframe, a torch.utils.data.DataLoader, or a torch.utils.data.Dataset in distributed training mode.
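Example (centralized-mode sketch; train_df and val_df are assumed pandas DataFrames with identical schemas):
>>> model.fit(
...     train_df,
...     epochs=10,
...     val_data=val_df,
...     run_validation=True,          # collect validation loss each epoch
...     use_val_for_loss_stats=True,  # compute z-score loss stats from val_df
... )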
- float()[source]
Casts all floating point parameters and buffers to float datatype.
Note: This method modifies the module in-place.
- Returns:
Module: self
- forward(*input)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
- get_anomaly_score(df)[source]
Returns a per-row loss of the input dataframe. Does not corrupt inputs.
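Example (a sketch; assumes the returned scores align row-wise with df, so high-loss rows can be surfaced as anomalies):
>>> scores = model.get_anomaly_score(df)
>>> df.assign(anomaly_score=scores).nlargest(10, "anomaly_score")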
- get_anomaly_score_losses(df)[source]
Runs the input dataframe df through the autoencoder to get the recovery losses by feature type (numerical/boolean/categorical).
- get_buffer(target)[source]
Returns the buffer given by target if it exists, otherwise throws an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Args:
  - target: The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
  torch.Tensor: The buffer referenced by target
- Raises:
  - AttributeError: If the target string references an invalid path or resolves to something that is not a buffer
- get_deep_stack_features(df)[source]
Records and outputs all internal representations of the input df as row-wise vectors. Output is a 2-d array with len() == len(df).
- get_extra_state()[source]
Returns any extra state to include in the module's state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict().
Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
object: Any extra state to store in the module’s state_dict
- get_parameter(target)[source]
Returns the parameter given by target if it exists, otherwise throws an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Args:
  - target: The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
  torch.nn.Parameter: The Parameter referenced by target
- Raises:
  - AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Parameter
- get_representation(df, layer=0)[source]
Computes a latent feature vector from a hidden layer, given the input dataframe.
The argument layer (int) specifies which layer to get. By default (layer=0), returns the "encoding" layer. layer < 0 counts layers back from the encoding layer; layer > 0 counts layers forward from the encoding layer.
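Example (a sketch of the layer indexing convention described above):
>>> z = model.get_representation(df)                    # layer=0: the encoding layer
>>> z_shallow = model.get_representation(df, layer=-1)  # one layer before the encoding
>>> z_deep = model.get_representation(df, layer=1)      # one layer after the encoding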
- get_results_from_dataset(dataset, preloaded_df, return_abs=False)[source]
Returns a pandas dataframe of inference results and losses for a given dataset. Note: this function requires the whole inference set to be loaded into memory as a pandas df.
- Parameters
  - dataset : torch.utils.data.Dataset
    dataset for inference
  - preloaded_df : pd.DataFrame
    a pandas dataframe that contains the original data
  - return_abs : bool, optional
    whether the absolute value of the loss scalers should be returned, by default False
- Returns
- pd.DataFrame
inference result with losses of each feature
- get_submodule(target)[source]
Returns the submodule given by target if it exists, otherwise throws an error.
For example, let's say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Args:
  - target: The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
  torch.nn.Module: The submodule referenced by target
- Raises:
  - AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Module
- half()[source]
Casts all floating point parameters and buffers to half datatype.
Note: This method modifies the module in-place.
- Returns:
Module: self
- ipu(device=None)[source]
Moves all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.
Note: This method modifies the module in-place.
- Arguments:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- load_state_dict(state_dict, strict=True)[source]
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
- Args:
  - state_dict (dict): a dict containing parameters and persistent buffers.
  - strict (bool, optional): whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
- Returns:
  NamedTuple with missing_keys and unexpected_keys fields:
  - missing_keys is a list of str containing the missing keys
  - unexpected_keys is a list of str containing the unexpected keys
- Note:
  If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
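Example (a standard save/load round trip; the file name is illustrative):
>>> torch.save(model.state_dict(), "autoencoder.pt")
>>> model.load_state_dict(torch.load("autoencoder.pt"))
<All keys matched successfully>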
- modules()[source]
Returns an iterator over all modules in the network.
- Yields:
  Module: a module in the network
- Note:
  Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- named_buffers(prefix='', recurse=True)[source]
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Args:
  - prefix (str): prefix to prepend to all buffer names.
  - recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
  (str, torch.Tensor): Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- named_children()[source]
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(str, Module): Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- named_modules(memo=None, prefix='', remove_duplicate=True)[source]
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Args:
  - memo: a memo to store the set of modules already added to the result
  - prefix: a prefix that will be added to the name of the module
  - remove_duplicate: whether to remove the duplicated module instances in the result or not
- Yields:
  (str, Module): Tuple of name and module
- Note:
  Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix='', recurse=True)[source]
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Args:
  - prefix (str): prefix to prepend to all parameter names.
  - recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
  (str, Parameter): Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- parameters(recurse=True)[source]
Returns an iterator over module parameters.
This is typically passed to an optimizer.
- Args:
- recurse (bool): if True, then yields parameters of this module
and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter: module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- prepare_df(df)[source]
Does data preparation on a copy of the input dataframe.
- Parameters
  - df : pandas.DataFrame
    The pandas dataframe to process
- Returns
- pandas.DataFrame
A processed copy of df.
- preprocess_data(df, shuffle_rows_in_batch, include_original_input_tensor, include_swapped_input_by_feature_type)[source]
Preprocesses a pandas dataframe df for input into the autoencoder model.
- Parameters
  - df : pandas.DataFrame
    The input dataframe to preprocess.
  - shuffle_rows_in_batch : bool
    Whether to shuffle the rows of the dataframe before processing.
  - include_original_input_tensor : bool
    Whether to process the df into an input tensor without swapping and include it in the returned data dict. Note: training requires only the swapped input tensor, while validation can use both.
  - include_swapped_input_by_feature_type : bool
    Whether to process the swapped df into num/bin/cat feature tensors and include them in the returned data dict. This is useful for baseline performance evaluation during validation.
- Returns
- Dict[str, Union[int, torch.Tensor]]
A dict containing the preprocessed input data and targets by feature type.
- preprocess_train_data(df, shuffle_rows_in_batch=True)[source]
Wrapper around self.preprocess_data feeding in the args suitable for a training set.
- preprocess_validation_data(df, shuffle_rows_in_batch=False)[source]
Wrapper around self.preprocess_data feeding in the args suitable for a validation set.
- register_backward_hook(hook)[source]
Registers a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns:
  torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- register_buffer(name, tensor, persistent=True)[source]
Adds a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict.
Buffers can be accessed as attributes using given names.
- Args:
  - name (str): name of the buffer. The buffer can be accessed from this module using the given name
  - tensor (Tensor or None): buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored and the buffer is not included in the module's state_dict.
  - persistent (bool): whether the buffer is part of this module's state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook)[source]
Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called.
- Returns:
  torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
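Example (a sketch that logs output shapes; assumes the hooked module returns a single Tensor, which may not hold for every module):
>>> def log_shape(module, input, output):
...     print(type(module).__name__, tuple(output.shape))
>>> handle = model.register_forward_hook(log_shape)
>>> # ... run a forward pass here, then detach the hook ...
>>> handle.remove()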
- register_forward_pre_hook(hook)[source]
Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).
- Returns:
  torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- register_full_backward_hook(hook)[source]
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning: Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Returns:
  torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- register_load_state_dict_post_hook(hook)[source]
Registers a post hook to be run after the module's load_state_dict is called.
It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.
- Returns:
  torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- register_module(name, module)[source]
Alias for add_module().
- register_parameter(name, param)[source]
Adds a parameter to the module.
The parameter can be accessed as an attribute using the given name.
- Args:
  - name (str): name of the parameter. The parameter can be accessed from this module using the given name
  - param (Parameter or None): parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored and the parameter is not included in the module's state_dict.
- requires_grad_(requires_grad=True)[source]
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See locally-disable-grad-doc for a comparison between requires_grad_() and several similar mechanisms that may be confused with it.
- Args:
  - requires_grad (bool): whether autograd should record operations on parameters in this module. Default: True.
- Returns:
  Module: self
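Example (freezing all parameters, e.g. for inference-only use):
>>> model.requires_grad_(False)
>>> all(not p.requires_grad for p in model.parameters())
True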
- set_extra_state(state)[source]
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Args:
  - state (dict): Extra state from the state_dict
- share_memory()[source]
See torch.Tensor.share_memory_()
- state_dict(*args, destination=None, prefix='', keep_vars=False)[source]
Returns a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note: The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning: Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning: Please avoid the use of argument destination as it is not designed for end-users.
- Args:
  - destination (dict, optional): If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
  - prefix (str, optional): a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
  - keep_vars (bool, optional): by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
  dict: a dictionary containing a whole state of the module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)[source]
Moves and/or casts the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note: This method modifies the module in-place.
- Args:
  - device (torch.device): the desired device of the parameters and buffers in this module
  - dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module
  - tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
  - memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
  Module: self
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device)[source]
Moves the parameters and buffers to the specified device without copying storage.
- Args:
  - device (torch.device): The desired device of the parameters and buffers in this module.
- Returns:
Module: self
- train(mode=True)[source]
Sets the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Args:
  - mode (bool): whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
Module: self
- train_epoch(n_updates, input_df, df, pbar=None)[source]
Run regular epoch.
- train_megabatch_epoch(n_updates, df)[source]
Run epoch doing ‘megabatch’ updates, preprocessing data in large chunks.
- type(dst_type)[source]
Casts all parameters and buffers to dst_type.
Note: This method modifies the module in-place.
- Args:
  - dst_type (type or string): the desired type
- Returns:
  Module: self
- xpu(device=None)[source]
Moves all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note: This method modifies the module in-place.
- Arguments:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- zero_grad(set_to_none=False)[source]
Sets gradients of all model parameters to zero. See the similar function under torch.optim.Optimizer for more context.
- Args:
  - set_to_none (bool): instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.