NVIDIA Clara Train 4.1

medl.apps package

class EngineSpec

Bases: object

abort()

Call to terminate the currently running train/validate job.

close()

Call to terminate and close the engine.

evaluate()

Call to evaluate the current model.

train()

Call the engine to train the model.

validate()

Call to validate the current model.
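
These five methods define the contract that concrete engines (such as MMARTrainer and MMAREvaluator below) must satisfy. A minimal sketch of a hypothetical subclass, with purely illustrative method bodies:

    from medl.apps.engine_spec import EngineSpec

    class LoggingEngine(EngineSpec):
        """Hypothetical engine, used only to illustrate the EngineSpec contract."""

        def __init__(self):
            self.aborted = False

        def train(self):
            print("running training loop...")   # a real engine trains the model here

        def validate(self):
            print("running validation...")      # validate the current model

        def evaluate(self):
            print("running evaluation...")      # evaluate the current model

        def abort(self):
            self.aborted = True                 # signal the running job to stop

        def close(self):
            print("engine closed")              # release resources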

class EvalConfiger(mmar_root: str, wf_config_file_name=None, env_config_file_name=None, log_config_file_name=None, kv_list=None, debug_pre_transform=False, logging_config=True)

Bases: dlmed.utils.clara_conf.ClaraConfiger

close()
finalize_config(config_ctx: dlmed.utils.wfconf.ConfigContext)
process_args(args: dict)
process_config_element(config_ctx: dlmed.utils.wfconf.ConfigContext, node: dlmed.utils.json_scanner.Node)
start_config(config_ctx: dlmed.utils.wfconf.ConfigContext)
class StoreShape(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)

Bases: argparse.Action

main()
to_numpy(tensor)
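
to_numpy is undocumented here; the conventional PyTorch pattern for such a helper (an assumption, not confirmed from the source) is:

    import torch

    def to_numpy(tensor):
        # Assumed behavior: detach from the autograd graph, move to host
        # memory, and convert to a NumPy array.
        return tensor.detach().cpu().numpy() if torch.is_tensor(tensor) else tensor
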
class ExportConfiger(args)

Bases: dlmed.utils.clara_conf.ClaraConfiger

finalize_config(config_ctx: dlmed.utils.wfconf.ConfigContext)
process_config_element(config_ctx: dlmed.utils.wfconf.ConfigContext, node: dlmed.utils.json_scanner.Node)
class MMAREvaluator(mmar_root: str, wf_config_file_name=None, env_config_file_name=None, log_config_file_name=None, kv_list=None, debug_pre_transform=False, logging_config=True)

Bases: medl.apps.engine_spec.EngineSpec

close()

Call to terminate and close the engine.

configure()
evaluate() → Dict

Call to evaluate the current model.
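
A hedged usage sketch; the import path, MMAR location, and config file names below are illustrative, not taken from this reference:

    from medl.apps import MMAREvaluator   # illustrative import path

    evaluator = MMAREvaluator(
        mmar_root="/workspace/my_mmar",                       # illustrative MMAR location
        wf_config_file_name="config/config_validation.json",  # illustrative file name
        env_config_file_name="config/environment.json",
    )
    evaluator.configure()            # build components from the workflow config
    metrics = evaluator.evaluate()   # returns a Dict of evaluation results
    evaluator.close()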

class MMARTrainer(mmar_root: str, wf_config_file_name=None, env_config_file_name=None, log_config_file_name=None, kv_list=None, debug_pre_transform=False, logging_config=True)

Bases: medl.apps.engine_spec.EngineSpec

abort()

Call to terminate the currently running train/validate job.

close()

Call to terminate and close the engine.

configure()
full_local_train() → Dict
train()

Call the engine to train the model.

validate()

Call to validate the current model.
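
A hedged usage sketch mirroring the evaluator above; the import path and file names are again illustrative:

    from medl.apps import MMARTrainer   # illustrative import path

    trainer = MMARTrainer(
        mmar_root="/workspace/my_mmar",                  # illustrative MMAR location
        wf_config_file_name="config/config_train.json",  # illustrative file name
        env_config_file_name="config/environment.json",
    )
    trainer.configure()   # build components from the workflow config
    trainer.train()       # run training; validate() can be called afterwards
    trainer.close()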

evaluate_mmar(args)
train_mmar(args)
class TrainConfiger(mmar_root: str, wf_config_file_name=None, env_config_file_name=None, log_config_file_name=None, kv_list=None, debug_pre_transform=False, base_pkgs=['medl', 'monai', 'ignite.metrics', 'torch.optim', 'torch.nn'], module_names=['.'], logging_config=True)

Bases: dlmed.utils.clara_conf.ClaraConfiger

close()
create_element_from_ref(refs, element)
finalize_config(config_ctx: dlmed.utils.wfconf.ConfigContext)
process_args(args: dict)
process_config_element(config_ctx: dlmed.utils.wfconf.ConfigContext, node: dlmed.utils.json_scanner.Node)
process_first_pass(node: dlmed.utils.json_scanner.Node)
process_second_pass(node: dlmed.utils.json_scanner.Node)
start_config(config_ctx: dlmed.utils.wfconf.ConfigContext)
class ComposePrepareBatch(prepare_batch: Sequence)

Bases: object

Utility class that composes a list of prepare_batch components into a single callable object.
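
A minimal sketch of one plausible composition scheme, assuming each component accepts and returns batch data in Ignite's prepare_batch style; the actual chaining semantics are not documented here:

    class ComposePrepareBatch:
        def __init__(self, prepare_batch):
            # Keep the components in order; each is assumed to be callable.
            self.prepare_batch = list(prepare_batch)

        def __call__(self, batchdata, device=None, non_blocking=False):
            # Assumed semantics: feed each component's output to the next.
            for fn in self.prepare_batch:
                batchdata = fn(batchdata, device, non_blocking)
            return batchdata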

class NetworkSummary(network, kwargs: Optional[dict] = None)

Bases: object

Prints a network summary using Torchinfo: https://github.com/TylerYep/torchinfo

network: PyTorch network module.
kwargs: arguments passed into the Torchinfo summary call.
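
Because the class delegates to Torchinfo, the equivalent direct call (with an illustrative network and input size) looks like:

    import torch
    from torchinfo import summary

    net = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3), torch.nn.ReLU())
    # NetworkSummary forwards its kwargs to this torchinfo call.
    summary(net, input_size=(1, 1, 64, 64))   # (batch, channels, H, W) is illustrative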

class TrainJSONConfig(config: Optional[Dict] = None)

Bases: object

Records the training config from the JSON file as a state-dict object so that it can be saved in the checkpoint.

load_state_dict(state_dict: Dict) → None
state_dict()
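
A hedged sketch of the intended round trip; the import path, config values, and file name are illustrative:

    import torch
    from medl.apps import TrainJSONConfig   # illustrative import path

    model = torch.nn.Linear(4, 2)            # stand-in model
    cfg = TrainJSONConfig(config={"epochs": 100, "learning_rate": 1e-4})

    # Save the train config next to the weights inside the checkpoint.
    torch.save({"model": model.state_dict(), "train_conf": cfg.state_dict()}, "model.pt")

    # Restore it later from the same checkpoint.
    ckpt = torch.load("model.pt")
    restored = TrainJSONConfig()
    restored.load_state_dict(ckpt["train_conf"])
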
add_custom_pythonpath(mmar_root: str, pathname: str = 'custom')

Adds the path of the BYOC (bring your own components) custom folder under mmar_root to PYTHONPATH.
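
A sketch of the assumed effect; the real helper may manipulate the PYTHONPATH environment variable rather than sys.path:

    import os
    import sys

    def add_custom_pythonpath(mmar_root, pathname="custom"):
        # Assumed behavior: make <mmar_root>/<pathname> importable for BYOC code.
        custom_dir = os.path.join(mmar_root, pathname)
        if custom_dir not in sys.path:
            sys.path.append(custom_dir)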

sample_weights_by_classes(items_list, label_key: str = 'label')

Calculates sample weights based on item count per class.
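
One common weighting scheme (inverse class frequency) that matches this description; a sketch, since the exact formula is not documented here:

    from collections import Counter

    def sample_weights_by_classes(items_list, label_key="label"):
        # Assumed scheme: weight each item by the inverse of its class count.
        counts = Counter(item[label_key] for item in items_list)
        return [1.0 / counts[item[label_key]] for item in items_list]

    items = [{"label": 0}, {"label": 0}, {"label": 1}]
    print(sample_weights_by_classes(items))   # [0.5, 0.5, 1.0]

The resulting weights can be fed to torch.utils.data.WeightedRandomSampler to rebalance classes during training.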

set_determinism_benchmark(seed: Optional[int] = None, benchmark: Optional[bool] = None, use_deterministic_algorithms: Optional[int] = None)

Utility to set determinism parameters and torch.backends.cudnn.benchmark. If seed is not None, determinism is enabled; otherwise it is disabled. Note that benchmark=True cannot be used together with determinism. If benchmark is None, torch.backends.cudnn.benchmark is left unchanged.
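
An approximate equivalent in plain PyTorch/MONAI calls (a sketch of the documented behavior, not the actual implementation):

    import torch
    from monai.utils import set_determinism

    def set_determinism_benchmark(seed=None, benchmark=None,
                                  use_deterministic_algorithms=None):
        set_determinism(seed=seed)    # seed=None disables determinism in MONAI
        if benchmark is not None:     # per the docstring: leave unset when None
            torch.backends.cudnn.benchmark = benchmark
        if use_deterministic_algorithms is not None:
            torch.use_deterministic_algorithms(bool(use_deterministic_algorithms))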

set_tensorboard_writer(log_dir, existing_writers)

Searches the existing writers for one with the same log directory; if none is found, creates a new writer.
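
A sketch of the described caching behavior, assuming existing_writers is a dict keyed by log directory (the container type is an assumption):

    from torch.utils.tensorboard import SummaryWriter

    def set_tensorboard_writer(log_dir, existing_writers):
        # Reuse a writer already bound to this log directory, else create one.
        if log_dir not in existing_writers:
            existing_writers[log_dir] = SummaryWriter(log_dir=log_dir)
        return existing_writers[log_dir]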

set_tf32(tf32: bool)

Utility to enable/disable TF32 on Ampere GPUs (supported since PyTorch 1.7). It sets both torch.backends.cuda.matmul.allow_tf32 and torch.backends.cudnn.allow_tf32. For more details, see: https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
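
Per the description, the call boils down to setting these two flags:

    import torch

    # Equivalent effect of set_tf32(True); assign False to disable both.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True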
