nemo_evaluator.core.input#

Module Contents#

Functions#

check_adapter_config

check_required_default_missing

check_task_invocation

Checks that the task invocation is formatted correctly and that the requested harness or task is available.

check_type_compatibility

get_available_evaluations

get_evaluation

Infers harness information from the evaluation config and wraps it into an Evaluation.

get_framework_evaluations

load_run_config

Load the run configuration from the YAML file.

merge_dicts

parse_cli_args

Parse CLI arguments into the run configuration format.

parse_override_params

prepare_output_directory

validate_configuration

Validates the requested task through a dataclass and handles task creation following the logic described below.

Data#

API#

nemo_evaluator.core.input.check_adapter_config(run_config)[source]#
nemo_evaluator.core.input.check_required_default_missing(run_config: dict)[source]#
nemo_evaluator.core.input.check_task_invocation(run_config: dict)[source]#

Checks that the task invocation is formatted correctly and that the requested harness or task is available.

Args: run_config (dict): run configuration dictionary

Raises:

  • MisconfigurationError: if the eval type does not follow the specified format

  • MisconfigurationError: if the provided framework is not available

  • MisconfigurationError: if the provided task is not available
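
A minimal sketch of calling this check before running an evaluation; the nested run_config keys and the 'lm-evaluation-harness.mmlu' type value are illustrative assumptions, not a documented schema:

```python
from nemo_evaluator.core.input import check_task_invocation

# Hypothetical run configuration; only the assumed 'config.type' entry matters here.
run_config = {
    "config": {
        "type": "lm-evaluation-harness.mmlu",  # 'framework.task' or bare 'task'
    },
}

# Raises MisconfigurationError if the type is malformed or the framework/task
# cannot be found among the available evaluations; otherwise returns quietly.
check_task_invocation(run_config)
```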

nemo_evaluator.core.input.check_type_compatibility(
evaluation: nemo_evaluator.api.api_dataclasses.Evaluation,
)[source]#
nemo_evaluator.core.input.get_available_evaluations() → tuple[dict[str, dict[str, nemo_evaluator.api.api_dataclasses.Evaluation]], dict[str, nemo_evaluator.api.api_dataclasses.Evaluation], dict][source]#
nemo_evaluator.core.input.get_evaluation(
evaluation_config: nemo_evaluator.api.api_dataclasses.EvaluationConfig,
target_config: nemo_evaluator.api.api_dataclasses.EvaluationTarget,
) → nemo_evaluator.api.api_dataclasses.Evaluation[source]#

Infers harness information from the evaluation config and wraps it into an Evaluation.

Args:

  • evaluation_config (EvaluationConfig): evaluation configuration

  • target_config (EvaluationTarget): evaluation target configuration

Returns: Evaluation: the evaluation wrapping the inferred harness information
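
A hedged sketch of building an Evaluation from the two config dataclasses; the constructor arguments shown (a bare type on EvaluationConfig, a default-constructed EvaluationTarget) are assumptions about the dataclass fields, not documented defaults:

```python
from nemo_evaluator.api.api_dataclasses import EvaluationConfig, EvaluationTarget
from nemo_evaluator.core.input import get_evaluation

# Assumed field: 'type' may be 'framework.task' or a bare 'task' name.
evaluation_config = EvaluationConfig(type="mmlu")
target_config = EvaluationTarget()  # e.g. API endpoint details would go here

evaluation = get_evaluation(evaluation_config, target_config)
print(evaluation)  # an api_dataclasses.Evaluation with the harness resolved
```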

nemo_evaluator.core.input.get_framework_evaluations(
filepath: str,
) → tuple[str, dict, dict[str, nemo_evaluator.api.api_dataclasses.Evaluation]][source]#
nemo_evaluator.core.input.load_run_config(yaml_file: str) → dict[source]#

Load the run configuration from the YAML file.

NOTE: The YAML config allows overriding all of the run configuration parameters.
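
For example, a sketch of loading a run configuration from disk; the file name and the top-level keys mentioned in the comment are illustrative assumptions:

```python
from nemo_evaluator.core.input import load_run_config

run_config = load_run_config("run_config.yaml")
print(sorted(run_config))  # e.g. ['config', 'target'] (assumed top-level keys)
```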

nemo_evaluator.core.input.logger#

‘get_logger(…)’

nemo_evaluator.core.input.merge_dicts(dict1, dict2)[source]#
nemo_evaluator.core.input.parse_cli_args(args) → dict[source]#

Parse CLI arguments into the run configuration format.

NOTE: The CLI args allow overriding a subset of the run configuration parameters.
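
As a sketch, a typical flow combining CLI overrides with a YAML config; whether parse_cli_args expects an argparse.Namespace, the attribute names on it, and the merge order in merge_dicts are all assumptions made for illustration:

```python
import argparse

from nemo_evaluator.core.input import load_run_config, merge_dicts, parse_cli_args

# Hypothetical parsed CLI arguments; the attribute names are illustrative only.
args = argparse.Namespace(eval_type="mmlu", output_dir="./results", overrides=None)

cli_config = parse_cli_args(args)                   # subset of run configuration keys
file_config = load_run_config("run_config.yaml")
run_config = merge_dicts(file_config, cli_config)   # assumed: CLI values win
```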

nemo_evaluator.core.input.parse_override_params(
override_params_str: str | None = None,
) → dict[source]#
nemo_evaluator.core.input.prepare_output_directory(
evaluation: nemo_evaluator.api.api_dataclasses.Evaluation,
)[source]#
nemo_evaluator.core.input.validate_configuration(
run_config: dict,
) → nemo_evaluator.api.api_dataclasses.Evaluation[source]#

Validates the requested task through a dataclass and handles task creation following this logic:

  • evaluation type can be either ‘framework.task’ or ‘task’

  • FDF stands for Framework Definition File

Args: run_config (dict): run configuration merged from the config file and CLI overrides

Raises: MisconfigurationError: if the requested configuration is invalid
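
To close the loop, a sketch of validating a merged run configuration into an Evaluation; the nested 'config.type' key is the same assumed layout as in the earlier examples, not the documented schema:

```python
from nemo_evaluator.core.input import load_run_config, validate_configuration

run_config = load_run_config("run_config.yaml")
run_config.setdefault("config", {})["type"] = "mmlu"  # bare 'task' form

evaluation = validate_configuration(run_config)  # returns an Evaluation dataclass
```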