nemo_gym.benchmarks#
Benchmark discovery and preparation utilities.
Module Contents#
Classes#
Prepare benchmark data by running the benchmark’s prepare.py script.
Functions#
CLI command: list available benchmarks.
CLI command: prepare benchmark data.
Data#
API#
- nemo_gym.benchmarks.BENCHMARKS_DIR#
None
- class nemo_gym.benchmarks.BenchmarkConfig(/, **data: typing.Any)[source]#
Bases: pydantic.BaseModel
- name: str#
None
- path: pathlib.Path#
None
- agent_name: str#
None
- num_repeats: int#
None
- dataset: nemo_gym.config_types.BenchmarkDatasetConfig#
None
- classmethod from_config_path(config_path: pathlib.Path)#
- classmethod from_initial_config_dict(path: pathlib.Path, initial_config_dict: omegaconf.DictConfig)#
- nemo_gym.benchmarks._load_benchmarks_from_config_paths(config_paths: List[pathlib.Path])#
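The fields and classmethods above suggest the typical loading flow: a benchmarks/*/config.yaml is parsed, the resulting values are validated into a BenchmarkConfig, and the config's on-disk directory is stored as path. A minimal stand-in sketch using only the standard library (the real class is a pydantic model; the class name, helper, and example values below are hypothetical, not nemo_gym code):

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical stdlib-only stand-in for nemo_gym.benchmarks.BenchmarkConfig
# (the real class is a pydantic.BaseModel; field names follow the reference above).
@dataclass
class BenchmarkConfigSketch:
    name: str
    path: Path
    agent_name: str
    num_repeats: int

    @classmethod
    def from_config_dict(cls, path: Path, cfg: dict) -> "BenchmarkConfigSketch":
        # Mirrors the from_initial_config_dict idea: pair the on-disk location
        # with the parsed config values and type-check at construction time.
        if not isinstance(cfg.get("num_repeats"), int):
            raise ValueError("num_repeats must be an int")
        return cls(
            name=cfg["name"],
            path=path,
            agent_name=cfg["agent_name"],
            num_repeats=cfg["num_repeats"],
        )

cfg = BenchmarkConfigSketch.from_config_dict(
    Path("benchmarks/aime24"),
    {"name": "aime24", "agent_name": "math_agent", "num_repeats": 4},
)
print(cfg.name, cfg.num_repeats)  # → aime24 4
```

In the real module, validation of each field's type is handled by pydantic rather than the manual isinstance check sketched here.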
- class nemo_gym.benchmarks.PrepareBenchmarkConfig(/, **data: typing.Any)[source]#
Bases: nemo_gym.config_types.BaseNeMoGymCLIConfig
Prepare benchmark data by running the benchmark’s prepare.py script.
The benchmark is identified from a config_paths entry pointing to a benchmarks/*/config.yaml file.
Examples:
ng_prepare_benchmark "+config_paths=[benchmarks/aime24/config.yaml]"
Initialization
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
`self` is explicitly positional-only to allow `self` as a field name.
- use_cached_prepared_benchmarks: bool#
‘Field(…)’
- num_prepare_benchmark_processes: int#
‘Field(…)’
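Because PrepareBenchmarkConfig is built with keyword arguments (the positional-only `/` in its signature frees up every keyword, even `self`, as a field name), invalid input is rejected at construction time. A minimal stdlib-only stand-in of that pattern (field names come from the reference above; the default values and validation logic are hypothetical illustrations, not the real pydantic machinery):

```python
# Hypothetical stand-in for the keyword-argument construction pattern of
# PrepareBenchmarkConfig; in the real class, pydantic performs the validation.
class PrepareBenchmarkConfigSketch:
    def __init__(self, /, **data):
        # `self` is positional-only, so "self" could itself appear in **data.
        use_cached = data.get("use_cached_prepared_benchmarks", True)   # default assumed
        num_procs = data.get("num_prepare_benchmark_processes", 1)      # default assumed
        if not isinstance(use_cached, bool):
            raise ValueError("use_cached_prepared_benchmarks must be a bool")
        if not isinstance(num_procs, int) or num_procs < 1:
            raise ValueError("num_prepare_benchmark_processes must be a positive int")
        self.use_cached_prepared_benchmarks = use_cached
        self.num_prepare_benchmark_processes = num_procs

ok = PrepareBenchmarkConfigSketch(num_prepare_benchmark_processes=4)
print(ok.num_prepare_benchmark_processes)  # → 4

try:
    PrepareBenchmarkConfigSketch(num_prepare_benchmark_processes=0)
except ValueError as exc:
    print("rejected:", exc)
```

The real class raises pydantic's ValidationError rather than the plain ValueError used in this sketch.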