nemo_eval.utils.base#

Module Contents#

Functions#

check_health

Check the health of the PyTriton (via FastAPI) and Ray server.

check_endpoint

Check if the endpoint is responsive and ready to accept requests.

wait_for_fastapi_server

Wait for FastAPI server and model to be ready.

_iter_namespace

list_available_evaluations

Find all pre-defined evaluation configs across all installed evaluation frameworks.

find_framework

Find the framework for executing the evaluation task eval_task.

Data#

API#

nemo_eval.utils.base.logger#

‘getLogger(…)’

nemo_eval.utils.base.check_health(
health_url: str,
max_retries: int = 600,
retry_interval: int = 2,
) bool[source]#

Check the health of the PyTriton (via FastAPI) and Ray server.
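A minimal sketch of the poll-with-retries pattern such a health check typically follows. The function name, the `_probe` hook, and the HTTP-200 criterion are assumptions for illustration, not the library's implementation; the hook lets the loop be exercised without a live server.

```python
import time
import urllib.request
import urllib.error

def check_health_sketch(health_url, max_retries=600, retry_interval=2, _probe=None):
    """Poll health_url until it responds with HTTP 200 or retries run out.

    _probe is a hypothetical injection point: a callable taking the URL and
    returning True when the server is healthy.
    """
    def default_probe(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    probe = _probe if _probe is not None else default_probe
    for _ in range(max_retries):
        if probe(health_url):
            return True
        time.sleep(retry_interval)
    return False
```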

nemo_eval.utils.base.check_endpoint(
endpoint_url: str,
endpoint_type: str,
model_name: str,
max_retries: int = 600,
retry_interval: int = 2,
) bool[source]#

Check if the endpoint is responsive and ready to accept requests.
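One way to probe an OpenAI-style endpoint is to send a minimal request body for the given endpoint type. The `endpoint_type` values (`"chat"`, `"completions"`) and the helper below are assumptions sketching what such a probe payload could look like, not the library's actual request format.

```python
def build_probe_payload(endpoint_type, model_name):
    """Build a minimal request body for probing an OpenAI-style endpoint.

    Hypothetical helper: the accepted endpoint_type values are assumed.
    """
    if endpoint_type == "chat":
        return {
            "model": model_name,
            "messages": [{"role": "user", "content": "ping"}],
            "max_tokens": 1,
        }
    if endpoint_type == "completions":
        return {"model": model_name, "prompt": "ping", "max_tokens": 1}
    raise ValueError(f"Unknown endpoint_type: {endpoint_type!r}")
```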

nemo_eval.utils.base.wait_for_fastapi_server(
base_url: str = 'http://0.0.0.0:8080',
model_name: str = 'megatron_model',
max_retries: int = 600,
retry_interval: int = 10,
)[source]#

Wait for FastAPI server and model to be ready.

Parameters:
  • base_url (str) – The URL to the FastAPI server (e.g., “http://0.0.0.0:8080”).

  • model_name (str) – The name of the deployed model.

  • max_retries (int) – Maximum number of retries before giving up.

  • retry_interval (int) – Time in seconds to wait between retries.

Returns:

True if both the server and model are ready within the retries, False otherwise.

Return type:

bool
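The documented semantics (retry until both the server and the model are ready, return a bool) can be sketched as a generic wait loop. The readiness callables and `_sleep` hook are assumptions introduced for illustration and testability; the real function presumably performs HTTP checks internally.

```python
import time

def wait_until_ready(server_ready, model_ready,
                     max_retries=600, retry_interval=10, _sleep=time.sleep):
    """Retry until both server_ready() and model_ready() return True.

    Returns True on success within max_retries attempts, False otherwise.
    """
    for _ in range(max_retries):
        if server_ready() and model_ready():
            return True
        _sleep(retry_interval)
    return False
```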

nemo_eval.utils.base._iter_namespace(ns_pkg)[source]#
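Namespace-package iteration of this kind is conventionally built on `pkgutil.iter_modules`; the sketch below shows that standard pattern. Whether `_iter_namespace` is implemented exactly this way is an assumption.

```python
import pkgutil

def iter_namespace_sketch(ns_pkg):
    # Yield (finder, fully-qualified name, ispkg) for each module found
    # under ns_pkg; passing the package name as prefix makes the yielded
    # names directly importable.
    return pkgutil.iter_modules(ns_pkg.__path__, ns_pkg.__name__ + ".")
```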
nemo_eval.utils.base.list_available_evaluations() dict[str, list[str]][source]#

Find all pre-defined evaluation configs across all installed evaluation frameworks.

Returns:

Dictionary of available evaluations, where key is evaluation framework and value is list of available tasks.

Return type:

dict[str, list[str]]
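To illustrate the documented return type, the mapping might look like the following; the framework and task names here are made up for the example, not taken from any real installation.

```python
# Illustrative shape of the returned dict[str, list[str]]:
# framework name -> list of task names it provides.
available = {
    "lm_eval": ["mmlu", "gsm8k"],
    "bigcode_eval": ["humaneval"],
}

# Flatten to a sorted list of every task name across frameworks.
all_tasks = sorted(task for tasks in available.values() for task in tasks)
```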

nemo_eval.utils.base.find_framework(eval_task: str) str[source]#

Find the framework for executing the evaluation task eval_task.

This function searches for the framework (module) that defines a task with the given name and returns that framework's name.
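Given a mapping like the one returned by list_available_evaluations, the lookup reduces to finding the unique framework whose task list contains the requested name. The sketch below is an assumed reconstruction; how the real function handles a task defined by multiple frameworks is not specified here, so the ambiguity error is a guess.

```python
def find_framework_sketch(eval_task, available):
    """Return the single framework whose task list contains eval_task.

    available: dict mapping framework name -> list of task names,
    as returned by list_available_evaluations.
    """
    matches = [fw for fw, tasks in available.items() if eval_task in tasks]
    if not matches:
        raise ValueError(f"No framework defines task {eval_task!r}")
    if len(matches) > 1:
        raise ValueError(
            f"Task {eval_task!r} is defined by multiple frameworks: {matches}"
        )
    return matches[0]
```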