Get Evaluation Job Details#
To get the full details of an evaluation job, send a GET request to the jobs endpoint, as shown in the following code.
The response includes comprehensive job information such as timestamps, evaluation configuration, target details, result URNs, output file locations, and metadata like project associations and custom fields.
v2 (Preview)#
Warning
v2 API Preview: The v2 API is available for testing and feedback but is not yet recommended for production use. Breaking changes may occur before the stable release.
In v2, job status and status details are included in the main job details response. More detailed information about the status of individual steps within the job is available from the /status endpoint (see the sketch after the example response below).
import os
from nemo_microservices import NeMoMicroservices
# Initialize the client
client = NeMoMicroservices(
base_url=os.environ['EVALUATOR_BASE_URL']
)
# Get job details (v2 API)
job = client.v2.evaluation.jobs.retrieve("job-id")
# All status information is included in the response
print(f"Job ID: {job.id}")
print(f"Job status: {job.status}")
print(f"Status details: {job.status_details}")
print(f"Error details: {job.error_details}")
print(f"Created at: {job.created_at}")
print(f"Updated at: {job.updated_at}")
curl -X "GET" "${EVALUATOR_BASE_URL}/v2/evaluation/jobs/<job-id>" \
-H 'accept: application/json'
v2 Example Response
{
"id": "job-dq1pjj6vj5p64xaeqgvuk4",
"spec": {
"config": {
"type": "bfclv3",
"params": {
"limit_samples": 10
},
"tasks": {
"task1": {
"type": "simple"
}
}
},
"target": {
"type": "model",
"model": {
"api_endpoint": {
"url": "https://nim.int.aire.nvidia.com/v1/chat/completions",
"model_id": "meta/llama-3.1-8b-instruct",
"format": "nim"
}
}
}
},
"status": "completed",
"status_details": {},
"error_details": null,
"ownership": null,
"custom_fields": null,
"created_at": "2025-09-08T19:20:32.655254",
"updated_at": "2025-09-08T19:20:32.655256"
}
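The step-level status mentioned above is served by the separate /status endpoint. The following is a minimal sketch over plain HTTP; appending /status to the job details URL is an assumption and may not match every deployment.
import os

import requests

# Sketch only: the exact /status path is assumed to mirror the job details URL.
url = f"{os.environ['EVALUATOR_BASE_URL']}/v2/evaluation/jobs/<job-id>/status"
response = requests.get(url, headers={"accept": "application/json"})
response.raise_for_status()
print(response.json())  # per-step status information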
Monitoring Job Progress#
Monitor job status using the .status field and track progress with .status_details:
# Check job status
if job.status == "completed":
print("Job finished successfully")
elif job.status == "failed":
print(f"Job failed: {job.error_details}")
elif job.status == "running":
print("Job is still running")
Error Information#
When a job fails, detailed error information is available in error_details:
{
"id": "job-dq1pjj6vj5p64xaeqgvuk4",
"spec": {},
"status": "failed",
"error_details": {
"message": "Container exited with code 1"
}
}
For more detailed error information, use the logs endpoint.
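As a sketch of a failure-handling flow, the snippet below checks error_details and then fetches logs over plain HTTP; the /logs path is an assumption based on the logs endpoint mentioned above and may differ in your deployment.
import os

import requests
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url=os.environ['EVALUATOR_BASE_URL'])

job = client.v2.evaluation.jobs.retrieve("job-id")
if job.status == "failed":
    print(f"Failure reason: {job.error_details}")
    # Sketch only: the exact /logs path is assumed, not confirmed here.
    logs_url = f"{os.environ['EVALUATOR_BASE_URL']}/v2/evaluation/jobs/{job.id}/logs"
    logs = requests.get(logs_url, headers={"accept": "application/json"})
    print(logs.text)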
v1 (Current)#
import os
from nemo_microservices import NeMoMicroservices
# Initialize the client
client = NeMoMicroservices(
base_url=os.environ['EVALUATOR_BASE_URL']
)
# Get job details (v1 API)
job = client.evaluation.jobs.retrieve("job-id")
# Get the status and other details
print(f"Job ID: {job.id}")
print(f"Job status: {job.status}")
print(f"Created at: {job.created_at}")
print(f"Updated at: {job.updated_at}")
curl -X "GET" "${EVALUATOR_BASE_URL}/v1/evaluation/jobs/<job-id>" \
-H 'accept: application/json'
v1 Example Response
{
"created_at": "2025-03-19T22:50:15.684382",
"updated_at": "2025-03-19T22:50:15.684385",
"id": "eval-UVW123XYZ456",
"namespace": "my-organization",
"description": null,
"target": {
// target details
},
"config": {
// config details
},
"result": "evaluation_result-1234ABCD5678EFGH",
"output_files_url": "hf://datasets/evaluation-results/eval-UVW123XYZ456",
"status_details": {
"message": "Job completed successfully",
"task_status": {},
"progress": null
},
"status": "completed",
"project": null,
"custom_fields": {},
"ownership": null
}
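In v1, per-task progress is reported under status_details, as shown in the response above. A short sketch, assuming the Python client exposes these fields as attributes matching the JSON shape:
import os

from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url=os.environ['EVALUATOR_BASE_URL'])

job = client.evaluation.jobs.retrieve("job-id")
# Field names follow the example response above; attribute access is assumed.
if job.status_details:
    print(f"Message: {job.status_details.message}")
    print(f"Per-task status: {job.status_details.task_status}")
    print(f"Progress: {job.status_details.progress}")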
For additional status monitoring in v1, refer to job status.