Locally-Deployed NIM Microservice Target

The following target references the Llama 3.1 Nemotron Nano 8B V1 model served by a locally accessible NIM microservice. In this example, the service name for the LLM container is llm.

Refer to garak.generators.nim.NVOpenAIChat for the parameters that you can specify in the options.nim field. Any value that you set in options overrides the corresponding default from DEFAULT_PARAMS in the API reference.
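The override behavior can be pictured as a shallow dictionary merge: keys you supply in options.nim replace the defaults, and everything else keeps its default value. The sketch below uses made-up default values purely for illustration; the real values are listed under DEFAULT_PARAMS in the API reference.

```python
# Hypothetical defaults for illustration only; see DEFAULT_PARAMS in the
# garak.generators.nim.NVOpenAIChat API reference for the actual values.
default_params = {
    "max_tokens": 150,
    "temperature": 0.7,
    "uri": "https://integrate.api.nvidia.com/v1/",
}

# Values supplied in the options.nim field when creating the target.
overrides = {
    "max_tokens": 3200,
    "uri": "http://llm:8000/v1/",
}

# Later keys win in a dict merge, so each override replaces its default.
effective = {**default_params, **overrides}
print(effective["max_tokens"])   # 3200 (overridden)
print(effective["temperature"])  # 0.7 (default kept)
```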

Important

Export the NIM_API_KEY environment variable with your API key, or any placeholder value, when you start the microservice container. The environment variable must be set even if the key is not used to access build.nvidia.com.

Set the AUDITOR_BASE_URL environment variable to the NeMo Auditor service endpoint. Refer to Accessing the Microservice for more information.
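For example, assuming the NeMo Auditor service is reachable on localhost port 8080 (adjust the host, port, and key for your deployment), the environment can be prepared as follows:

```shell
# Any non-empty value satisfies the container's startup check when the key
# is not used to reach build.nvidia.com.
export NIM_API_KEY="dummy-value"

# Endpoint of the NeMo Auditor service; this host and port are assumptions.
export AUDITOR_BASE_URL="http://localhost:8080"
```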

Create the target by using the Python SDK:

import os
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url=os.getenv("AUDITOR_BASE_URL"))

target = client.beta.audit.targets.create(
    namespace="default",
    name="demo-basic-target",
    type="nim.NVOpenAIChat",
    model="nvidia/llama-3.1-nemotron-nano-8b-v1",
    options={
        "nim": {
            "skip_seq_start": "<think>",
            "skip_seq_end": "</think>",
            "max_tokens": 3200,
            "uri": "http://llm:8000/v1/"
        }
    }
)

print(target.model_dump_json(indent=2))
Alternatively, create the target by sending a POST request to the REST API:

curl -X POST "${AUDITOR_BASE_URL}/v1beta1/audit/targets" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "default",
    "name": "demo-basic-target",
    "type": "nim.NVOpenAIChat",
    "model": "nvidia/llama-3.1-nemotron-nano-8b-v1",
    "options": {
      "nim": {
          "skip_seq_start": "<think>",
          "skip_seq_end": "</think>",
          "max_tokens": 3200,
          "uri": "http://llm:8000/v1/"
      }
    }
  }' | jq

Example Output

{
  "model": "nvidia/llama-3.1-nemotron-nano-8b-v1",
  "type": "nim.NVOpenAIChat",
  "id": "audit_target-XYEkHHsG9DS3EcnRFksGQf",
  "created_at": "2025-10-23T18:08:51.037880",
  "custom_fields": {},
  "description": null,
  "entity_id": "audit_target-XYEkHHsG9DS3EcnRFksGQf",
  "name": "demo-basic-target",
  "namespace": "default",
  "options": {
    "nim": {
      "skip_seq_start": "<think>",
      "skip_seq_end": "</think>",
      "max_tokens": 3200,
      "uri": "http://llm:8000/v1/"
    }
  },
  "ownership": null,
  "project": null,
  "schema_version": "1.0",
  "type_prefix": null,
  "updated_at": "2025-10-23T18:08:51.037886"
}