Basic Audit Target
When you run an audit job in NVIDIA NeMo Auditor, you create a separate audit target and audit configuration for the job.
The target specifies the model name, model type, and free-form key-value pairs for model-specific inference options. For more information, refer to Schema for Audit Targets.
The following target references the NVIDIA Nemotron Nano 12B V2 VL model from build.nvidia.com.
Set the NMP_BASE_URL environment variable to the NeMo Auditor service endpoint.
Refer to Accessing the Microservice for more information.
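For example, you can export the base URL in your shell before running the client code below. The localhost URL is a placeholder assumption; substitute the endpoint of your deployment.

```shell
# Placeholder endpoint (assumption) -- replace with your NeMo Auditor URL.
export NMP_BASE_URL="http://localhost:8080"
```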
With NeMo Platform running, follow the model provider instructions to set up a model provider that points at integrate.api.nvidia.com and exposes the model to audit. Create the provider in the default workspace and name it build.
Using the Python SDK:

import os

from nemo_platform import NeMoPlatform

client = NeMoPlatform(
    base_url=os.environ.get("NMP_BASE_URL", "http://localhost:8080"),
    workspace="default",
)

target = client.audit.targets.create(
    workspace="default",
    name="demo-basic-target",
    type="nim.NVOpenAIChat",
    model="nvidia/nemotron-nano-12b-v2-vl",
    options={
        "nim": {
            "skip_seq_start": "<think>",
            "skip_seq_end": "</think>",
            "max_tokens": 3200,
            "nmp_uri_spec": {
                "inference_gateway": {"workspace": "default", "provider": "build"}
            },
        }
    },
)

print(target.model_dump_json(indent=2))
Using the CLI:

nmp audit targets create --workspace "default" \
  --name "demo-basic-target" \
  --type "nim.NVOpenAIChat" \
  --model "nvidia/nemotron-nano-12b-v2-vl" \
  --options '{"nim": {"skip_seq_start": "<think>", "skip_seq_end": "</think>", "max_tokens": 3200, "nmp_uri_spec": {"inference_gateway": {"workspace": "default", "provider": "build"}}}}'