profile_export_aiperf.json Schema
After every aiperf profile run, AIPerf writes a summary JSON file (default name profile_export_aiperf.json) under the artifact directory. Each top-level metric entry holds a stats block; this page documents which fields appear in that block, when they appear, and how the schema is versioned.
The on-disk shape is produced by JsonMetricResult in src/aiperf/common/models/export_models.py. Fields that are unset are omitted from the JSON output (exclude_none=True), so the field set per metric varies by metric type — this page is the source of truth for which fields to expect where.
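The omission behavior can be mimicked with a few lines of stdlib Python. This is an illustrative sketch, not the actual `JsonMetricResult` implementation, and the field names are examples only:

```python
import json

def dump_stats(stats: dict) -> str:
    """Serialize a stats block, dropping unset (None) fields.

    Mimics the exclude_none=True behavior described above;
    field names here are illustrative only.
    """
    return json.dumps({k: v for k, v in stats.items() if v is not None})

# A derived-style metric: only unit and avg are set.
print(dump_stats({"unit": "requests/sec", "avg": 11.4, "count": None}))
# A record-style metric keeps its full field set.
print(dump_stats({"unit": "ms", "avg": 42.0, "count": 20, "p99": 97.5}))
```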
Per-metric stats fields
The metric type (record / aggregate / derived) is documented per-metric in Metrics Reference. At a glance: latencies and per-request lengths are record; counts and timestamps are aggregate; throughputs and run-level totals are derived.
Example
A run with 20 requests against a streaming chat endpoint produces entries shaped like this:
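A hedged sketch of that shape follows. The percentile field names are assumptions drawn from the collated-export field list later on this page, and the values are made up; consult the Metrics Reference for the authoritative field set per metric:

```json
{
  "schema_version": "...",
  "request_latency": {
    "unit": "ms",
    "avg": 412.7,
    "p50": 398.1,
    "p90": 455.0,
    "p95": 470.2,
    "p99": 512.9,
    "count": 20
  },
  "request_count": { "unit": "requests", "avg": 20 },
  "request_throughput": { "unit": "requests/sec", "avg": 11.4 }
}
```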
Note that request_throughput (derived) and request_count (aggregate) carry only unit + avg — no count, no sum, no percentiles. request_latency (record) carries the full set.
Schema versions
The current schema version is exported as the top-level schema_version field on the JSON document. Bump the minor version for additive changes; coordinate a major bump for any field rename or removal.
Other JSON exports use independent schema versions
aiperf writes additional JSON files when --num-profile-runs >= 2:
- profile_export_aiperf_aggregate.json — confidence aggregation across runs. Per-metric blocks have a different shape (`mean`, `std`, `cv`, `se`, `ci_low`, `ci_high`, `t_critical`, `unit`) and carry their own schema_version (`AggregateConfidenceJsonExporter.SCHEMA_VERSION`, currently "1.0").
- profile_export_aiperf_collated.json — pools per-request values from all runs into a single population, then emits combined percentiles (`mean`, `std`, `p50`, `p90`, `p95`, `p99`, `count`) under a `combined` key, plus a `per_run` list of run-level summaries. Uses its own schema_version ("1.0.0").
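For orientation, a per-metric block in the aggregate file looks roughly like this. The field names are taken from the list above; the values and the placement of schema_version at the top level are illustrative assumptions:

```json
{
  "schema_version": "1.0",
  "request_latency": {
    "mean": 410.3,
    "std": 12.6,
    "cv": 0.031,
    "se": 5.6,
    "ci_low": 394.7,
    "ci_high": 425.9,
    "t_critical": 2.776,
    "unit": "ms"
  }
}
```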
The schema_version documented on this page applies only to profile_export_aiperf.json. The other files evolve on their own cadence.
For downstream parsers
- Treat absent fields as "not applicable to this metric type," not "data missing." A derived-metric block with no `count` is normal; a record-metric block with no `count` indicates a bug.
- Do not assume the field set is closed. Future minor schema bumps may add fields. Use `schema_version` to detect compatibility; ignore unknown fields.
- `unit` is authoritative for the value's interpretation. Do not infer units from the metric tag.
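The guidance above can be sketched as a tolerant parser. The metric tag, the helper names, and the "major.minor" compatibility policy are illustrative assumptions, not part of the documented schema:

```python
import json

def check_schema(doc: dict, supported_major: str = "1") -> bool:
    """Accept any minor bump within a supported major version.

    The '<major>.<minor>' convention here is an assumption about
    how schema_version is structured.
    """
    version = doc.get("schema_version", "")
    return version.split(".", 1)[0] == supported_major

def load_metric(doc: dict, tag: str) -> dict:
    """Read one metric's stats block tolerantly.

    - Unknown fields are ignored (only known keys are read).
    - Absent fields come back as None, meaning "not applicable",
      not "data missing".
    """
    block = doc.get(tag, {})
    return {
        "unit": block.get("unit"),    # authoritative for interpretation
        "avg": block.get("avg"),
        "count": block.get("count"),  # None is normal for derived metrics
    }

doc = json.loads('{"schema_version": "1.2", '
                 '"request_throughput": {"unit": "requests/sec", "avg": 11.4}}')
if check_schema(doc):
    stats = load_metric(doc, "request_throughput")
    print(stats["unit"], stats["avg"], stats["count"])
```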