Add Custom Validation#

The Custom Validator feature allows you to integrate domain-specific validation logic into Data Designer’s synthetic data generation workflow. Instead of relying on generic validation rules, you can deploy your own validation service that evaluates generated data according to your specific requirements.

When to Use Custom Validators#

Custom validators address scenarios where standard validation approaches are insufficient:

  • Code Generation: Validate that generated code is syntactically correct and executable

  • Logical Consistency: Verify that related fields maintain semantic relationships (for example, job titles align with described responsibilities)

  • Quality Scoring: Apply custom metrics to rank or filter generated data based on your criteria

How It Works#

The validation process integrates seamlessly into your data generation pipeline:

  1. Data Generation: Data Designer generates synthetic records according to your configuration

  2. Batch Processing: Records are grouped and sent to your validator endpoint in configurable batch sizes

  3. Validation Response: Your service returns validation results and optional metadata for each record

  4. Pipeline Integration: Validation results become available as columns for downstream processing, filtering, or conditional logic

Your validator service operates as an independent API endpoint accessible from Data Designer’s network environment.

Setting Up a Custom Validator#

Configure the Validation Column#

Add a ValidationWithRemoteEndpointColumn to your Data Designer configuration:

builder.add_column(
    C.ValidationWithRemoteEndpointColumn(
        name="consistency_check",
        target_columns=["topic", "text"],
        validator="http://localhost:8001/validate/",
        batch_size=10,
        timeout=30.0,
    )
)

Configuration Parameters:

  • name: Column name for validation results in your dataset

  • target_columns: Existing columns to include in validation requests

  • validator: URL endpoint of your validation service

  • batch_size: Number of records sent per request (larger batches improve throughput but increase per-request latency and payload size)

  • timeout: Maximum time, in seconds, to wait for the validator’s response
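Conceptually, the batching step is simple list chunking: records are grouped into consecutive slices of batch_size before being POSTed to the validator. This sketch (the function name is hypothetical, not part of the SDK) illustrates the grouping:

```python
def batch_records(records, batch_size):
    """Split records into consecutive batches, mirroring how generated
    rows would be grouped before each request to the validator endpoint."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

# 25 records with batch_size=10 yield batches of 10, 10, and 5
records = [{"topic": f"topic-{i}", "text": f"text {i}"} for i in range(25)]
batches = batch_records(records, batch_size=10)
print([len(b) for b in batches])
```

The final batch may be smaller than batch_size, so your service should not assume a fixed request size.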

Create Your Validator Service#

Your validator must implement a specific API contract. Here’s a functional example that validates whether a topic appears in generated text:

import pandas as pd
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel, ConfigDict
from typing import Optional

app = FastAPI()

class ExpectedInput(BaseModel):
    data: list[dict[str, str]]

class OutputItem(BaseModel):
    is_valid: Optional[bool]
    model_config = ConfigDict(extra="allow")

class ExpectedOutput(BaseModel):
    data: list[OutputItem]

@app.post("/validate/")
async def validate(input: ExpectedInput):
    df = pd.DataFrame(input.data)
    
    # Core validation logic
    is_valid = df.apply(lambda x: x.topic in x.text, axis=1).tolist()
    lengths = df.text.str.len().astype(str).tolist()  # metadata values are serialized as strings
    
    # Return results with optional metadata
    return ExpectedOutput(data=[
        OutputItem(is_valid=valid, length=length) 
        for valid, length in zip(is_valid, lengths)
    ])

if __name__ == "__main__":
    uvicorn.run("validator:app", host="0.0.0.0", port=8001)
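You can sanity-check the core logic without starting the server by applying the same pandas expression to a hand-built batch (the sample strings below are illustrative):

```python
import pandas as pd

# A batch shaped like the validator's input payload
batch = [
    {"topic": "Finance", "text": "Finance encompasses various aspects..."},
    {"topic": "Healthcare", "text": "This paragraph never mentions the subject."},
]
df = pd.DataFrame(batch)

# Same per-row substring check the endpoint applies
is_valid = df.apply(lambda x: x.topic in x.text, axis=1).tolist()
print(is_valid)  # the first row mentions its topic, the second does not
```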

Use Validation Results#

Validation results integrate into your pipeline as standard columns:

# Derive a numeric column from validation metadata
builder.add_column(
    C.ExpressionColumn(
        name="length_ratio",
        expr="{{ consistency_check.length | float / 1000 if consistency_check.length else 0.0 }}",
    )
)

# Generate dataset with validation
result = ndd.create(builder, num_records=50, wait_until_done=True)
dataset = result.load_dataset()
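Assuming the validation column materializes as one dict per row (which is how the Jinja expressions above address it), downstream filtering of the loaded dataset could look like the following mock sketch; the nested shape here is an assumption, not a documented schema:

```python
import pandas as pd

# Hypothetical loaded dataset with a per-row validation dict
dataset = pd.DataFrame({
    "topic": ["Finance", "Healthcare"],
    "text": ["Finance is...", "Unrelated text"],
    "consistency_check": [
        {"is_valid": True, "length": "120"},
        {"is_valid": False, "length": "85"},
    ],
})

# Keep only rows the validator accepted
valid_rows = dataset[dataset["consistency_check"].apply(lambda r: r["is_valid"])]
print(len(valid_rows))
```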

API Contract Specification#

Your validator service must implement this exact interface:

Request Format#

{
  "data": [
    {"topic": "Finance", "text": "Finance encompasses various aspects..."},
    {"topic": "Healthcare", "text": "Healthcare systems require..."}
  ]
}

Response Format#

{
  "data": [
    {"is_valid": true, "confidence": "0.95"},
    {"is_valid": false, "error_reason": "topic_mismatch"}
  ]
}
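A quick standard-library check of a response payload against this contract (the confidence and error_reason fields are illustrative metadata, not required keys):

```python
import json

response_body = """
{
  "data": [
    {"is_valid": true, "confidence": "0.95"},
    {"is_valid": false, "error_reason": "topic_mismatch"}
  ]
}
"""
payload = json.loads(response_body)

# Every item must carry is_valid as a boolean or null (None in Python)
ok = all(
    item.get("is_valid") is None or isinstance(item["is_valid"], bool)
    for item in payload["data"]
)
print(ok)
```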

Required Response Fields#

  • is_valid: Boolean or null indicating validation status

  • Additional fields: Include any metadata useful for downstream processing

Error Handling#

Your service should return appropriate HTTP status codes:

  • 413 Content Too Large: Batch size exceeds processing capacity

  • 422 Unprocessable Content: Input format errors

  • 500 Internal Server Error: Validation logic failures

When errors occur, Data Designer sets validation results to null and logs the error for debugging.
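The decision behind those codes can live in a small pure function that the handler consults before processing, for example by raising FastAPI's HTTPException with the chosen code. MAX_BATCH and choose_status below are hypothetical names, and the limit is an assumed capacity:

```python
MAX_BATCH = 100  # assumed capacity limit for the validator

def choose_status(batch) -> int:
    """Map an incoming batch to the HTTP status the handler should return."""
    if not isinstance(batch, list) or any("text" not in item for item in batch):
        return 422  # Unprocessable Content: malformed input
    if len(batch) > MAX_BATCH:
        return 413  # Content Too Large: batch exceeds capacity
    return 200  # proceed with validation

print(choose_status([{"topic": "Finance", "text": "..."}]))  # 200
```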

Deployment Requirements#

  • Network Access: Deploy your validator within the same network as Data Designer, as authentication is not currently supported

  • Performance Considerations: Optimize for low latency to maintain pipeline throughput. Large batch sizes improve efficiency but may cause request size issues

  • Scalability: Design your validation logic to handle the expected volume of generated records without becoming a pipeline bottleneck

Complete Example#

This example demonstrates the full workflow from configuration to dataset generation:

from nemo_data_designer.config import columns as C, params as P
from nemo_data_designer.config.builder import DataDesignerConfigBuilder

# Configure data generation with validation
builder = DataDesignerConfigBuilder(model_configs=[...])

# Generate base data
builder.add_column(C.SamplerColumn(
    name="topic",
    type=P.SamplerType.CATEGORY,
    params=P.CategorySamplerParams(values=["Healthcare", "Finance", "Technology"])
))

builder.add_column(C.LLMTextColumn(
    name="text",
    model_alias="llama",
    prompt="Write a paragraph about {{ topic }}."
))

# Add validation
builder.add_column(C.ValidationWithRemoteEndpointColumn(
    name="topic_consistency",
    target_columns=["topic", "text"],
    validator="http://localhost:8001/validate/",
    batch_size=10
))

# Use validation results
builder.add_column(C.ExpressionColumn(
    name="is_high_quality",
    expr="{{ topic_consistency.is_valid and topic_consistency.length | int > 100 }}"
))

# Generate validated dataset (DataDesignerClient and NeMoMicroservices
# must be imported from the NeMo Microservices client libraries)
ndd = DataDesignerClient(client=NeMoMicroservices(base_url="http://localhost:8080"))
result = ndd.create(builder, num_records=100, wait_until_done=True)