Synthetic Data Generation#
Learn about the core concepts for generating synthetic data to support AI model training, testing, and augmentation workflows.
What is Synthetic Data?#
Synthetic data is artificially generated data that mimics the statistical properties and patterns of real-world data without containing actual sensitive information. It’s created using machine learning models trained on original datasets to produce new, realistic data points that preserve the underlying data structure and relationships.
Key Benefits#
Privacy Protection: Generate realistic datasets without exposing personally identifiable information (PII).
Data Augmentation: Expand limited datasets to improve model training.
Testing and Development: Create safe test data for application development.
Compliance: Meet regulatory requirements while maintaining data utility.
Cost Efficiency: Reduce data acquisition costs and accelerate development cycles.
NeMo Data Designer Overview#
The NeMo Data Designer microservice provides a programmatic way to generate synthetic data through configurable schemas and AI-powered generation models. It’s designed to integrate seamlessly into your AI development workflow.
Architecture#
NeMo Data Designer follows a configuration-driven approach:
Configuration: Define your data schema including column types, constraints, and relationships.
Generation: Use AI models to generate synthetic data based on your configuration.
Validation: Validate generated data against your schema.
Export: Download results in multiple formats (CSV, JSON, Parquet).
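As a rough sketch of the configuration step, the fragment below defines a single sampling-based column and requests CSV output. The column object follows the name/type/params shape used in the examples later on this page, while the surrounding keys (columns, output_format) and the category parameter names are illustrative assumptions rather than the exact request schema.
{
  "columns": [
    {
      "name": "region",
      "type": "category",
      "params": {"values": ["NA", "EMEA", "APAC"], "weights": [0.5, 0.3, 0.2]}
    }
  ],
  "output_format": "csv"
}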
Deployment#
NeMo Data Designer uses a simplified deployment approach:
Docker Compose: Deployed as standalone containers for development and testing.
Container Registry: Available through NVIDIA NGC Catalog.
Dependencies: Includes data storage backend and artifact management.
Scalability: Horizontal scaling through container orchestration.
Development-First: Optimized for quick setup and iteration.
Current Architecture#
NeMo Data Designer operates as a standalone microservice with:
Independent Deployment: Deployed via Docker Compose for development and testing.
RESTful API: Full HTTP API for programmatic access and integration.
Batch Processing: Asynchronous job processing for large-scale data generation.
Flexible Storage: Configurable artifact storage for generated datasets.
Model Selection#
NeMo Data Designer allows customers to use any model of their choice for synthetic data generation through flexible model configuration.
Model Aliases#
NeMo Data Designer provides default model aliases for common use cases:
text: Default model for text generation tasks.
code: Optimized model for code generation.
structured: Specialized model for structured data generation (JSON, schemas).
judge: Specialized model for data quality evaluation.
reasoning: Model optimized for reasoning and logic tasks.
You can also define custom model aliases with specific inference parameters, model endpoints, and generation settings to match your requirements.
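For illustration only, a custom alias definition might look roughly like the following. The field names (model_configs, alias, model, inference_parameters) and the model identifier are assumptions for this sketch; consult the configuration schema for the exact structure.
{
  "model_configs": [
    {
      "alias": "my-reasoning-model",
      "model": "meta/llama-3.1-70b-instruct",
      "inference_parameters": {"temperature": 0.7, "top_p": 0.9, "max_tokens": 1024}
    }
  ]
}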
Selection Criteria#
Choose model aliases based on:
Data Types: Text, numerical, categorical, code, or mixed modalities.
Task Complexity: Simple generation vs. complex reasoning requirements.
Licensing Requirements: Commercial use, attribution, or open-source constraints.
Performance Needs: Generation speed vs. quality trade-offs.
Scale Requirements: Dataset size and complexity.
Data Generation Concepts#
Column Types#
NeMo Data Designer supports multiple column types for comprehensive data generation:
Sampling-based Columns#
Generate data through statistical and probabilistic methods:
Category: Select from predefined values with optional probability weights.
Subcategory: Generate hierarchical categorical data conditioned on parent categories.
Uniform: Generate numeric values from uniform distribution.
Gaussian: Generate values from normal distribution.
Bernoulli: Generate binary outcomes with specified probability.
Bernoulli Mixture: Generate values from a mixture of Bernoulli distributions.
Binomial: Generate number of successes in n trials.
Poisson: Generate count data from Poisson distribution.
DateTime: Generate dates within specified ranges.
Timedelta: Generate time intervals relative to datetime columns.
Person: Generate realistic person entities with demographics.
UUID: Generate unique identifiers with optional formatting.
SciPy: Access to any scipy.stats distribution for advanced statistical sampling.
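For illustration, a few sampler columns might be configured as follows, reusing the name/type/params shape from the examples on this page. The specific keys inside params are assumptions; see Column Types for the exact parameters of each sampler.
{"name": "signup_date", "type": "datetime", "params": {"start": "2023-01-01", "end": "2024-12-31"}},
{"name": "support_tickets", "type": "poisson", "params": {"mean": 2.5}},
{"name": "customer", "type": "person", "params": {"locale": "en_US"}}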
LLM-based Columns#
Use large language models for intelligent content generation:
LLM Text: Generate contextual text content using prompts.
LLM Code: Generate programming code in specified languages.
LLM Structured: Generate JSON data matching defined schemas.
LLM Judge: Evaluate data quality using model-based scoring.
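The fragment below sketches an LLM code column and an LLM judge column using the same keys as the text-column example later on this page (name, output_type, model_alias, prompt). The output_type values and prompt wording are illustrative assumptions.
{
  "name": "sql_query",
  "output_type": "code",
  "model_alias": "code",
  "prompt": "Write a SQL query that returns total spend per customer in the {{region}} region"
},
{
  "name": "query_review",
  "output_type": "judge",
  "model_alias": "judge",
  "prompt": "Rate the correctness and readability of this SQL query: {{sql_query}}"
}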
Expression Columns#
Compute values using dynamic expressions:
Expression: Compute values using Jinja2 expressions based on other columns.
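A minimal expression column might look like the following; the expr key name is an assumption for this sketch.
{
  "name": "full_name",
  "type": "expression",
  "expr": "{{ first_name }} {{ last_name }}"
}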
For detailed examples and parameters for each column type, see Column Types.
Template Variables#
Enable complex data relationships using template variables with Jinja2 syntax:
{
  "name": "customer_age",
  "type": "uniform",
  "params": {"low": 18, "high": 80}
},
{
  "name": "product_recommendation",
  "output_type": "text",
  "model_alias": "text",
  "prompt": "Recommend a product for a {{customer_age}}-year-old customer"
}
NeMo Data Designer uses Jinja2 templating to reference other columns in prompts and expressions. This enables dynamic content generation based on previously generated data. For comprehensive examples and advanced templating features, see Using Jinja Templates.
Conditional Parameters#
Create sophisticated data generation logic using conditional parameters that change based on other column values:
{
  "name": "income",
  "type": "uniform",
  "params": {"low": 30000, "high": 80000},
  "conditional_params": {
    "education == 'PhD'": {"low": 80000, "high": 150000},
    "education == 'Masters'": {"low": 60000, "high": 120000},
    "age > 50": {"low": 50000, "high": 100000}
  }
}
Conditional parameters support:
Comparison Operators: ==, !=, >, >=, <, <=.
Multiple Conditions: Combine conditions using logical operators.
Parameter Override: Different sampling parameters for each condition.
Default Fallback: Base parameters used when no conditions match.
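For example, multiple conditions might be combined into a single condition key as sketched below; the exact syntax for joining conditions is an assumption here, so verify it against the configuration schema.
"conditional_params": {
  "education == 'PhD' and age > 50": {"low": 100000, "high": 180000}
}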
Statistical Distributions#
Control data realism with comprehensive statistical distributions:
Basic Distributions#
Uniform: Equal probability across the range.
Gaussian: Normal distribution with mean and standard deviation.
Bernoulli: Binary outcomes with specified probability.
Advanced Distributions#
Poisson: Discrete distribution for count data.
Binomial: Number of successes in n trials.
Bernoulli Mixture: Mixture distribution combining Bernoulli with continuous distributions.
SciPy: Access to any scipy.stats distribution for advanced cases.
For additional distributions like log-normal, beta, gamma, or exponential, use the scipy sampler type, which provides access to the full scipy.stats library with complete parameter control.
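As an illustration, a log-normal column via the scipy sampler might be configured like this; the key names inside params (dist_name, dist_params) are assumptions, while s and scale are standard scipy.stats.lognorm parameters.
{
  "name": "claim_amount",
  "type": "scipy",
  "params": {"dist_name": "lognorm", "dist_params": {"s": 0.9, "scale": 1500}}
}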
Data Quality and Privacy#
Quality Assurance#
NeMo Data Designer includes built-in quality controls:
Schema Validation: Ensure generated data matches your defined schema.
Relationship Consistency: Maintain logical relationships between dependent columns.
Distribution Fidelity: Preserve statistical properties of the original data patterns.
Privacy Protection#
Synthetic data generation provides privacy through:
No Direct Copying: Generated data doesn’t reproduce exact records from training data.
Data Transformation: Remove or transform sensitive identifiers during generation.
Statistical Privacy: Preserve statistical properties while protecting individual records.
Note
Privacy Claims: While NeMo Data Designer is designed with privacy in mind, users should independently evaluate privacy guarantees for their specific use cases and regulatory requirements. Consider additional privacy-preserving techniques when working with sensitive data.
Data Governance#
Consider these governance aspects:
Data Lineage: Track the source and generation process of synthetic data.
Compliance: Ensure synthetic data meets regulatory requirements.
Audit Trail: Maintain records of data generation configurations and parameters.
Use Cases#
Coding Assistants#
Generate realistic, diverse, and complex synthetic code datasets (SQL, Python) that mirror real-world coding scenarios—enhancing AI models’ reasoning and performance:
Diverse Programming Patterns: Create examples covering different coding styles and approaches.
Real-world Complexity: Generate code that reflects actual development challenges.
Multi-language Support: Produce datasets across various programming languages.
Enhanced Model Training: Improve AI coding assistants’ understanding and generation capabilities.
Conversational AI#
Create domain-specific synthetic dialogues to fine-tune AI for conversational agents, virtual assistants, and interactive learning systems, ensuring natural and context-aware responses:
Domain-specific Conversations: Generate dialogues tailored to specific industries or use cases.
Natural Language Patterns: Create realistic conversation flows and responses.
Context Awareness: Develop datasets that maintain conversational context.
Interactive Learning: Support training for educational and support applications.
Synthetic Documents#
Design high-fidelity synthetic datasets for large-scale AI model training in tax form validation, mortgage approvals, and other structured data applications:
Document Structure Preservation: Maintain realistic document layouts and formatting.
Regulatory Compliance: Generate documents that meet industry standards.
Large-scale Training: Create extensive datasets for enterprise AI model training.
Form Validation: Support automated document processing and validation systems.
Evaluation & Benchmarks#
Build evaluation and benchmark datasets (such as question-answer pairs) to improve RAG systems or evaluate multiple models on a use case:
Model Comparison: Create standardized datasets for comparing model performance.
RAG System Enhancement: Generate question-answer pairs for retrieval-augmented generation.
Custom Benchmarks: Develop evaluation sets tailored to specific business requirements.
Performance Metrics: Enable comprehensive model evaluation across different scenarios.
Best Practices#
Configuration Design#
Start Simple: Begin with basic configurations and gradually add complexity.
Test Early: Use preview mode to validate configurations before large-scale generation.
Document Dependencies: Clearly define column relationships and dependencies.
Quality Validation#
Statistical Comparison: Compare generated data distributions with original data.
Domain Validation: Ensure generated data makes sense in your business context.
Iterative Refinement: Continuously improve configurations based on quality metrics.
Performance Optimization#
Batch Processing: Generate large datasets in smaller batches for better performance.
Format Selection: Choose appropriate output formats for your use case.
Resource Management: Monitor generation jobs and optimize resource usage.
API Reference#
Core Endpoints#
NeMo Data Designer provides a comprehensive REST API for programmatic access:
Preview Generation#
POST /v1beta1/data-designer/preview: Generate small datasets for configuration testing. Returns a streaming JSONL response with generated records. Ideal for validating column configurations before large-scale generation.
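A minimal preview call might look like the sketch below, which posts a configuration and iterates over the streamed JSONL records. The base URL is a placeholder, and the sketch assumes the configuration is sent as the JSON request body.
import json
import requests

BASE_URL = "http://localhost:8080"  # placeholder for your Data Designer endpoint
config = {}  # your data generation configuration (see Configuration Schema)

# Stream a small preview of generated records as JSONL
with requests.post(f"{BASE_URL}/v1beta1/data-designer/preview", json=config, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(json.loads(line))  # one generated record per line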
Batch Job Management#
POST /v1beta1/data-designer/jobs: Create large-scale generation jobs.
GET /v1beta1/data-designer/jobs: List all generation jobs.
GET /v1beta1/data-designer/jobs/{job_id}: Get specific job status and details.
GET /v1beta1/data-designer/jobs/{job_id}/logs: Stream job execution logs.
Result Management#
GET /v1beta1/data-designer/jobs/{job_id}/results: List available job results.
GET /v1beta1/data-designer/jobs/{job_id}/results/{result_id}: Get result metadata.
GET /v1beta1/data-designer/jobs/{job_id}/results/{result_id}/download: Download generated datasets.
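A hedged sketch of this flow, assuming the job has already completed (see the polling helper later on this page) and that response fields such as id are named as shown:
import requests

BASE_URL = "http://localhost:8080"  # placeholder for your Data Designer endpoint
config = {}  # your data generation configuration (see Configuration Schema)

# Create a batch generation job
job = requests.post(f"{BASE_URL}/v1beta1/data-designer/jobs", json=config).json()
job_id = job["id"]  # assumed response field name

# List results for the completed job and pick the first one
results = requests.get(f"{BASE_URL}/v1beta1/data-designer/jobs/{job_id}/results").json()
result_id = results[0]["id"]  # assumed response shape

# Download the generated dataset to a local file
download = requests.get(
    f"{BASE_URL}/v1beta1/data-designer/jobs/{job_id}/results/{result_id}/download"
)
with open("synthetic_data.csv", "wb") as f:
    f.write(download.content)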
Health and Status#
GET /health: Service health check.
GET /docs: Interactive OpenAPI documentation.
Configuration Schema#
NeMo Data Designer accepts rich configuration objects supporting:
Model Configurations: Define custom model aliases and inference parameters.
Column Definitions: Specify column types, parameters, and dependencies.
Conditional Parameters: Set different parameters based on other column values.
Constraints: Apply validation rules and data quality checks.
Output Formats: Configure CSV, JSON, or Parquet export options.
Integration Patterns#
Batch Processing#
# Example: Generate multiple related datasets
configs = [
    customer_config,
    transaction_config,
    product_config,
]

for config in configs:
    # Submit each configuration as a separate batch generation job
    job = client.data_designer.jobs.create(
        config=config,
        rows=10000,
        name=f"batch-{config['name']}",
    )
    # Wait for the job to finish before moving on (see the polling sketch below)
    monitor_job(job.id)
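The monitor_job helper used above is not defined in this snippet; a minimal polling sketch against the job status endpoint could look like the following, with the base URL, status field, and status values as assumptions.
import time
import requests

BASE_URL = "http://localhost:8080"  # placeholder for your Data Designer endpoint

def monitor_job(job_id, poll_interval=10):
    """Poll a generation job until it reaches a terminal state."""
    while True:
        job = requests.get(f"{BASE_URL}/v1beta1/data-designer/jobs/{job_id}").json()
        status = job.get("status")  # assumed response field name
        print(f"Job {job_id}: {status}")
        if status in ("completed", "failed", "cancelled"):  # assumed terminal states
            return job
        time.sleep(poll_interval)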
Pipeline Integration#
Integrate NeMo Data Designer into your data pipeline:
Data Preparation: Generate synthetic training data.
Configuration Testing: Use preview endpoint to validate schemas.
Batch Generation: Create large datasets via job API.
Result Processing: Download and integrate generated data.
API Integration#
Use the REST API for programmatic integration:
Asynchronous Processing: Submit jobs and poll for completion.
Status Monitoring: Track job progress and resource usage.
Error Handling: Implement retry logic and graceful error recovery.
Result Management: Download and process generated datasets programmatically.
Limitations and Considerations#
Current Limitations#
Beta Status: NeMo Data Designer is in beta and subject to API changes.
Standalone Deployment: Currently requires deployment via Docker Compose.
Integration Dependencies: Manual integration required with other NeMo microservices.
Storage Requirements: Stores generated datasets locally.
Performance Considerations#
Generation Speed: Varies based on model complexity and column types.
Memory Usage: Large datasets may require significant memory for processing.
LLM Dependencies: Generation speed limited by model endpoint availability and latency.
Concurrent Jobs: Limited by available system resources and configured limits.
Troubleshooting Tips#
Asset Storage: Configure NEMO_MICROSERVICES_DATA_DESIGNER_ASSETS_STORAGE for person data generation.
Model Endpoints: Ensure LLM endpoints are accessible and properly configured.
Container Resources: Allocate sufficient memory and CPU for large-scale generation.
API Timeouts: Use job-based generation for large datasets rather than preview endpoint.
Next Steps#
Learn how to define column types for your data schema.
Explore advanced Jinja templating for complex data relationships.
Review the API reference for detailed endpoint documentation.
Check out the Python SDK guide for programmatic usage.
See the deployment guide for Docker Compose setup instructions.