Text Integration for Audio Data

Convert processed audio data from AudioBatch to DocumentBatch format using the built-in AudioToDocumentStage. This enables you to export audio processing results or integrate with custom text processing workflows.

How it Works

The AudioToDocumentStage provides straightforward format conversion between NeMo Curator’s audio and text data structures:

  1. Format Conversion: Transform AudioBatch objects to DocumentBatch format
  2. Metadata Preservation: All fields from the audio data are preserved in the conversion
  3. Export Ready: Convert audio processing results to pandas DataFrame format for analysis or export

Common use cases:

  • Export ASR results and quality metrics for analysis
  • Save filtered audio datasets with transcriptions
  • Integrate audio processing outputs with downstream text workflows

Basic Conversion

AudioBatch to DocumentBatch

Use AudioToDocumentStage to convert audio processing results to document format:

```python
from nemo_curator.stages.audio.io.convert import AudioToDocumentStage
from nemo_curator.tasks import AudioBatch

# Convert audio data to DocumentBatch format
converter = AudioToDocumentStage()

# Input: AudioBatch with audio processing results
audio_batch = AudioBatch(data=[
    {
        "audio_filepath": "/data/audio/sample.wav",
        "text": "ground truth text",
        "pred_text": "asr predicted text",
        "wer": 12.5,
        "duration": 3.2,
    }
])

# Output: DocumentBatch with pandas DataFrame
document_batches = converter.process(audio_batch)
document_batch = document_batches[0]

# Access the converted data
print(f"Converted {len(document_batch.data)} audio records to DocumentBatch")
```

Parameters:

  • AudioToDocumentStage() has no configuration parameters; it performs direct format conversion

Returns:

  • List of DocumentBatch objects containing a pandas DataFrame with all original audio fields

What Gets Preserved

The conversion preserves all fields from your audio processing pipeline:

```python
# All audio processing results are maintained:
# - audio_filepath: Original audio file reference
# - text: Ground truth transcription (if available)
# - pred_text: ASR prediction
# - wer: Word Error Rate (if calculated)
# - duration: Audio duration (if calculated)
# - Any other metadata fields you've added
```

Field names and values are preserved exactly as they appear in the AudioBatch. No data transformation or cleaning is performed during conversion.
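In practice, the conversion behaves like building a pandas DataFrame from the AudioBatch's list-of-dict entries. The standalone sketch below (pure pandas, no NeMo Curator required) illustrates what that preservation means: every key becomes a column, and values pass through unchanged.

```python
import pandas as pd

# Each AudioBatch entry is a dict; the DataFrame gets one column per key,
# with values carried over untouched.
entries = [
    {"audio_filepath": "/data/audio/a.wav", "pred_text": "hello", "wer": 12.5},
    {"audio_filepath": "/data/audio/b.wav", "pred_text": "world", "wer": 3.0},
]
df = pd.DataFrame(entries)
print(list(df.columns))  # ['audio_filepath', 'pred_text', 'wer']
```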

Integration in Pipelines

Complete Audio Processing with Export

The most common use case is adding AudioToDocumentStage at the end of your audio pipeline to enable result export:

```python
from nemo_curator.pipeline import Pipeline
from nemo_curator.backends.xenna import XennaExecutor
from nemo_curator.stages.audio.datasets.fleurs.create_initial_manifest import CreateInitialManifestFleursStage
from nemo_curator.stages.audio.inference.asr_nemo import InferenceAsrNemoStage
from nemo_curator.stages.audio.metrics.get_wer import GetPairwiseWerStage
from nemo_curator.stages.audio.common import GetAudioDurationStage
from nemo_curator.stages.audio.io.convert import AudioToDocumentStage
from nemo_curator.stages.text.io.writer import JsonlWriter
from nemo_curator.stages.resources import Resources

# Create pipeline that processes audio and exports results
pipeline = Pipeline(name="audio_processing_with_export")

# 1. Load audio data
pipeline.add_stage(CreateInitialManifestFleursStage(
    lang="en_us",
    split="test",
    raw_data_dir="./audio_data",
).with_(batch_size=8))

# 2. Run ASR inference
pipeline.add_stage(InferenceAsrNemoStage(
    model_name="nvidia/stt_en_fastconformer_hybrid_large_pc",
    pred_text_key="pred_text",
).with_(resources=Resources(gpus=1.0)))

# 3. Calculate quality metrics
pipeline.add_stage(GetPairwiseWerStage(
    text_key="text",
    pred_text_key="pred_text",
    wer_key="wer",
))
pipeline.add_stage(GetAudioDurationStage(
    audio_filepath_key="audio_filepath",
    duration_key="duration",
))

# 4. Convert to DocumentBatch for export
pipeline.add_stage(AudioToDocumentStage())

# 5. Export to JSONL format
pipeline.add_stage(JsonlWriter(path="/output/processed_audio_results"))

# Execute pipeline
executor = XennaExecutor()
pipeline.run(executor)
```

Output format: The JsonlWriter creates a JSONL file where each line contains one audio sample with all fields:

```json
{"audio_filepath": "/data/audio/sample1.wav", "text": "hello world", "pred_text": "hello world", "wer": 0.0, "duration": 1.5}
{"audio_filepath": "/data/audio/sample2.wav", "text": "test audio", "pred_text": "test odio", "wer": 50.0, "duration": 2.1}
```
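Once exported, the JSONL results can be loaded back with `pandas.read_json(..., lines=True)` for analysis. The sketch below is self-contained: the inline sample stands in for your exported file, so in practice you would point `read_json` at a file under the path passed to `JsonlWriter`.

```python
import io
import pandas as pd

# Inline stand-in for an exported JSONL file; replace with the real path
# from your JsonlWriter output when running against a pipeline result.
sample = io.StringIO(
    '{"audio_filepath": "/data/audio/sample1.wav", "pred_text": "hello world", "wer": 0.0, "duration": 1.5}\n'
    '{"audio_filepath": "/data/audio/sample2.wav", "pred_text": "test odio", "wer": 50.0, "duration": 2.1}\n'
)
df = pd.read_json(sample, lines=True)

# Summarize ASR quality and keep low-WER samples for further curation
print(df["wer"].describe())
good = df[df["wer"] < 25.0]
print(f"Kept {len(good)} of {len(df)} samples under 25% WER")
```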

Custom Integration

While AudioToDocumentStage converts audio data to DocumentBatch format, NeMo Curator’s built-in text processing stages (filters, classifiers, etc.) are designed for text documents, not audio transcriptions. For audio-specific text processing, implement custom stages that operate on the converted DocumentBatch data.

Example: Custom Text Processing

```python
import pandas as pd

from nemo_curator.stages.function_decorators import processing_stage
from nemo_curator.tasks import DocumentBatch

@processing_stage(name="custom_transcription_filter")
def filter_transcriptions(document_batch: DocumentBatch) -> DocumentBatch:
    """Custom filtering of ASR transcriptions."""
    # Access the pandas DataFrame
    df = document_batch.data

    # Example: filter by transcription length (keep transcriptions > 10 chars)
    df = df[df["pred_text"].str.len() > 10]

    # Example: filter by WER if available (keep WER < 50%)
    if "wer" in df.columns:
        df = df[df["wer"] < 50.0]

    return DocumentBatch(
        data=df,
        task_id=document_batch.task_id,
        dataset_name=document_batch.dataset_name,
    )
```

Output Format

After conversion, your data will be in DocumentBatch format with a pandas DataFrame:

```python
# Example output structure
document_batch.data  # pandas DataFrame with columns:
# - audio_filepath: "/path/to/audio.wav"
# - text: "ground truth transcription"
# - pred_text: "asr prediction"
# - wer: 15.2
# - duration: 3.4
# - [any other fields from your audio processing]
```

Limitations

Text Processing Integration: NeMo Curator’s text processing stages accept DocumentBatch inputs, but they are tuned for text documents such as articles and web pages, not for audio-derived transcriptions. Implement custom processing stages for audio-specific workflows.

Reasons for incompatibility:

  • Text filters assume document-level content (e.g., paragraph structure, word count thresholds designed for articles)
  • ASR transcriptions have different characteristics (typically shorter, conversational, and prone to recognition errors)
  • Audio-specific metrics (WER, duration, speech rate) require custom filtering logic

Recommendation: Use PreserveByValueStage for audio quality filtering, or create custom stages for transcription-specific processing.
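To illustrate what value-based quality filtering looks like on the converted data, here is a minimal pure-pandas sketch. The `preserve_by_value` helper and its operator mapping are our own illustration of the pattern, not the PreserveByValueStage API itself.

```python
import operator
import pandas as pd

# Hypothetical helper mirroring value-based filtering: keep rows where
# df[key] <op> target holds. This is an illustration, not the stage's API.
OPS = {"lt": operator.lt, "le": operator.le, "gt": operator.gt,
       "ge": operator.ge, "eq": operator.eq}

def preserve_by_value(df: pd.DataFrame, key: str, target, op: str = "le") -> pd.DataFrame:
    return df[OPS[op](df[key], target)]

df = pd.DataFrame({
    "audio_filepath": ["a.wav", "b.wav"],
    "wer": [12.5, 75.0],
})
kept = preserve_by_value(df, "wer", 50.0, "le")
print(kept)  # only a.wav remains (WER 12.5 <= 50.0)
```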