Datasets#

NeMo has scripts to convert several common ASR datasets into the format expected by the nemo_asr collection. You can get started with those datasets by following the instructions to run those scripts in the section appropriate to each dataset below.

If you have your own data and want to preprocess it to use with NeMo ASR models, refer to the Preparing Custom ASR Data section.

If you already have a dataset that you want to convert to a tarred format, refer to the Tarred Datasets section.

LibriSpeech#

Run the following scripts to download the LibriSpeech data and convert it into the format expected by nemo_asr. At least 250 GB of free space is required.

# install sox
sudo apt-get install sox
mkdir data
python get_librispeech_data.py --data_root=data --data_set=ALL

After this, the data folder should contain .wav files and .json manifests for the NeMo ASR data layer.

Each line is a training example. audio_filepath contains the path to the wav file, duration is the duration in seconds, and text is the transcript:

{"audio_filepath": "<absolute_path_to>/1355-39947-0000.wav", "duration": 11.3, "text": "psychotherapy and the community both the physician and the patient find their place in the community the life interests of which are superior to the interests of the individual"}
{"audio_filepath": "<absolute_path_to>/1355-39947-0001.wav", "duration": 15.905, "text": "it is an unavoidable question how far from the higher point of view of the social mind the psychotherapeutic efforts should be encouraged or suppressed are there any conditions which suggest suspicion of or direct opposition to such curative work"}

Fisher English Training Speech#

Run these scripts to convert the Fisher English Training Speech data into a format expected by the nemo_asr collection.

In brief, the following scripts convert the .sph files to .wav, slice those files into smaller audio samples, match the slices with their corresponding transcripts, and split the resulting audio segments into train, validation, and test sets (with one manifest each).

Note

  • 106 GB of space is required to run the .wav conversion

  • an additional 105 GB is required for the slicing and matching

  • sph2pipe is required in order to run the .wav conversion

Instructions

The following scripts assume that you already have the Fisher dataset from the Linguistic Data Consortium, with a directory structure that looks similar to the following:

FisherEnglishTrainingSpeech/
├── LDC2004S13-Part1
│   ├── fe_03_p1_transcripts
│   ├── fisher_eng_tr_sp_d1
│   ├── fisher_eng_tr_sp_d2
│   ├── fisher_eng_tr_sp_d3
│   └── ...
└── LDC2005S13-Part2
    ├── fe_03_p2_transcripts
    ├── fe_03_p2_sph1
    ├── fe_03_p2_sph2
    ├── fe_03_p2_sph3
    └── ...

The transcripts that will be used are located in the fe_03_p<1,2>_transcripts/data/trans directory. The audio files (.sph) are located in the remaining directories in an audio subdirectory.

  1. Convert the audio files from .sph to .wav by running:

    cd <nemo_root>/scripts/dataset_processing
    python fisher_audio_to_wav.py \
      --data_root=<fisher_root> --dest_root=<conversion_target_dir>
    

    This will place the unsliced .wav files in <conversion_target_dir>/LDC200[4,5]S13-Part[1,2]/audio-wav/. It will take several minutes to run.

  2. Process the transcripts and slice the audio data.

    python process_fisher_data.py \
      --audio_root=<conversion_target_dir> --transcript_root=<fisher_root> \
      --dest_root=<processing_target_dir> \
      --remove_noises
    

    This script splits the full dataset into train, validation, test sets, and places the audio slices in the corresponding folders in the destination directory. One manifest is written out per set, which includes each slice’s transcript, duration, and path.

    This will likely take around 20 minutes to run. Once finished, delete the unsliced 10-minute .wav files (a cleanup sketch follows below).
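
    One possible cleanup, assuming the audio-wav layout created in step 1 (double-check the path before deleting anything):

    find <conversion_target_dir> -path '*/audio-wav/*.wav' -delete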

2000 HUB5 English Evaluation Speech#

Run the following script to convert the HUB5 data into a format expected by the nemo_asr collection.

Similar to the Fisher dataset processing scripts, this script converts the .sph files to .wav, slices the audio files and transcripts into utterances, and combines them into segments of some minimum length (the default is 10 seconds). The resulting segments are all written out to an audio directory, and the corresponding transcripts are written to a manifest JSON file.

Note

  • 5 GB of free space is required to run this script

  • sph2pipe must also be installed

This script assumes you already have the 2000 HUB5 dataset from the Linguistic Data Consortium.

Run the following command to process the 2000 HUB5 English Evaluation Speech samples:

python process_hub5_data.py \
  --data_root=<path_to_HUB5_data> \
  --dest_root=<target_dir>

You can optionally include --min_slice_duration=<num_seconds> if you would like to change the minimum audio segment duration.
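
For example, to require that each combined segment is at least 20 seconds long (the value here is purely illustrative):

python process_hub5_data.py \
  --data_root=<path_to_HUB5_data> \
  --dest_root=<target_dir> \
  --min_slice_duration=20.0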

AN4 Dataset#

This is a small dataset recorded and distributed by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, etc. Information about this dataset can be found on the official CMU site.

  1. Download and extract the dataset (which is labeled “NIST’s Sphere audio (.sph) format (64M)”).

  2. Convert the .sph files to .wav using sox, and build one training and one test manifest.

    python process_an4_data.py --data_root=<path_to_extracted_data>
    

After the script finishes, the train_manifest.json and test_manifest.json can be found in the <data_root>/an4/ directory.

Aishell-1#

To download the Aishell-1 data and convert it into a format expected by nemo_asr, run:

# install sox
sudo apt-get install sox
mkdir data
python get_aishell_data.py --data_root=data

After the script finishes, the data folder should contain a data_aishell folder which contains a wav folder, a transcript folder, and related .json and vocab.txt files.

Aishell-2#

To process the AIShell-2 dataset, set the AIShell-2 data folder with --audio_folder and the destination for the processed files with --dest_folder in the command below. To generate files in the format supported by nemo_asr, run:

python process_aishell2_data.py --audio_folder=<data directory> --dest_folder=<destination directory>

After the script finishes, the train.json, dev.json, test.json, and vocab.txt files can be found in the dest_folder directory.

Preparing Custom ASR Data#

The nemo_asr collection expects each dataset to consist of a set of utterances in individual audio files plus a manifest that describes the dataset, with information about one utterance per line (.json). The audio files can be of any format supported by Pydub, though we recommend WAV files as they are the default and have been most thoroughly tested.

There should be one manifest file per dataset that will be passed in; therefore, if you want separate training and validation datasets, you should also have separate manifests. Otherwise, you will be loading validation data with your training data and vice versa.

Each line of the manifest should be in the following format:

{"audio_filepath": "/path/to/audio.wav", "text": "the transcription of the utterance", "duration": 23.147}

The audio_filepath field should provide an absolute path to the .wav file corresponding to the utterance. The text field should contain the full transcript for the utterance, and the duration field should reflect the duration of the utterance in seconds.

Each entry in the manifest (describing one audio file) should be bordered by ‘{’ and ‘}’ and must be contained on one line. The fields that describe the file should be separated by commas, and have the form "field_name": value, as shown above. There should be no extra lines in the manifest, i.e. there should be exactly as many lines in the manifest as there are audio files in the dataset.

Since the manifest specifies the path for each utterance, the audio files do not have to be located in the same directory as the manifest, or even in any specific directory structure.
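
If you are building the manifest yourself, it can be written with a few lines of Python. The sketch below uses only the standard library and assumes PCM .wav files; the hard-coded utterance list is a hypothetical stand-in for wherever your audio paths and transcripts actually come from.

import json
import wave

# Hypothetical (audio path, transcript) pairs; replace with your own data source.
utterances = [
    ("/path/to/audio/utt1.wav", "the transcription of the first utterance"),
    ("/path/to/audio/utt2.wav", "the transcription of the second utterance"),
]

with open("train_manifest.json", "w", encoding="utf-8") as manifest:
    for wav_path, transcript in utterances:
        # Compute the duration in seconds from the .wav header.
        with wave.open(wav_path, "rb") as wav_file:
            duration = wav_file.getnframes() / wav_file.getframerate()
        entry = {"audio_filepath": wav_path, "text": transcript, "duration": round(duration, 3)}
        manifest.write(json.dumps(entry) + "\n")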

Once there is a manifest that describes each audio file in the dataset, use the dataset by passing in the manifest file path in the experiment config file, e.g. as training_ds.manifest_filepath=<path/to/manifest.json>.
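
For example, with one of the example training scripts (the script name is shown only for illustration; the exact keys depend on the model's config file):

python speech_to_text_bpe.py \
  model.train_ds.manifest_filepath=<path/to/train_manifest.json> \
  model.validation_ds.manifest_filepath=<path/to/val_manifest.json>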

Tarred Datasets#

If experiments are run on a cluster with datasets stored on a distributed file system, the user will likely want to avoid constantly reading multiple small files and would prefer tarring their audio files. There are tarred versions of some NeMo ASR dataset classes for this case, such as the TarredAudioToCharDataset (corresponding to the AudioToCharDataset) and the TarredAudioToBPEDataset (corresponding to the AudioToBPEDataset). The tarred audio dataset classes in NeMo use WebDataset.

To use an existing tarred dataset instead of a non-tarred dataset, set is_tarred: true in the experiment config file. Then, pass in the paths to all of the audio tarballs in tarred_audio_filepaths, either as a list of filepaths, e.g. ['/data/shard1.tar', '/data/shard2.tar'], or in a single brace-expandable string, e.g. '/data/shard_{1..64}.tar' or '/data/shard__OP_1..64_CL_' (recommended, see note below).

Note

For brace expansion, there may be cases where {x..y} syntax cannot be used due to shell interference. This occurs most commonly inside SLURM scripts. Therefore, we provide a few equivalent replacements. Supported opening braces (equivalent to {) are (, [, < and the special tag _OP_. Supported closing braces (equivalent to }) are ), ], > and the special tag _CL_. For SLURM based tasks, we suggest the use of the special tags for ease of use.

As with non-tarred datasets, the manifest file should be passed in manifest_filepath. The dataloader assumes that the length of the manifest after filtering is the correct size of the dataset for reporting training progress.

The tarred_shard_strategy field of the config file can be set if you have multiple shards and are running an experiment with multiple workers. It defaults to scatter, which preallocates a set of shards per worker which do not change during runtime.
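
Putting these options together, a set of overrides for a tarred training set might look like the following sketch (the script name, shard count, and paths are illustrative):

python speech_to_text_bpe.py
...
model.train_ds.is_tarred=true
model.train_ds.manifest_filepath=/data/tarred_audio_manifest.json
model.train_ds.tarred_audio_filepaths='/data/shard__OP_1..64_CL_.tar'
model.train_ds.tarred_shard_strategy=scatter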

For more information about the individual tarred datasets and the parameters available, including shuffling options, see the corresponding class APIs in the Datasets section.

Warning

If using multiple workers, the number of shards should be divisible by the world size to ensure an even split among workers. If it is not divisible, logging will give a warning and training will proceed, but it will likely hang at the last epoch. In addition, if using distributed processing, each shard must have the same number of entries after filtering is applied, so that each worker ends up with the same number of files. We currently do not check for this in any dataloader, but the user’s program may hang if the shards are uneven.

Conversion to Tarred Datasets#

You can easily convert your existing NeMo-compatible ASR datasets using the conversion script here.

python convert_to_tarred_audio_dataset.py \
  --manifest_path=<path to the manifest file> \
  --target_dir=<path to output directory> \
  --num_shards=<number of tarfiles that will contain the audio> \
  --max_duration=<float representing maximum duration of audio samples> \
  --min_duration=<float representing minimum duration of audio samples> \
  --shuffle --shuffle_seed=0

This script shuffles the entries in the given manifest (if --shuffle is set, which we recommend), filters the audio files according to min_duration and max_duration, and tars the remaining audio files to the directory --target_dir in n shards, along with separate manifest and metadata files.

The files in the target directory should look similar to the following:

target_dir/
├── audio_1.tar
├── audio_2.tar
├── ...
├── metadata.yaml
└── tarred_audio_manifest.json

Note that the file structure is flattened such that all audio files are at the top level in each tarball. This ensures that filenames are unique in the tarred dataset and that the filepaths do not contain subdirectories; forward slashes in each audio_filepath are simply converted to underscores. For example, a manifest entry for /data/directory1/file.wav would be _data_directory1_file.wav in the tarred dataset manifest, and /data/directory2/file.wav would be converted to _data_directory2_file.wav.

Bucketing Datasets#

For training ASR models, audio clips with different lengths may be grouped into a batch, which makes it necessary to pad them all to the same length. This extra padding is a significant source of wasted computation. Splitting the training samples into buckets of similar lengths and sampling each batch from a single bucket increases computational efficiency and may result in a training speedup of more than 2x. To enable and use the bucketing feature, you need to create the bucketing version of the dataset by using the conversion script here. You may use --buckets_num to specify the number of buckets (4 to 8 buckets are recommended). The script creates multiple tarred datasets, one per bucket, based on the audio durations. The range of [min_duration, max_duration) is split into equal-sized buckets.

To enable the bucketing feature in the dataset section of the config files, you need to pass the multiple tarred datasets as a list of lists. If you pass just a list of strings, the datasets are simply concatenated, which is different from bucketing. Here is an example for 4 buckets and 512 shards:

python speech_to_text_bpe.py
...
model.train_ds.manifest_filepath=[[PATH_TO_TARS/bucket1/tarred_audio_manifest.json],
[PATH_TO_TARS/bucket2/tarred_audio_manifest.json],
[PATH_TO_TARS/bucket3/tarred_audio_manifest.json],
[PATH_TO_TARS/bucket4/tarred_audio_manifest.json]]
model.train_ds.tarred_audio_filepaths=[[PATH_TO_TARS/bucket1/audio__OP_0..511_CL_.tar],
[PATH_TO_TARS/bucket2/audio__OP_0..511_CL_.tar],
[PATH_TO_TARS/bucket3/audio__OP_0..511_CL_.tar],
[PATH_TO_TARS/bucket4/audio__OP_0..511_CL_.tar]]

When bucketing is enabled, in each epoch all GPUs first use the first bucket, then move on to the second bucket, and so on. This guarantees that all GPUs use the same bucket at the same time. Bucketing reduces the amount of padding in each batch and speeds up training significantly without significantly hurting accuracy.

There are two types of batching:

  • Fixed-size bucketing: all batches have the same number of samples, specified by train_ds.batch_size

  • Adaptive-size bucketing: uses different batch sizes for each bucket.

Adaptive-size bucketing helps to increase GPU utilization and speed up training: batches sampled from buckets with shorter audio lengths can be larger, which improves GPU utilization. You may use train_ds.bucketing_batch_size to enable adaptive batching and specify the batch sizes for the buckets. When bucketing_batch_size is not set, train_ds.batch_size is used for all buckets (fixed-size bucketing).

bucketing_batch_size can be set as an integer or a list of integers to explicitly specify the batch size for each bucket. If bucketing_batch_size is set to an integer, linear scaling is used to scale up the batch sizes for buckets with shorter audio. For example, setting train_ds.bucketing_batch_size=8 for 4 buckets would use the batch sizes [32,24,16,8] for the different buckets. When bucketing_batch_size is set, train_ds.batch_size needs to be set to 1.
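
For example, the adaptive batch sizes above could be specified explicitly as follows (the values and script name are illustrative):

python speech_to_text_bpe.py
...
model.train_ds.batch_size=1
model.train_ds.bucketing_batch_size=[32,24,16,8]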

Training an ASR model on audio sorted by length may affect the accuracy of the model. We introduced some strategies to mitigate this. We support three types of bucketing strategies:

  • fixed_order: the same order of buckets is used for all epochs

  • synced_randomized (default): the order of the buckets is shuffled every epoch, so each epoch uses a different order

  • fully_randomized: similar to synced_randomized, but each GPU has its own random order, so the GPUs are not synced

The parameter train_ds.bucketing_strategy can be set to specify one of these strategies. The recommended strategy is synced_randomized, which gives the highest training speedup. The fully_randomized strategy has a lower speedup than synced_randomized but may give better accuracy.
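
For example, to opt into the fully randomized ordering (shown with the same example script):

python speech_to_text_bpe.py
...
model.train_ds.bucketing_strategy="fully_randomized"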

Bucketing may improve the training speed by more than 2x but may slightly affect the final accuracy of the model. Training for more epochs and using the synced_randomized strategy help to close this gap. Currently, the bucketing feature is only supported for tarred datasets.