Voice Command Training¶
Voice command training involves training a keyword detection model for a fixed set of keywords defined by the user.
A microphone is required to record the audio clips. Supported microphones are:
- Any headphones with built-in microphones
- Built-in computer microphones
- Multichannel microphone arrays, provided they can output post-processed, mixed, or mono-channel data.
The microphone should be placed correctly to capture good-quality recordings in a clean environment. Please follow the best practices for recording audio data.
Any audio recording software can be used to generate the audio clips, such as Audacity.
To record an audio clip using Audacity:
- Before opening Audacity, plug in the microphone.
- Open the Preferences Dialog Box: Edit > Preferences (or Ctrl + P)
- On the Devices tab, under Recording, select the microphone from the pull-down menu, and on the Channels menu select 1 (Mono).
- On the Quality tab, check that the Default sample rate is 16000 Hz, and change Default sample format to 16-bit.
- Press the Record button to start recording.
- Press the Stop button to stop recording after speaking the keyword.
- Export the audio clip as WAV (Microsoft) signed 16-bit PCM.
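Once a clip is exported, its format can be verified programmatically before it goes into the dataset. A minimal sketch using Python's standard `wave` module (the filename `keyword_demo.wav` and the synthesized tone are only for illustration):

```python
import math
import struct
import wave

def is_valid_clip(path):
    """Return True if the WAV file is 16 kHz, 16-bit PCM, mono."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 16000 and
                w.getsampwidth() == 2 and   # 2 bytes per sample = 16-bit
                w.getnchannels() == 1)      # mono

# Demo: synthesize a 0.5 s, 16 kHz mono tone and verify it.
with wave.open("keyword_demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    frames = b"".join(
        struct.pack("<h", int(10000 * math.sin(2 * math.pi * 440 * i / 16000)))
        for i in range(8000))
    w.writeframes(frames)

print(is_valid_clip("keyword_demo.wav"))  # expected: True
```

The same check can be run over a whole dataset directory to catch clips exported with the wrong sample rate or channel count.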
Training and Validation Dataset Generation¶
Set up your microphone as explained in the sections above, and finalize the list of keywords to be trained. Record one keyword at a time as separate audio clips, at least ten clips per keyword per speaker. Record at least 20 speakers.
Generate audio clips for the Unknown keywords class. Please refer to the Best Practices for details. Segment the generated clips into Training (90%) and Validation (10%) datasets. Both training and validation datasets should follow the directory structure <dataset_root>/<keyword>/<audio_clip>.wav.
Audio clips of a specific keyword from all speakers should be placed in a directory named after the keyword in the respective dataset directory. The Unknown keyword class directory must always be named unknownkeywords.
All audio clips must be 16 kHz, 16-bit mono WAV files. Spaces and special characters (other than hyphen and underscore) are not allowed in the names of keyword directories or audio clips.
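These layout and naming rules can be checked with a short script. A sketch using only the Python standard library (the directory and file names below are hypothetical):

```python
import os
import re

# Only letters, digits, hyphen, and underscore are allowed in names.
VALID_NAME = re.compile(r"^[A-Za-z0-9_-]+$")

def validate_dataset(dataset_root):
    """Return a list of entries that violate the naming/layout rules."""
    problems = []
    for keyword in sorted(os.listdir(dataset_root)):
        kw_dir = os.path.join(dataset_root, keyword)
        if not os.path.isdir(kw_dir) or not VALID_NAME.match(keyword):
            problems.append(keyword)
            continue
        for clip in sorted(os.listdir(kw_dir)):
            stem, ext = os.path.splitext(clip)
            if ext != ".wav" or not VALID_NAME.match(stem):
                problems.append(os.path.join(keyword, clip))
    return problems

# Demo layout: one valid keyword directory, one clip with an illegal space.
os.makedirs("demo_train/carter", exist_ok=True)
os.makedirs("demo_train/unknownkeywords", exist_ok=True)
open("demo_train/carter/clip_01.wav", "w").close()
open("demo_train/unknownkeywords/bad name.wav", "w").close()
print(validate_dataset("demo_train"))  # ['unknownkeywords/bad name.wav']
```

Running such a check before training avoids failures caused by stray files or misnamed keyword directories.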
Noise Profile Generation¶
You must provide noise profiles for the targeted deployment environments. The profiles should include audio recordings of the different background noises present in the target deployment environment, for example, fan noise, sounds of different machines, background chatter, music, etc.
Some pre-recorded noise profiles can be found in the UrbanSound dataset.
Record the targeted noises as WAV files using Audacity or any other recording software. Each noise clip should be at least one minute long. All noise clips must be 16 kHz, 16-bit mono WAV files. Spaces and special characters (other than hyphen and underscore) are not allowed in the names of the noise clips.
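The one-minute minimum can likewise be checked programmatically. A sketch with the standard `wave` module (the silent clip written here merely stands in for a real noise recording):

```python
import wave

def clip_seconds(path):
    """Duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / float(w.getframerate())

# Demo: write 60 s of 16 kHz mono silence and confirm it meets the minimum.
with wave.open("fan-noise.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000 * 60)  # 16000 frames/s * 60 s

print(clip_seconds("fan-noise.wav") >= 60.0)  # expected: True
```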
The dataset should contain recordings from speakers with diverse accents in both training and validation datasets. A larger dataset in terms of both number of speakers and number of audio clips per speaker improves the accuracy of the trained model.
Training and validation datasets should come from different speakers to get a reliable validation accuracy. The same audio clips should not be repeated, as duplicates add no improvement. The duration of audio clips should be longer than the configured keyword duration for better accuracy. Silence or any other speech must be removed from the beginning of audio clips.
The keyword clips should not contain any other word for more than 30% of the configured keyword duration. Audio clips provided for the Unknown keyword class can include:
- Words that are not part of the target detection keyword set.
- Random speech clips which do not contain the target keywords.
- Any other sounds, which are expected in the target deployment environment.
Ideally, the Unknown keyword class in the training dataset should be at least as big as the rest of the keyword classes combined. If n noise profiles/clips are provided, the Unknown keyword class in the validation dataset should be at least \((2n + 30)\) times the size of the rest of the keyword classes combined.
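As a worked example of this sizing rule (the clip counts and keyword names below are hypothetical):

```python
def unknown_class_minimums(keyword_clip_counts, num_noise_profiles):
    """Minimum Unknown-class sizes implied by the sizing rule above.

    keyword_clip_counts: clips per keyword class (excluding Unknown).
    """
    total_keyword_clips = sum(keyword_clip_counts.values())
    return {
        # Training: at least as big as all keyword classes combined.
        "training_min": total_keyword_clips,
        # Validation: at least (2*n + 30) times that combined size.
        "validation_min": (2 * num_noise_profiles + 30) * total_keyword_clips,
    }

# Hypothetical set: 3 keywords with 20 clips each, 5 noise profiles.
print(unknown_class_minimums({"carter": 20, "look": 20, "stop": 20}, 5))
# {'training_min': 60, 'validation_min': 2400}
```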
Data pre-processing involves data augmentation and feature extraction. The input dataset is first modified with multiple augmentations like:
- Mixing input noise profiles at varying intensities
- Time stretching
- Pitch shifting
- Dynamic range compression (DRC)
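Of these, noise mixing is the simplest to illustrate: the noise signal is scaled by a random gain and added to the clip. A plain-Python sketch (the gain range mirrors the training defaults of 0.1 to 0.4; the tiny sample lists stand in for real audio arrays):

```python
import random

def mix_noise(clip, noise, gain_min=0.1, gain_max=0.4, seed=None):
    """Mix a noise signal into a clip at a random gain (mono, equal length)."""
    rng = random.Random(seed)
    gain = rng.uniform(gain_min, gain_max)
    return [c + gain * n for c, n in zip(clip, noise)]

clip = [0.0, 0.5, -0.5, 0.25]
noise = [0.1, -0.1, 0.1, -0.1]
augmented = mix_noise(clip, noise, seed=0)
print(len(augmented) == len(clip))  # expected: True
```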
These augmentations help in generalizing the dataset for different environments and speaking styles. The spectral features of this augmented data are then extracted. The extracted feature set includes:
- Mel-Frequency Cepstral Coefficients (MFCC)
- First order Delta of MFCCs
- Second order Delta of MFCCs
This process outputs extracted features and normalizing coefficients. The normalizing coefficients are the mean and sigma (standard deviation) values of the extracted features of the training dataset. These are used to normalize the data input to the model.
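The normalization step itself is straightforward: each feature dimension is shifted by the training-set mean and scaled by its standard deviation. A sketch (the feature values below are made up; a real feature matrix would hold the MFCCs and their deltas):

```python
def mean_sigma(features):
    """Per-dimension mean and standard deviation of a feature matrix."""
    n = len(features)
    dims = len(features[0])
    mean = [sum(f[d] for f in features) / n for d in range(dims)]
    sigma = [(sum((f[d] - mean[d]) ** 2 for f in features) / n) ** 0.5
             for d in range(dims)]
    return mean, sigma

def normalize(features, mean, sigma):
    """Normalize feature frames with the training-set mean/sigma."""
    return [[(x - m) / s for x, m, s in zip(frame, mean, sigma)]
            for frame in features]

train = [[1.0, 10.0], [3.0, 14.0]]    # two frames, two feature dimensions
mean, sigma = mean_sigma(train)       # mean=[2.0, 12.0], sigma=[1.0, 2.0]
print(normalize(train, mean, sigma))  # [[-1.0, -1.0], [1.0, 1.0]]
```

At inference time, the same training-set coefficients (stored in the metadata) are applied to incoming features, so the model sees data on the same scale it was trained on.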
A DL architecture is designed to map extracted features to keyword probabilities for each keyword in the dataset. The training phase consists of training the keyword detection network on the features extracted in the data pre-processing stage. The network is trained until it converges to the best validation dataset accuracy.
The Voice Command Training application outputs the Keyword Detection Model and Metadata files upon successful execution. These two files are used by the Voice Command Detection feature for recognizing the commands.
Model training has the following limitations:
- The DL architecture used can detect up to 20 keywords with high accuracy. Accuracy might decrease as the number of keywords increases.
- For better performance, the keywords should be of approximately equal length. Large variations in keyword lengths degrade performance.
- Minimum keyword duration is 100 ms and maximum is 1000 ms for reliable detection. This duration should be larger than a single audio packet duration.
- A microphone which captures audio data at high SNR is required for reliable detection.
The training application can be triggered by running the following command from the Isaac SDK root directory:
bob@desktop:~/isaac$ bazel run apps/samples/voice_command_detection:training -- <training_options>
The following training options are supported by the application.
| Option | Description |
|--------|-------------|
| `-t TRAIN_DATASET_PATH, --train_dataset_path TRAIN_DATASET_PATH` | Absolute path to the training dataset. |
| | Absolute path to the validation dataset. |
| | Enable noise augmentation for training. Default: disabled |
| | Absolute path to the noise profiles (WAV files). |
| | Path to a directory where the processed data and checkpoints are temporarily stored. Default: `/tmp` |
| | Path to a directory where training logs are stored for TensorBoard usage. Default: `<tmpdir>/logs` |
| `-o MODEL_OUTPUT_PATH, --model_output_path MODEL_OUTPUT_PATH` | Path to a directory where the trained model and metadata are stored. |
| `-k KEYWORDS_LIST, --keywords_list KEYWORDS_LIST` | List of keywords to be detected. Keywords can be separated by commas, e.g. `-k carter,look,stop` or `-k carter look -k stop`. |
| | Duration of keywords in seconds, in the range [0.1, 1]. Default: 0.5 |
| | Number of epochs to run the training. Default: 100 |
| | Batch size used for training. Default: 32 |
| | Minimum noise gain applied during noise augmentation. Default: 0.1 |
| | Maximum noise gain applied during noise augmentation. Default: 0.4 |
| `--learning_rate LEARNING_RATE, --lr LEARNING_RATE` | Learning rate used for the Adamax optimizer. Default: 1e-5 |
| | Dropout value used for training the network. Default: 0.3 |
| | Keras checkpoint to load to continue training. This assumes the extracted features are available in `<tmpdir>/features/`. Defaults to not loading a checkpoint and starting fresh. |
| `-e EPOCH_NUMBER, --epoch_number EPOCH_NUMBER` | Epoch at which to start training when resuming from a checkpoint. Default: 0 |
| | Limit on GPU memory usage, in the range [0, 1]. Default: 0 (no limit) |
| | Path to a JSON file with all the configuration parameters. Command-line arguments take priority over it. |
| `-h, --help` | Show the help message and exit. |
Model and Metadata¶
The training application generates the model and its corresponding metadata file in the specified output folder. To use these for Voice Command Detection, update the configuration of the application to point to this model and use the metadata file as a secondary configuration file.
The metadata file provides placeholders for the node names of each of the three codelets: Voice Command Feature Extraction, Tensorflow Inference, and Voice Command Construction. Update these placeholders with the corresponding node names.
Note that if two or all three of these codelets share the same node, merge them under a single node name. Providing the same node name separately for each of these codelets causes a mismatch in configuration.