Punctuation and Capitalization Model

Automatic Speech Recognition (ASR) systems typically generate text with no punctuation and no capitalization. There are two issues with non-punctuated ASR output:

  • it can be difficult to read and understand;

  • models for some downstream tasks, such as named entity recognition, machine translation, or text-to-speech, are usually trained on punctuated datasets, and using raw ASR output as input to these models can degrade their performance.

Quick Start

from nemo.collections.nlp.models import PunctuationCapitalizationModel

# to get the list of pre-trained models
PunctuationCapitalizationModel.list_available_models()

# Download and load the pre-trained BERT-based model
model = PunctuationCapitalizationModel.from_pretrained("punctuation_en_bert")

# try the model on a few examples
model.add_punctuation_capitalization(['how are you', 'great how about you'])

Model Description

For each word in the input text, the Punctuation and Capitalization model:

  1. predicts a punctuation mark that should follow the word (if any). By default, the model supports commas, periods and question marks.

  2. predicts if the word should be capitalized or not.

The Punctuation and Capitalization model jointly trains two token-level classifiers on top of a pre-trained language model, such as BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [NLP-PUNCT1].
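The sketch below illustrates this joint architecture (it is not NeMo's actual implementation): a shared pre-trained encoder from HuggingFace Transformers feeds two linear token-classification heads, one for punctuation labels and one for capitalization labels.

import torch.nn as nn
from transformers import AutoModel

class PunctCapitSketch(nn.Module):
    # Illustrative only: NeMo builds its heads from the punct_head/capit_head config.
    def __init__(self, lm_name="bert-base-uncased", num_punct_labels=4, num_capit_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(lm_name)      # shared pre-trained LM
        hidden = self.encoder.config.hidden_size
        self.punct_head = nn.Linear(hidden, num_punct_labels)  # O , . ?
        self.capit_head = nn.Linear(hidden, num_capit_labels)  # O U

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        # one label prediction per token from each head; the per-head losses are combined during training
        return self.punct_head(hidden_states), self.capit_head(hidden_states)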

Note

We recommend you try this model in a Jupyter notebook (it can run on Google Colab): NeMo/tutorials/nlp/Punctuation_and_Capitalization.ipynb.

Connect to an instance with a GPU (Runtime -> Change runtime type -> select “GPU” for hardware accelerator)

An example script for training the model can be found at: NeMo/examples/nlp/token_classification/punctuation_capitalization_train.py.

An example script for running evaluation and inference can be found at: NeMo/examples/nlp/token_classification/punctuation_capitalization_evaluate.py.

The default configuration file for the model can be found at: NeMo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml.

Raw Data Format

The Punctuation and Capitalization model can work with any text dataset, although it is recommended to balance the data, especially for the punctuation task. Before pre-processing the data into the format expected by the model, the data should be split into train.txt and dev.txt (and, optionally, test.txt). Each line in train.txt/dev.txt/test.txt should represent one or more full and/or truncated sentences.

Example of the train.txt/dev.txt file:

When is the next flight to New York?
The next flight is ...
....

The source_data_dir structure should look like this:

.
|--source_data_dir
  |-- dev.txt
  |-- train.txt
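For example, a raw punctuated corpus stored in a single file can be split into the two files above with a few lines of Python (a minimal sketch; the corpus.txt file name and the 90/10 split ratio are illustrative choices):

import random

# Read the raw punctuated corpus (one or more sentences per line).
with open("corpus.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

random.seed(42)                  # make the split reproducible
random.shuffle(lines)
split = int(0.9 * len(lines))    # 90% train / 10% dev

with open("source_data_dir/train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines[:split]) + "\n")
with open("source_data_dir/dev.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines[split:]) + "\n")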

NeMo Data Format

The punctuation and capitalization model expects the data in the following format:

The training and evaluation data is divided into 2 files: text.txt and labels.txt. Each line of the text.txt file contains one text sequence, with words separated by spaces, i.e.

[WORD] [SPACE] [WORD] [SPACE] [WORD], for example:

when is the next flight to new york
the next flight is ...
...

The labels.txt file contains the corresponding labels for each word in text.txt; the labels are separated by spaces. Each label in the labels.txt file consists of 2 symbols:

  • the first symbol of the label indicates what punctuation mark should follow the word (where O means no punctuation is needed);

  • the second symbol determines if the word needs to be capitalized or not (where U indicates that the word should be upper cased, and O means no capitalization is needed).

By default, the following punctuation marks are considered: commas, periods, and question marks; all other punctuation marks are removed from the data. This can be changed by introducing new labels in the labels.txt files.

Each line of labels.txt should follow the format: [LABEL] [SPACE] [LABEL] [SPACE] [LABEL]. For example, the labels for the above text.txt file should be:

OU OO OO OO OO OO OU ?U
OU OO OO OO ...
...

The complete list of all possible labels for this task used in this tutorial is: OO, ,O, .O, ?O, OU, ,U, .U, ?U.
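To make the scheme concrete, here is a small sketch (not the NeMo conversion script) that turns a punctuated sentence into a text.txt line and its matching labels.txt line:

# Minimal illustration of the labeling scheme described above: for each word,
# the first label symbol is the punctuation mark that follows it (or O),
# the second is U if the word is capitalized, else O.
def to_text_and_labels(sentence, allowed_punct=",.?"):
    words, labels = [], []
    for token in sentence.split():
        punct = token[-1] if token and token[-1] in allowed_punct else "O"
        word = token.rstrip(allowed_punct)
        capit = "U" if word[:1].isupper() else "O"
        words.append(word.lower())
        labels.append(punct + capit)
    return " ".join(words), " ".join(labels)

text, labels = to_text_and_labels("When is the next flight to New York?")
print(text)    # when is the next flight to new york
print(labels)  # OU OO OO OO OO OO OU ?U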

Converting Raw Data to NeMo Format

To pre-process the raw text data, stored under source_data_dir (see the Raw Data Format section), run the following command:

python examples/nlp/token_classification/data/prepare_data_for_punctuation_capitalization.py \
       -s <PATH_TO_THE_SOURCE_FILE> \
       -o <PATH_TO_THE_OUTPUT_DIRECTORY>

Required Arguments for Dataset Conversion

  • -s or --source_file: path to the raw source file

  • -o or --output_dir: path to the directory where the converted files will be stored

After the conversion, output_dir should contain labels_*.txt and text_*.txt files. The default file names for training and evaluation in conf/punctuation_capitalization_config.yaml are the following:

.
|--output_dir
  |-- labels_dev.txt
  |-- labels_train.txt
  |-- text_dev.txt
  |-- text_train.txt
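A quick sanity check (not part of NeMo) is to verify that every line of a text file has exactly as many words as the corresponding line of its labels file:

# Check that the converted text and labels files are aligned line by line.
def check_alignment(text_file, labels_file):
    with open(text_file, encoding="utf-8") as t, open(labels_file, encoding="utf-8") as l:
        for i, (text_line, label_line) in enumerate(zip(t, l), start=1):
            n_words, n_labels = len(text_line.split()), len(label_line.split())
            assert n_words == n_labels, f"line {i}: {n_words} words vs {n_labels} labels"

check_alignment("output_dir/text_train.txt", "output_dir/labels_train.txt")
check_alignment("output_dir/text_dev.txt", "output_dir/labels_dev.txt")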

Training Punctuation and Capitalization Model

The language model is initialized with a pre-trained model from HuggingFace Transformers, unless the user provides a pre-trained checkpoint for the language model. An example configuration file for training the model can be found at: NeMo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml.

The specification can be roughly grouped into the following categories:

  • Parameters that describe the training process: trainer

  • Parameters that describe the datasets: model.dataset, model.train_ds, model.validation_ds

  • Parameters that describe the model: model

More details about the parameters in the configuration file can be found below and in the model's config file:

  • pretrained_model (string): pre-trained model name or path to a pre-trained model .nemo file

  • model.dataset.data_dir (string): path to the data converted to the format specified above

  • model.punct_head.punct_num_fc_layers (integer): number of fully connected layers in the punctuation head

  • model.punct_head.fc_dropout (float): dropout to apply to the input hidden states

  • model.punct_head.activation (string): activation function to use between fully connected layers

  • model.punct_head.use_transformer_init (bool): whether to initialize the weights of the classifier head with the same approach used in the Transformer

  • model.capit_head.capit_num_fc_layers (integer): number of fully connected layers in the capitalization head

  • model.capit_head.fc_dropout (float): dropout to apply to the input hidden states

  • model.capit_head.activation (string): activation function to use between fully connected layers

  • model.capit_head.use_transformer_init (bool): whether to initialize the weights of the classifier head with the same approach used in the Transformer

  • model.train_ds.text_file (string): name of the text training file, located in data_dir

  • model.train_ds.labels_file (string): name of the labels training file, located in data_dir, such as labels_train.txt

  • model.train_ds.num_samples (integer): number of samples to use from the training dataset; -1 means use all

  • model.validation_ds.text_file (string): name of the text file for evaluation, located in data_dir

  • model.validation_ds.labels_file (string): name of the labels dev file, located in data_dir, such as labels_dev.txt

  • model.validation_ds.num_samples (integer): number of samples to use from the dev set; -1 means use all

See also Model NLP.
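These parameters live in the YAML configuration file and can also be inspected or overridden in Python with OmegaConf before launching training (a minimal sketch, assuming the repository-relative path above):

from omegaconf import OmegaConf

# Load the default config, point it at your data, and inspect one of the heads.
cfg = OmegaConf.load("examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml")
cfg.model.dataset.data_dir = "<PATH/TO/DATA_DIR>"   # placeholder, fill in before use
print(OmegaConf.to_yaml(cfg.model.punct_head))       # print the punctuation head parameters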

To train the model from scratch, run:

python examples/nlp/token_classification/punctuation_capitalization_train.py \
       model.dataset.data_dir=<PATH/TO/DATA_DIR> \
       trainer.gpus=[0,1] \
       optim.name=adam \
       optim.lr=0.0001 \
       model.nemo_path=<PATH/TO/SAVE/.nemo>

The above command starts model training on GPUs 0 and 1 with the Adam optimizer and a learning rate of 0.0001; the trained model is stored at the specified <PATH/TO/SAVE/.nemo> path.
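The saved .nemo file can later be loaded back in Python; restore_from is the standard NeMo API for restoring a model from a .nemo checkpoint:

from nemo.collections.nlp.models import PunctuationCapitalizationModel

# Restore the model trained above from its .nemo checkpoint and run a quick check.
model = PunctuationCapitalizationModel.restore_from("<PATH/TO/SAVE/.nemo>")
print(model.add_punctuation_capitalization(["how are you"]))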

To train from the pre-trained model, use:

python examples/nlp/token_classification/punctuation_capitalization_train.py \
       model.dataset.data_dir=<PATH/TO/DATA_DIR> \
       pretrained_model=<PATH/TO/SAVE/.nemo>

Required Arguments for Training

  • model.dataset.data_dir: Path to the data_dir with the pre-processed data files.

Note

All parameters defined in the configuration file can also be changed from the command line. For example, the sample configuration file mentioned above has validation_ds.batch_size set to 64. If you see that GPU utilization can be improved further with a larger batch size, you can override the value by adding validation_ds.batch_size=128 to the command line. You can do the same with any of the parameters defined in the sample configuration file.

Inference

An example script for running inference on a few examples can be found at examples/nlp/token_classification/punctuation_capitalization_evaluate.py.

To run inference with the pre-trained model on a few examples, run:

python punctuation_capitalization_evaluate.py \
       pretrained_model=<PRETRAINED_MODEL>
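The same inference can also be run from Python, for example on raw ASR output stored one utterance per line (the asr_output.txt file name is an illustrative assumption):

from nemo.collections.nlp.models import PunctuationCapitalizationModel

model = PunctuationCapitalizationModel.from_pretrained("punctuation_en_bert")

# Read un-punctuated ASR output, one utterance per line.
with open("asr_output.txt", encoding="utf-8") as f:
    queries = [line.strip() for line in f if line.strip()]

for punctuated in model.add_punctuation_capitalization(queries):
    print(punctuated)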

Model Evaluation

An example script for evaluating the pre-trained model can be found at examples/nlp/token_classification/punctuation_capitalization_evaluate.py.

To run evaluation of the pre-trained model, run:

python punctuation_capitalization_evaluate.py \
       model.dataset.data_dir=<PATH/TO/DATA/DIR>  \
       pretrained_model=punctuation_en_bert \
       model.test_ds.text_file=<text_dev.txt> \
       model.test_ds.labels_file=<labels_dev.txt>

Required Arguments

  • pretrained_model: pre-trained PunctuationCapitalization model from list_available_models() or path to a .nemo file, for example: punctuation_en_bert or your_model.nemo

  • model.dataset.data_dir: path to the directory that contains model.test_ds.text_file and model.test_ds.labels_file

During evaluation of the test_ds, the script generates two classification reports: one for the capitalization task and one for the punctuation task. These classification reports include the following metrics:

  • Precision

  • Recall

  • F1

More details about these metrics can be found here.
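As an illustration of what these numbers mean, precision, recall, and F1 for token-level labels can be computed with scikit-learn's classification_report (NeMo produces its own reports; the toy labels below are made up):

from sklearn.metrics import classification_report

# Toy punctuation labels: reference labels vs. what a model predicted.
true_labels = ["O", "O", "?", "O", "O", ",", "O", "."]
pred_labels = ["O", "O", "?", "O", ",", "O", "O", "."]
print(classification_report(true_labels, pred_labels, zero_division=0))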

References

NLP-PUNCT1

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805, 2018.