Overview

BERT, or Bidirectional Encoder Representations from Transformers, is a method of pre-training language representations that obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. This model is based on the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper. NVIDIA's implementation of BERT is an optimized version of the Hugging Face implementation, leveraging mixed-precision arithmetic and Tensor Cores on Volta V100 and Ampere A100 GPUs for faster training times while maintaining target accuracy.

This tutorial contains scripts to interactively launch data download, training, benchmarking, and inference routines for both pre-training and fine-tuning on tasks such as question answering. The major differences between the implementation described in the paper and this version of BERT are as follows:

  • Scripts to download Wikipedia and BookCorpus datasets

  • Fused LAMB optimizer to support training with larger batches

  • Fused Adam optimizer for fine-tuning tasks

  • Fused CUDA kernels for better LayerNorm performance

  • Automatic mixed precision (AMP) training support (see the sketch after this list)

  • Scripts to launch training on multiple nodes
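
As a sketch of how the AMP support and fused optimizers fit together, the loop below uses PyTorch's torch.cuda.amp together with Apex's FusedLAMB. The toy Linear model and random tensors are placeholders for illustration only, not the repository's actual training loop:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Placeholder model standing in for BERT; illustration only.
model = torch.nn.Linear(1024, 1024).cuda()

# FusedLAMB ships with NVIDIA Apex; fall back to AdamW when Apex is absent.
try:
    from apex.optimizers import FusedLAMB
    optimizer = FusedLAMB(model.parameters(), lr=1e-3)
except ImportError:
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

scaler = GradScaler()  # scales the loss to avoid FP16 gradient underflow

for step in range(10):
    inputs = torch.randn(32, 1024, device="cuda")
    targets = torch.randn(32, 1024, device="cuda")
    optimizer.zero_grad()
    with autocast():  # run the forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscale gradients, then take an optimizer step
    scaler.update()                # adjust the loss scale for the next iteration
```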

This model trains with mixed precision on Tensor Cores and provides a push-button solution to pre-training on a corpus of choice. As a result, researchers can get results 4x faster than training without Tensor Cores.

The architecture of the BERT model is almost identical to the Transformer model first introduced in the Attention Is All You Need paper. The main innovation of BERT lies in the pre-training step, where the model is trained on two unsupervised prediction tasks, masked language modeling and next-sentence prediction, using a large text corpus. Training on these unsupervised tasks produces a generic language model, which can then be quickly fine-tuned to achieve state-of-the-art performance on language processing tasks.
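
To make the first of those tasks concrete, the snippet below sketches BERT-style masked language modeling corruption as described in the paper: roughly 15% of tokens become prediction targets, of which 80% are replaced with [MASK], 10% with a random token, and 10% are left unchanged. The function name and the -100 ignore index are illustrative conventions, not the repository's exact code:

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style masked-LM corruption (illustrative sketch)."""
    labels = input_ids.clone()

    # Select ~15% of positions as prediction targets.
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100  # loss is computed only on masked positions

    # 80% of the targets are replaced with the [MASK] token.
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # 10% of the targets (half of the remainder) get a random vocabulary token.
    randomized = (torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool()
                  & masked & ~replaced)
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]

    # The remaining 10% of targets keep their original token.
    return input_ids, labels
```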

For more information and the open source code: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT
