Class PreprocessNLPStage

Base Type

  • public mrc::pymrc::PythonNode<std::shared_ptr<MultiMessage>, std::shared_ptr<MultiInferenceMessage>>

class PreprocessNLPStage : public mrc::pymrc::PythonNode<std::shared_ptr<MultiMessage>, std::shared_ptr<MultiInferenceMessage>>

Pre-processing stage that tokenizes raw text input into NLP input data for inference.

Public Types

using base_t = mrc::pymrc::PythonNode<std::shared_ptr<MultiMessage>, std::shared_ptr<MultiInferenceMessage>>

Public Functions

PreprocessNLPStage(std::string vocab_hash_file, uint32_t sequence_length, bool truncation, bool do_lower_case, bool add_special_token, int stride = -1, std::string column = "data")

Construct a new Preprocess NLP Stage object.

Parameters
  • vocab_hash_file – : Path to hash file containing vocabulary of words with token-ids. This can be created from the raw vocabulary using the cudf.utils.hash_vocab_utils.hash_vocab function.

  • sequence_length – : Sequence length to use (two special tokens are added for the NER classification job).

  • truncation – : If set to true, strings will be truncated and padded to max_length; each input string will result in exactly one output sequence. If set to false, there may be multiple output sequences when max_length is smaller than the number of generated tokens.

  • do_lower_case – : If set to true, original text will be lowercased before encoding.

  • add_special_token – : Whether or not to encode the sequences with the special tokens of the BERT classification model.

  • stride – : If truncation == false and the tokenized string is larger than max_length, the sequences containing the overflowing token-ids can contain duplicated token-ids from the main sequence. If stride is equal to max_length, there are no duplicated tokens. If stride is 80% of max_length, the last 20% of the first sequence is repeated at the start of the second sequence, and so on until the entire string is encoded.

  • column – : Name of the string column to operate on, defaults to "data".
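
The sketch below illustrates how this constructor might be invoked with typical BERT-style settings. The header path and the vocabulary hash file name are placeholder assumptions, not values taken from this page, and in practice the stage is usually wired into a pipeline through the pipeline builder rather than constructed standalone.

// Minimal sketch: construct the stage with typical BERT-style settings,
// following the parameter order documented above.
#include <memory>
#include <string>

// #include "morpheus/stages/preprocess_nlp.hpp"  // assumed header location

std::shared_ptr<PreprocessNLPStage> make_preprocess_nlp_stage()
{
    return std::make_shared<PreprocessNLPStage>(
        "bert-base-uncased-hash.txt",  // vocab_hash_file (hypothetical path)
        256,                           // sequence_length
        true,                          // truncation
        true,                          // do_lower_case
        true,                          // add_special_token
        -1,                            // stride (default)
        "data");                       // column (default)
}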
