stages.text.models.tokenizer
Module Contents

Classes
TokenizerStage: Tokenizer stage for Hugging Face models.
API
- class stages.text.models.tokenizer.TokenizerStage(
      model_identifier: str,
      cache_dir: str | None = None,
      hf_token: str | None = None,
      text_field: str = 'text',
      max_chars: int | None = None,
      max_seq_length: int | None = None,
      padding_side: Literal['left', 'right'] = 'right',
      sort_by_length: bool = True,
      unk_token: bool = False,
  )
  Bases: nemo_curator.stages.base.ProcessingStage[nemo_curator.tasks.DocumentBatch, nemo_curator.tasks.DocumentBatch]

  Tokenizer stage for Hugging Face models.
  Args:
      model_identifier: The identifier of the Hugging Face model.
      cache_dir: The Hugging Face cache directory. Defaults to None.
      hf_token: Hugging Face token for downloading the model, if needed. Defaults to None.
      text_field: The name of the text field in the input data. Defaults to "text".
      max_chars: Limits the total number of characters that can be fed to the tokenizer. If None, text is not truncated. Defaults to None.
      max_seq_length: Limits the length of the sequence returned by the tokenizer. If None, the tokenizer's model_max_length is used. Defaults to None.
      padding_side: The side on which to pad the input tokens. Defaults to "right".
      sort_by_length: Whether to sort the input data by input-token length. Sorting is encouraged to improve the performance of the inference model. Defaults to True.
      unk_token: If True, set the pad_token to the tokenizer's unk_token. Defaults to False.
Initialization
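A minimal construction sketch, assuming the module is importable under the nemo_curator package as the Bases line suggests; the model identifier and argument values below are illustrative examples, not defaults beyond those documented above:

```python
from nemo_curator.stages.text.models.tokenizer import TokenizerStage

# All values are examples; only the keyword names come from the signature above.
stage = TokenizerStage(
    model_identifier="bert-base-uncased",  # any Hugging Face model id
    text_field="text",                     # column holding raw documents
    max_seq_length=512,                    # cap the returned sequence length
    sort_by_length=True,                   # sort inputs to speed up inference
)
```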
- inputs() tuple[list[str], list[str]]
Define stage input requirements.
  Returns (tuple[list[str], list[str]]): Tuple of (required_attributes, required_columns) where:
      - required_attributes: List of task attributes that must be present
      - required_columns: List of attributes within the data that must be present
- load_cfg(local_files_only: bool = True) transformers.AutoConfig
- outputs() tuple[list[str], list[str]]
Define stage output specification.
  Returns (tuple[list[str], list[str]]): Tuple of (output_attributes, output_columns) where:
      - output_attributes: List of task attributes this stage adds/modifies
      - output_columns: List of attributes within the data that this stage adds/modifies
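Together, inputs() and outputs() describe the stage's data contract, which a pipeline can check before execution. A hedged sketch using the stage constructed earlier; the expectation that the default text field appears in the required columns is an inference from the Args above, not stated on this page:

```python
# Inspect the declared I/O contract of the stage.
required_attrs, required_columns = stage.inputs()
output_attrs, output_columns = stage.outputs()

# With the default text_field="text", the input contract presumably
# includes the "text" column (inference, not documented here).
assert "text" in required_columns
```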
- process(
      batch: nemo_curator.tasks.DocumentBatch,
  )
  Process a task and return the result.

  Args:
      task (X): Input task to process

  Returns (Y | list[Y]):
      - Single task: For 1-to-1 transformations
      - List of tasks: For 1-to-many transformations (e.g., readers)
      - None: If the task should be filtered out
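A minimal end-to-end sketch, assuming DocumentBatch accepts task_id, dataset_name, and a pandas DataFrame via its data field (check your nemo_curator version for the exact constructor):

```python
import pandas as pd

from nemo_curator.tasks import DocumentBatch

# Hypothetical input batch with the column named by text_field.
df = pd.DataFrame({"text": ["Hello world.", "Tokenize me, please."]})
batch = DocumentBatch(task_id="batch_0", dataset_name="example", data=df)

stage.setup()                     # load the tokenizer once per worker
tokenized = stage.process(batch)  # DocumentBatch with token columns added
```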
- ray_stage_spec() dict[str, Any]
  Get Ray configuration for this stage.

  Note: This is only used for Ray Data, which is an experimental backend. The keys are defined in RayStageSpecKeys in backends/experimental/ray_data/utils.py.

  Returns (dict[str, Any]): Dictionary containing Ray-specific configuration
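This only matters when running on the experimental Ray Data backend; elsewhere the returned dictionary can simply be inspected:

```python
# Ray Data-specific settings; keys are defined by RayStageSpecKeys.
spec = stage.ray_stage_spec()
print(spec)
```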
- setup(
      _: nemo_curator.backends.base.WorkerMetadata | None = None,
  )
  Setup method called once before processing begins. Override this method to perform any initialization that should happen once per worker.

  Args:
      worker_metadata (WorkerMetadata, optional): Information about the worker (provided by some backends)
- setup_on_node(
      _node_info: nemo_curator.backends.base.NodeInfo | None = None,
      _worker_metadata: nemo_curator.backends.base.WorkerMetadata | None = None,
  )
  Setup method called once per node in distributed settings. Override this method to perform node-level initialization.

  Args:
      node_info (NodeInfo, optional): Information about the node (provided by some backends)
      worker_metadata (WorkerMetadata, optional): Information about the worker (provided by some backends)