nemo_deploy.llm.hf_deployable#
Module Contents#
Classes#
HuggingFaceLLMDeploy – A Triton inference server compatible wrapper for HuggingFace models.
Data#
API#
- nemo_deploy.llm.hf_deployable.LOGGER = 'getLogger(...)'#
- nemo_deploy.llm.hf_deployable.SUPPORTED_TASKS = ['text-generation']#
- class nemo_deploy.llm.hf_deployable.HuggingFaceLLMDeploy(
- hf_model_id_path: Optional[str] = None,
- hf_peft_model_id_path: Optional[str] = None,
- tokenizer_id_path: Optional[str] = None,
- model: Optional[transformers.AutoModel] = None,
- tokenizer: Optional[transformers.AutoTokenizer] = None,
- tokenizer_padding=True,
- tokenizer_truncation=True,
- tokenizer_padding_side='left',
- task: Optional[str] = 'text-generation',
- torch_dtype: Optional[torch.dtype] = 'auto',
- device_map: Optional[str] = 'auto',
- **hf_kwargs,
- )#
Bases:
nemo_deploy.ITritonDeployable

A Triton inference server compatible wrapper for HuggingFace models.
This class provides a standardized interface for deploying HuggingFace models in Triton inference server. It supports various NLP tasks and handles model loading, inference, and deployment configurations.
- Parameters:
hf_model_id_path (Optional[str]) – Path to the HuggingFace model or model identifier. Can be a local path or a model ID from HuggingFace Hub.
hf_peft_model_id_path (Optional[str]) – Path to the PEFT model or model identifier. Can be a local path or a model ID from HuggingFace Hub.
tokenizer_id_path (Optional[str]) – Path to the tokenizer or tokenizer identifier. If None, will use the same path as hf_model_id_path.
model (Optional[AutoModel]) – Pre-loaded HuggingFace model.
tokenizer (Optional[AutoTokenizer]) – Pre-loaded HuggingFace tokenizer.
tokenizer_padding (bool) – Whether to enable padding in tokenizer. Defaults to True.
tokenizer_truncation (bool) – Whether to enable truncation in tokenizer. Defaults to True.
tokenizer_padding_side (str) – Which side to pad on (‘left’ or ‘right’). Defaults to ‘left’.
task (str) – HuggingFace task type (e.g., “text-generation”). Defaults to “text-generation”.
torch_dtype (torch.dtype) – Data type for the model. Defaults to "auto".
device_map (str) – Device map for the model. Defaults to "auto".
**hf_kwargs – Additional keyword arguments to pass to HuggingFace model loading.
Initialization
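A minimal construction sketch, assuming the documented constructor arguments above; the model ID below is a placeholder and can be any local path or HuggingFace Hub identifier:

```python
from nemo_deploy.llm.hf_deployable import HuggingFaceLLMDeploy

# Placeholder model ID; swap in your own local path or HF Hub identifier.
deployable = HuggingFaceLLMDeploy(
    hf_model_id_path="meta-llama/Llama-3.1-8B-Instruct",
    task="text-generation",
    torch_dtype="auto",
    device_map="auto",
)
```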
- _load(
- torch_dtype: Optional[torch.dtype] = 'auto',
- device_map: Optional[str] = 'auto',
- **hf_kwargs,
- )#
Load the HuggingFace pipeline with the specified model and task.
This method initializes the HuggingFace AutoModel classes using the provided model configuration and task type. It handles the model and tokenizer loading process.
- Parameters:
torch_dtype (torch.dtype) – Data type for the model. Defaults to “auto”.
device_map (str) – Device map for the model. Defaults to “auto”.
**hf_kwargs – Additional keyword arguments to pass to the HuggingFace model loading.
- Raises:
AssertionError – If task is not specified.
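For orientation, the snippet below approximates the kind of loading `_load` performs for the `text-generation` task using standard `transformers` APIs; it is an illustrative sketch, not the method's exact internals:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder; _load uses hf_model_id_path / tokenizer_id_path
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # corresponds to the torch_dtype argument
    device_map="auto",    # corresponds to the device_map argument
)
```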
- generate(**kwargs: Any) → List[str]#
Generate text based on the provided input prompts.
This method processes input prompts through the loaded pipeline and generates text according to the specified parameters.
- Parameters:
**kwargs –
Generation parameters including:
text_inputs: List of input prompts
max_length: Maximum number of tokens to generate
num_return_sequences: Number of sequences to generate per prompt
temperature: Sampling temperature
top_k: Number of highest probability tokens to consider
top_p: Cumulative probability threshold for token sampling
do_sample: Whether to use sampling, default is False for greedy decoding
echo: Whether to return prompt + generated text (True) or just generated text (False)
return_full_text: Whether to return full text or only generated part
- Returns:
If output_logits and output_scores are False: List[str] – a list of generated texts, one for each input prompt. If output_logits or output_scores is True: Dict – a dictionary containing:
sentences: List of generated texts
logits: List of logits
scores: List of scores
input_lengths: List of input token lengths (used for echo processing)
- Return type:
List[str] or Dict[str, Any]
- Raises:
RuntimeError – If the pipeline is not initialized.
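A usage sketch for `generate`, assuming a constructed and loaded deployable as in the earlier example; the parameter values shown are illustrative settings for the options listed above:

```python
outputs = deployable.generate(
    text_inputs=["Summarize what a Triton model repository is."],
    max_length=64,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    do_sample=True,
    num_return_sequences=1,
)
print(outputs[0])  # List[str] when logits/scores outputs are disabled
```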
- generate_other_ranks()#
Generate function for ranks other than rank 0.
- property get_triton_input#
- property get_triton_output#
- triton_infer_fn(**inputs: numpy.ndarray)#
- _compute_logprobs(
- prompts: List[str],
- output_infer: Dict[str, Any],
- compute_logprob: bool,
- n_top_logprobs: int,
- echo: bool,
- )#
Compute log probabilities and top log probabilities from model scores. Used by ray_infer_fn to provide OpenAI-API-compatible output for evaluations.
This method processes the raw scores from model generation to compute:
Log probabilities for chosen tokens
Top-k log probabilities for each position (if requested)
It handles both prompt tokens (when echo=True) and generated tokens.
- Parameters:
prompts – List of input prompts
output_infer – Dictionary containing model outputs including scores, sequences, and input_lengths
compute_logprob – Whether to compute log probabilities
n_top_logprobs – Number of top log probabilities to return (0 to disable)
echo – Whether to include prompt token log probabilities
- Returns:
log_probs_list: List of log probabilities for each sample (None if not computed)
top_logprobs_list: List of top-k log probabilities for each sample (None if not computed)
- Return type:
Tuple[Optional[List], Optional[List]]
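As a rough illustration of the per-token computation described here (a sketch with a hypothetical helper, not the method's actual signature), chosen-token and top-k log probabilities can be derived from generation scores like this:

```python
import torch.nn.functional as F

def _example_logprobs(step_scores, chosen_token_ids, n_top_logprobs=0):
    """Illustrative helper: step_scores is one [vocab_size] tensor per
    generated position; chosen_token_ids are the tokens actually emitted."""
    log_probs, top_logprobs = [], []
    for scores, token_id in zip(step_scores, chosen_token_ids):
        step_lp = F.log_softmax(scores.float(), dim=-1)
        log_probs.append(step_lp[token_id].item())
        if n_top_logprobs > 0:
            values, indices = step_lp.topk(n_top_logprobs)
            top_logprobs.append(dict(zip(indices.tolist(), values.tolist())))
    return log_probs, top_logprobs
```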
- ray_infer_fn(inputs: Dict[Any, Any])#
Perform inference using Ray with dictionary inputs and outputs.
- Parameters:
inputs (Dict[Any, Any]) –
Dictionary containing input parameters:
prompts: List of input prompts
temperature: Sampling temperature (optional)
top_k: Number of highest probability tokens to consider (optional)
top_p: Cumulative probability threshold for token sampling (optional)
max_tokens: Maximum number of tokens to generate (optional)
compute_logprob: Whether to compute log probabilities (optional)
n_top_logprobs: Number of top log probabilities to return (optional)
echo: Whether to echo the prompt in output (optional)
- Returns:
Dictionary containing:
sentences: List of generated texts
log_probs: Optional list of log probabilities if compute_logprob is True
top_logprobs: Optional list of top log probabilities if n_top_logprobs > 0
- Return type:
Dict[str, Any]
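A call sketch for `ray_infer_fn` using the dictionary keys documented above (the values are illustrative):

```python
result = deployable.ray_infer_fn({
    "prompts": ["What does the HuggingFaceLLMDeploy wrapper do?"],
    "temperature": 0.2,
    "top_p": 0.9,
    "max_tokens": 128,
    "compute_logprob": True,
    "n_top_logprobs": 5,
    "echo": False,
})
print(result["sentences"][0])
# "log_probs" / "top_logprobs" are populated when the corresponding flags are set
```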
- _infer_fn_ray(
- prompts,
- temperature=1.0,
- top_k=1,
- top_p=0.0,
- num_tokens_to_generate=256,
- output_logits=False,
- output_scores=False,
- compute_logprob=False,
- n_top_logprobs=0,
- echo=False,
- cast_output_func=None,
- )#
Common internal function for inference operations.
- Parameters:
prompts – List of input prompts
temperature – Sampling temperature
top_k – Number of highest probability tokens to consider
top_p – Cumulative probability threshold for token sampling
num_tokens_to_generate – Maximum number of tokens to generate
output_logits – Whether to output logits
output_scores – Whether to output scores
compute_logprob – Whether to compute log probabilities
n_top_logprobs – Number of top log probabilities to return
echo – Whether to echo the prompt in output
cast_output_func – Optional function to cast output values
- Returns:
Dict containing inference results with raw outputs
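Since `_infer_fn_ray` is internal, direct use is uncommon; the sketch below only illustrates how the documented parameters map onto a call and is not a public-API recommendation:

```python
raw = deployable._infer_fn_ray(
    prompts=["Hello"],
    temperature=1.0,
    top_k=1,
    top_p=0.0,
    num_tokens_to_generate=32,
    compute_logprob=True,
    n_top_logprobs=3,
    echo=False,
)
# raw is a dict of raw inference outputs (generated text plus any requested
# logprob/score fields), per the description above.
```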