# services.nemo_client

## Module Contents

### Classes

| Class | Description |
|---|---|
| `NemoDeployClient` | A wrapper around NemoQueryLLM for querying models in synthetic data generation |

## API
- class services.nemo_client.NemoDeployClient(nemo_deploy: NemoQueryLLM)

Bases: `services.model_client.LLMClient`

A wrapper around NemoQueryLLM for querying models in synthetic data generation.

Initialization
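A minimal construction sketch. The import paths, endpoint URL, and model name below are assumptions (inferred from the module paths referenced on this page and a typical NeMo deployment), not guarantees of this API:

```python
# Sketch: wrap a NemoQueryLLM handle in a NemoDeployClient.
# Assumptions: NemoQueryLLM lives in nemo.deploy.nlp, a model is already
# serving at localhost:8000, and the model name is a placeholder.
from nemo.deploy.nlp import NemoQueryLLM

from nemo_curator.services.nemo_client import NemoDeployClient

model = "mistralai/mixtral-8x7b-instruct-v0.1"
client = NemoDeployClient(NemoQueryLLM(url="localhost:8000", model_name=model))
```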
- query_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- conversation_formatter: nemo_curator.services.conversation_formatter.ConversationFormatter | None = None,
- max_tokens: int | None = None,
- n: int | None = None,
- seed: int | None = None,
- stop: str | None | list[str] = [],
- stream: bool = False,
- temperature: float | None = None,
- top_k: int | None = None,
- top_p: float | None = None,
- )
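For illustration, a hedged sketch of calling query_model with the client built above. Because NemoQueryLLM sends raw prompt strings to the endpoint, a conversation_formatter is typically supplied to render the chat messages into the model's prompt template; the Mixtral8x7BFormatter import and the sampling values here are assumptions, not part of this signature:

```python
# Sketch of a query_model call (continues from the client above).
# Assumption: Mixtral8x7BFormatter is importable from nemo_curator.synthetic
# and matches the deployed model's prompt template.
from nemo_curator.synthetic import Mixtral8x7BFormatter

responses = client.query_model(
    messages=[{"role": "user", "content": "Write a limerick about GPUs."}],
    model=model,
    conversation_formatter=Mixtral8x7BFormatter(),
    max_tokens=256,
    temperature=0.7,
    top_p=0.9,
)
print(responses[0])  # expected: a list of generated strings
```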
- query_reward_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- )
Prompts an LLM reward model to score a conversation between a user and an assistant.

Args:
- messages: The conversation to calculate a score for. Should be formatted like: [{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
- model: The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

Returns: A mapping of score_name -> score.
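A usage sketch built from the docstring above; the reward model name is a placeholder and must point at a deployed reward model, not a regular LLM:

```python
# Sketch of scoring a conversation with query_reward_model.
scores = client.query_reward_model(
    messages=[
        {"role": "user", "content": "Write a sentence"},
        {"role": "assistant", "content": "This is a sentence"},
    ],
    model="nvidia/nemotron-4-340b-reward",  # placeholder reward model name
)
print(scores)  # a mapping of score_name -> score, per the Returns description
```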