LLM Services
- class nemo_curator.LLMClient
Interface for a client that connects to an LLM inference server and makes requests synchronously.
- class nemo_curator.AsyncLLMClient
Interface for a client that connects to an LLM inference server and makes requests asynchronously.
- class nemo_curator.OpenAIClient(openai_client: openai.OpenAI)
A wrapper around OpenAI’s Python client for querying models.
- query_reward_model(*, messages: Iterable, model: str)
Prompts an LLM reward model to score a conversation between a user and an assistant.
- Parameters:
messages – The conversation to calculate a score for. Should be formatted like:
[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.
- Returns:
A mapping of score_name -> score
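A minimal usage sketch for the synchronous client. The endpoint URL, API key, and reward-model name below are placeholders, not part of the API; substitute the values for your own OpenAI-compatible deployment.

```python
from openai import OpenAI
from nemo_curator import OpenAIClient

# Placeholder endpoint and key for an OpenAI-compatible server hosting a reward model.
openai_client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="<your API key>",
)
client = OpenAIClient(openai_client)

conversation = [
    {"role": "user", "content": "Write a sentence"},
    {"role": "assistant", "content": "This is a sentence"},
]

# Returns a mapping of score_name -> score for the conversation.
scores = client.query_reward_model(
    messages=conversation,
    model="nvidia/nemotron-4-340b-reward",  # placeholder reward-model name
)
print(scores)
```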
- class nemo_curator.AsyncOpenAIClient(async_openai_client: openai.AsyncOpenAI)
A wrapper around OpenAI’s asynchronous Python client for querying models.
- async query_reward_model(*, messages: Iterable, model: str)
Prompts an LLM reward model to score a conversation between a user and an assistant.
- Parameters:
messages – The conversation to calculate a score for. Should be formatted like:
[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.
- Returns:
A mapping of score_name -> score
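A minimal async sketch mirroring the synchronous example above; the endpoint URL, API key, and reward-model name are again placeholders for your deployment.

```python
import asyncio

from openai import AsyncOpenAI
from nemo_curator import AsyncOpenAIClient


async def main():
    # Placeholder endpoint and key for an OpenAI-compatible server hosting a reward model.
    async_openai_client = AsyncOpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key="<your API key>",
    )
    client = AsyncOpenAIClient(async_openai_client)

    conversation = [
        {"role": "user", "content": "Write a sentence"},
        {"role": "assistant", "content": "This is a sentence"},
    ]

    # Awaits the reward scores; useful when scoring many conversations concurrently.
    scores = await client.query_reward_model(
        messages=conversation,
        model="nvidia/nemotron-4-340b-reward",  # placeholder reward-model name
    )
    print(scores)


asyncio.run(main())
```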
- class nemo_curator.NemoDeployClient(nemo_deploy: NemoQueryLLM)
A wrapper around NemoQueryLLM for querying models in synthetic data generation.
- query_reward_model(*, messages: Iterable, model: str)
Prompts an LLM reward model to score a conversation between a user and an assistant.
- Parameters:
messages – The conversation to calculate a score for. Should be formatted like:
[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.
- Returns:
A mapping of score_name -> score
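A minimal sketch assuming a reward model is already served through NeMo Deploy. The Triton server URL, served model name, and NemoQueryLLM constructor arguments shown here are assumptions; verify them against the NeMo Deploy documentation for your installed version.

```python
from nemo.deploy.nlp import NemoQueryLLM  # import path assumed from NeMo Deploy
from nemo_curator import NemoDeployClient

# Placeholder Triton endpoint and served model name.
nemo_query = NemoQueryLLM(url="localhost:8000", model_name="reward_model")
client = NemoDeployClient(nemo_query)

conversation = [
    {"role": "user", "content": "Write a sentence"},
    {"role": "assistant", "content": "This is a sentence"},
]

# Returns a mapping of score_name -> score for the conversation.
scores = client.query_reward_model(
    messages=conversation,
    model="reward_model",  # placeholder reward-model name
)
print(scores)
```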