LLM Services#

class nemo_curator.LLMClient#

Interface representing a client that connects to an LLM inference server and makes requests synchronously.

class nemo_curator.AsyncLLMClient#

Interface representing a client that connects to an LLM inference server and makes requests asynchronously.

class nemo_curator.OpenAIClient(openai_client: openai.OpenAI)#

A wrapper around OpenAI’s Python client for querying models.

query_reward_model(
*,
messages: Iterable,
model: str,
) → dict#

Prompts an LLM reward model to score a conversation between a user and an assistant.

Parameters:

messages – The conversation to calculate a score for. Should be formatted like:

[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]

model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

Returns:

A mapping of score_name -> score
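For example, a single exchange can be scored like this. This is a minimal sketch assuming an OpenAI-compatible endpoint serving a reward model; the base URL, environment variable, model name, and helper names below are illustrative assumptions, not fixed by this API:

```python
import os


def build_conversation(user_text: str, assistant_text: str) -> list:
    """Format one user/assistant exchange the way query_reward_model expects."""
    return [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]


def score_exchange(user_text: str, assistant_text: str) -> dict:
    # Lazy imports so the formatting helper above stays dependency-free.
    from openai import OpenAI
    from nemo_curator import OpenAIClient

    client = OpenAIClient(
        OpenAI(
            base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
            api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var
        )
    )
    # The model must be a reward model; this name is an example, not a guarantee.
    return client.query_reward_model(
        messages=build_conversation(user_text, assistant_text),
        model="nvidia/nemotron-4-340b-reward",
    )
```

The returned dict maps score names (which depend on the reward model) to numeric scores.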

class nemo_curator.AsyncOpenAIClient(async_openai_client: openai.AsyncOpenAI)#

A wrapper around OpenAI’s Python async client for querying models.

async query_reward_model(
*,
messages: Iterable,
model: str,
) → dict#

Prompts an LLM reward model to score a conversation between a user and an assistant.

Parameters:

messages – The conversation to calculate a score for. Should be formatted like:

[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]

model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

Returns:

A mapping of score_name -> score
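Because this client is asynchronous, many conversations can be scored concurrently. A sketch of such a fan-out; the `score_many` helper is hypothetical, and the client construction, endpoint, and model name in the comments are assumptions:

```python
import asyncio


async def score_many(client, conversations, model: str) -> list:
    """Score each conversation concurrently via an async reward-model client.

    `client` is expected to expose an awaitable query_reward_model(messages=..., model=...),
    as AsyncOpenAIClient does.
    """
    tasks = [
        client.query_reward_model(messages=conv, model=model)
        for conv in conversations
    ]
    # gather preserves input order, so result[i] scores conversations[i].
    return await asyncio.gather(*tasks)


# Usage sketch (endpoint and model name are assumptions):
#   from openai import AsyncOpenAI
#   from nemo_curator import AsyncOpenAIClient
#   client = AsyncOpenAIClient(AsyncOpenAI(api_key=..., base_url=...))
#   scores = asyncio.run(score_many(client, convs, "nvidia/nemotron-4-340b-reward"))
```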

class nemo_curator.NemoDeployClient(nemo_deploy: NemoQueryLLM)#

A wrapper around NemoQueryLLM for querying models during synthetic data generation.

query_reward_model(
*,
messages: Iterable,
model: str,
) → dict#

Prompts an LLM reward model to score a conversation between a user and an assistant.

Parameters:

messages – The conversation to calculate a score for. Should be formatted like:

[{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]

model – The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

Returns:

A mapping of score_name -> score
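The returned score mapping can drive downstream selection, such as keeping the highest-scoring candidate response. A sketch in which `best_response` and the "helpfulness" key are illustrative assumptions (actual score names depend on the reward model), and `score_fn` would be a wrapper around any client's query_reward_model; the NemoQueryLLM constructor arguments in the comments are likewise assumed:

```python
def best_response(candidate_conversations, score_fn, key: str = "helpfulness"):
    """Pick the conversation whose reward score on `key` is highest.

    score_fn(messages) -> dict of score_name -> score, e.g. a lambda wrapping
    query_reward_model with a fixed reward-model name.
    """
    return max(
        candidate_conversations,
        key=lambda conv: score_fn(conv)[key],
    )


# Usage sketch (NemoQueryLLM constructor arguments are assumptions):
#   from nemo.deploy.nlp import NemoQueryLLM
#   from nemo_curator import NemoDeployClient
#   client = NemoDeployClient(NemoQueryLLM(url="localhost:8000", model_name="reward"))
#   best = best_response(
#       convs,
#       lambda m: client.query_reward_model(messages=m, model="reward"),
#   )
```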