# services.openai_client

## Module Contents

### Classes

| Class | Description |
| --- | --- |
| `AsyncOpenAIClient` | A wrapper around OpenAI’s Python async client for querying models |
| `OpenAIClient` | A wrapper around OpenAI’s Python client for querying models |

## API
- class services.openai_client.AsyncOpenAIClient(async_openai_client: openai.AsyncOpenAI)

  Bases: services.model_client.AsyncLLMClient

  A wrapper around OpenAI’s Python async client for querying models
- async query_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- conversation_formatter: nemo_curator.services.conversation_formatter.ConversationFormatter | None = None,
- max_tokens: int | None | openai._types.NotGiven = NOT_GIVEN,
- n: int | None | openai._types.NotGiven = NOT_GIVEN,
- seed: int | None | openai._types.NotGiven = NOT_GIVEN,
- stop: str | None | list[str] | openai._types.NotGiven = NOT_GIVEN,
- stream: bool | None | openai._types.NotGiven = False,
- temperature: float | None | openai._types.NotGiven = NOT_GIVEN,
- top_k: int | None = None,
- top_p: float | None | openai._types.NotGiven = NOT_GIVEN,
- )
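A minimal usage sketch of the async client. The import path, the model name, and the assumption that `query_model` returns the model's response(s) are illustrative; adjust them to your installation (for example, `from nemo_curator.services import AsyncOpenAIClient`).

```python
import asyncio
import os

# A conversation in the OpenAI chat format expected by query_model.
messages = [{"role": "user", "content": "Write a haiku about data curation."}]

async def main() -> None:
    # Import paths are assumptions; adjust to your install.
    from openai import AsyncOpenAI
    from services.openai_client import AsyncOpenAIClient

    client = AsyncOpenAIClient(AsyncOpenAI())
    responses = await client.query_model(
        messages=messages,
        model="gpt-4o-mini",  # any chat-capable model name
        temperature=0.7,
        max_tokens=64,
    )
    print(responses)

# Requires an OpenAI API key; skipped otherwise.
if os.environ.get("OPENAI_API_KEY"):
    asyncio.run(main())
```

Because `query_model` is a coroutine, it must be awaited (or driven with `asyncio.run`); this is the only behavioral difference from the synchronous `OpenAIClient` below.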
- async query_reward_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- )

  Prompts an LLM reward model to score a conversation between a user and an assistant.

  Args:
    messages: The conversation to calculate a score for. Should be formatted like:
      [{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
    model: The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

  Returns:
    A mapping of score_name -> score
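A sketch of scoring a conversation with the async reward-model method. The endpoint URL and model name below are assumptions (a hosted Nemotron reward model is one option); substitute whichever service hosts your reward model. Note that the conversation must contain both the user and the assistant turns, since the reward model scores the assistant's reply in context.

```python
import asyncio
import os

# A complete user/assistant exchange, matching the documented format.
conversation = [
    {"role": "user", "content": "Write a sentence"},
    {"role": "assistant", "content": "This is a sentence"},
]

async def score(conv: list[dict]) -> dict:
    # Import paths, endpoint, and model name are assumptions;
    # adjust to your install and reward-model provider.
    from openai import AsyncOpenAI
    from services.openai_client import AsyncOpenAIClient

    client = AsyncOpenAIClient(
        AsyncOpenAI(base_url="https://integrate.api.nvidia.com/v1")
    )
    return await client.query_reward_model(
        messages=conv,
        model="nvidia/nemotron-4-340b-reward",  # illustrative reward model
    )

# Requires an API key for the hosting service; skipped otherwise.
if os.environ.get("OPENAI_API_KEY"):
    print(asyncio.run(score(conversation)))
```

The returned mapping's score names depend entirely on the reward model (for example, attributes such as helpfulness or coherence), so downstream code should not hard-code a particular key set.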
- class services.openai_client.OpenAIClient(openai_client: openai.OpenAI)

  Bases: services.model_client.LLMClient

  A wrapper around OpenAI’s Python client for querying models
- query_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- conversation_formatter: nemo_curator.services.conversation_formatter.ConversationFormatter | None = None,
- max_tokens: int | None | openai._types.NotGiven = NOT_GIVEN,
- n: int | None | openai._types.NotGiven = NOT_GIVEN,
- seed: int | None | openai._types.NotGiven = NOT_GIVEN,
- stop: str | None | list[str] | openai._types.NotGiven = NOT_GIVEN,
- stream: bool | None | openai._types.NotGiven = False,
- temperature: float | None | openai._types.NotGiven = NOT_GIVEN,
- top_k: int | None = None,
- top_p: float | None | openai._types.NotGiven = NOT_GIVEN,
- )
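The synchronous client mirrors the async signature but blocks until the response arrives, so it needs no event loop. A sketch, with illustrative import path and model name, exercising the `n` and `seed` parameters:

```python
import os

prompt = [{"role": "user", "content": "Summarize: The cat sat on the mat."}]

def run_query(messages: list[dict]):
    # Import paths are assumptions; adjust to your install.
    from openai import OpenAI
    from services.openai_client import OpenAIClient

    client = OpenAIClient(OpenAI())
    return client.query_model(
        messages=messages,
        model="gpt-4o-mini",  # any chat-capable model name
        n=2,       # request two candidate completions
        seed=42,   # best-effort reproducibility across calls
    )

# Requires an OpenAI API key; skipped otherwise.
if os.environ.get("OPENAI_API_KEY"):
    for candidate in run_query(prompt):
        print(candidate)
```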
- query_reward_model(
- *,
- messages: collections.abc.Iterable,
- model: str,
- )

  Prompts an LLM reward model to score a conversation between a user and an assistant.

  Args:
    messages: The conversation to calculate a score for. Should be formatted like:
      [{"role": "user", "content": "Write a sentence"}, {"role": "assistant", "content": "This is a sentence"}, ...]
    model: The name of the model that should be used to calculate the reward. Must be a reward model, not a regular LLM.

  Returns:
    A mapping of score_name -> score
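Since the return value is a mapping of score_name -> score, a typical curation step thresholds those scores to filter samples. A sketch under the same assumptions as above (import path, endpoint, and model name are illustrative); the filtering helper is hypothetical and purely local, so it runs without any API access:

```python
import os

def passes_threshold(scores: dict[str, float], minimum: float = 3.0) -> bool:
    # Hypothetical filter: keep a sample only if every reported
    # score clears the bar, whatever score names the model returns.
    return all(value >= minimum for value in scores.values())

def score_conversation(conversation: list[dict]) -> dict:
    # Import paths, endpoint, and model name are assumptions;
    # adjust to your install and reward-model provider.
    from openai import OpenAI
    from services.openai_client import OpenAIClient

    client = OpenAIClient(OpenAI(base_url="https://integrate.api.nvidia.com/v1"))
    return client.query_reward_model(
        messages=conversation,
        model="nvidia/nemotron-4-340b-reward",  # illustrative reward model
    )

# The pure helper can be exercised with a hand-written score mapping:
example = {"helpfulness": 4.0, "coherence": 2.5}
print(passes_threshold(example))  # False: coherence is below 3.0
```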