nemo_curator.models.client.openai_client


Module Contents

Classes

AsyncOpenAIClient: A wrapper around OpenAI’s Python async client for querying models
OpenAIClient: A wrapper around OpenAI’s Python client for querying models

API

class nemo_curator.models.client.openai_client.AsyncOpenAIClient(
max_concurrent_requests: int = 5,
max_retries: int = 3,
base_delay: float = 1.0,
kwargs = {}
)

Bases: AsyncLLMClient

A wrapper around OpenAI’s Python async client for querying models

timeout = kwargs.pop('timeout', 120)
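The constructor parameters (`max_concurrent_requests`, `max_retries`, `base_delay`) suggest a bounded-concurrency, exponential-backoff request pattern. Below is a minimal sketch of that pattern in plain `asyncio`; the function name and structure are illustrative assumptions, not the library's internals:

```python
import asyncio


async def query_with_limits(tasks, max_concurrent_requests=5, max_retries=3, base_delay=1.0):
    """Run coroutine factories with a concurrency cap and exponential backoff.

    Hypothetical helper mirroring AsyncOpenAIClient's constructor parameters;
    it is NOT part of nemo_curator.
    """
    semaphore = asyncio.Semaphore(max_concurrent_requests)

    async def run_one(make_call):
        async with semaphore:  # at most max_concurrent_requests calls in flight
            for attempt in range(max_retries):
                try:
                    return await make_call()
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # retries exhausted, surface the error
                    # back off: base_delay, 2*base_delay, 4*base_delay, ...
                    await asyncio.sleep(base_delay * 2 ** attempt)

    return await asyncio.gather(*(run_one(t) for t in tasks))
```

With `max_retries=3`, a call that fails twice and succeeds on the third attempt still returns normally; only a third failure propagates.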
async nemo_curator.models.client.openai_client.AsyncOpenAIClient._query_model_impl(
messages: collections.abc.Iterable,
model: str,
conversation_formatter: nemo_curator.models.client.llm_client.ConversationFormatter | None = None,
generation_config: nemo_curator.models.client.llm_client.GenerationConfig | dict | None = None
) -> list[str]

Internal implementation of query_model without retry/concurrency logic.

nemo_curator.models.client.openai_client.AsyncOpenAIClient.setup() -> None

Set up the client.

class nemo_curator.models.client.openai_client.OpenAIClient(
kwargs = {}
)

Bases: LLMClient

A wrapper around OpenAI’s Python client for querying models

timeout = kwargs.pop('timeout', 120)
nemo_curator.models.client.openai_client.OpenAIClient.query_model(
messages: collections.abc.Iterable,
model: str,
conversation_formatter: nemo_curator.models.client.llm_client.ConversationFormatter | None = None,
generation_config: nemo_curator.models.client.llm_client.GenerationConfig | dict | None = None
) -> list[str]
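`query_model` accepts an iterable of chat messages in the OpenAI role/content format. A hedged usage sketch follows; the `messages` structure is standard OpenAI chat format, while the client kwargs and model name are hypothetical placeholders:

```python
# Standard OpenAI-style chat messages: a list of role/content dicts.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
]

# Hypothetical call shape (client kwargs and model name are assumptions):
# client = OpenAIClient(api_key="...", base_url="...")
# client.setup()
# responses = client.query_model(messages=messages, model="some-model")
```

`generation_config` may be passed as a `GenerationConfig` or a plain dict, so sampling settings such as temperature can likely be supplied as ordinary keys, though the exact accepted keys are not documented here.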
nemo_curator.models.client.openai_client.OpenAIClient.setup() -> None

Set up the client.