aiq.profiler.callbacks.langchain_callback_handler#
Attributes#
Classes#
- LangchainProfilerHandler — Callback Handler that tracks NIM info.
Module Contents#
- logger#
- class LangchainProfilerHandler#
Bases:
- langchain_core.callbacks.AsyncCallbackHandler
- aiq.profiler.callbacks.base_callback_class.BaseProfilerCallback
Callback Handler that tracks NIM info.
- raise_error = True#
Whether to raise an error if an exception occurs.
- run_inline = False#
Whether to run the callback inline.
- _lock#
- last_call_ts#
- step_manager#
- _state#
- _run_id_to_model_name#
- _run_id_to_llm_input#
- _run_id_to_tool_input#
- _run_id_to_start_time#
- _extract_token_base_model( ) → aiq.profiler.callbacks.token_usage_base_model.TokenUsageBaseModel#
- async on_llm_start( ) → None#
Run when LLM starts running.
- ATTENTION: This method is called for non-chat models (regular LLMs). If
you’re implementing a handler for a chat model, you should use on_chat_model_start instead.
- Args:
  - serialized (Dict[str, Any]): The serialized LLM.
  - prompts (List[str]): The prompts.
  - run_id (UUID): The run ID. This is the ID of the current run.
  - parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
  - tags (Optional[List[str]]): The tags.
  - metadata (Optional[Dict[str, Any]]): The metadata.
  - kwargs (Any): Additional keyword arguments.
- async on_chat_model_start(
- serialized: dict[str, Any],
- messages: list[list[langchain_core.messages.BaseMessage]],
- *,
- run_id: uuid.UUID,
- parent_run_id: uuid.UUID | None = None,
- tags: list[str] | None = None,
- metadata: dict[str, Any] | None = None,
- **kwargs: Any,
- )#
Run when a chat model starts running.
- ATTENTION: This method is called for chat models. If you’re implementing
a handler for a non-chat model, you should use on_llm_start instead.
- Args:
  - serialized (Dict[str, Any]): The serialized chat model.
  - messages (List[List[BaseMessage]]): The messages.
  - run_id (UUID): The run ID. This is the ID of the current run.
  - parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
  - tags (Optional[List[str]]): The tags.
  - metadata (Optional[Dict[str, Any]]): The metadata.
  - kwargs (Any): Additional keyword arguments.
- async on_llm_end(
- response: langchain_core.outputs.LLMResult,
- **kwargs: Any,
- )#
Collect token usage.
- async on_tool_start(
- serialized: dict[str, Any],
- input_str: str,
- *,
- run_id: uuid.UUID,
- parent_run_id: uuid.UUID | None = None,
- tags: list[str] | None = None,
- metadata: dict[str, Any] | None = None,
- inputs: dict[str, Any] | None = None,
- **kwargs: Any,
- )#
Run when the tool starts running.
- Args:
  - serialized (Dict[str, Any]): The serialized tool.
  - input_str (str): The input string.
  - run_id (UUID): The run ID. This is the ID of the current run.
  - parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
  - tags (Optional[List[str]]): The tags.
  - metadata (Optional[Dict[str, Any]]): The metadata.
  - inputs (Optional[Dict[str, Any]]): The inputs.
  - kwargs (Any): Additional keyword arguments.
- async on_tool_end( ) → Any#
Run when the tool ends running.
- Args:
  - output (Any): The output of the tool.
  - run_id (UUID): The run ID. This is the ID of the current run.
  - parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
  - tags (Optional[List[str]]): The tags.
  - kwargs (Any): Additional keyword arguments.