# NVIDIA AgentIQ Guides

- Create and Customize Workflows
  - Prerequisites
  - Running a Workflow
    - Using the `aiq run` Command
    - Using the `aiq eval` Command
    - Using the `aiq serve` Command
    - Using the Python API
  - Understanding the Workflow Configuration File
  - Customizing a Workflow
    - Adding Tools to a Workflow
    - Alternate Method
    - Using a Web Search Tool
  - Creating a New Tool and Workflow
    - Customizing the Configuration Object
    - Customizing the Tool Function
    - Creating the Workflow Configuration
    - Understanding `pyproject.toml`
    - Rebuild with Changes
    - Running the Workflow
- Share Components
  - Enabling Local and Remote Discovery
  - Package Distribution
    - Building a Wheel Package
    - Publish to a Remote Registry
  - Share Source Code
  - Summary
- Evaluate
  - Evaluating a Workflow
  - Understanding the Evaluation Configuration
  - Understanding the Dataset Format
  - Understanding the Evaluator Configuration
    - Display all evaluators
    - Ragas Evaluator
    - Trajectory Evaluator
  - Workflow Output
  - Evaluator Output
  - Adding Custom Evaluators
  - Additional Evaluation Options
  - Profiling and Performance Monitoring of AgentIQ Workflows
- Custom Evaluator
  - Existing Evaluators
  - Extending AgentIQ with Custom Evaluators
    - Evaluator Configuration
    - Understanding `EvalInput` and `EvalOutput`
    - Similarity Evaluator
    - Display all evaluators
    - Evaluation configuration
    - Evaluating the workflow
    - Evaluation results
  - Summary
- Observing a Workflow with Phoenix
  - Step 1: Modify Workflow Configuration
  - Step 2: Start the Phoenix Server
  - Step 3: Run Your Workflow
  - Step 4: View Traces Data in Phoenix
  - Debugging
- Use User Interface and API Server
  - User Interface Features
    - Generate Non-Streaming Transaction
    - Generate Streaming Transaction
    - Chat Non-Streaming Transaction
    - Chat Streaming Transaction
    - Choosing between Streaming and Non-Streaming
  - Walk-through
    - Start the AgentIQ Server
    - Verify the AgentIQ Server is Running
    - Launch the AgentIQ User Interface
    - Connect the User Interface to the AgentIQ Server
  - Using HTTP API
    - Settings Options
  - Simple Calculator Example Conversation
  - AgentIQ API Server Interaction Guide
- Profile a Workflow
  - Prerequisites
  - Defining a Workflow
  - Configuring the Workflow
  - Running the Profiler
  - Analyzing the Profiling Results
    - Plotting Prompt vs Completion Tokens for LLMs
    - Analyzing Workflow Runtimes
    - Analyzing Token Efficiency
    - Understanding Where the Models Spend Time
    - Analyzing RAGAS Metrics
  - Conclusion
- Adding an LLM Provider
  - Provider Types
  - Defining an LLM Provider
  - Registering the Provider
  - LLM Clients
  - Packaging the Provider and Client