Tool Call Parsing (Dynamo)

Connect Dynamo to external tools and services using Dynamo’s built-in tool call parsers

You can connect Dynamo to external tools and services using tool calling. By providing a list of available functions, Dynamo can choose to output function arguments for the relevant function(s), which you can then execute to augment the prompt with relevant external information.

Tool calling is controlled using the tool_choice and tools request parameters.

This page covers parser names for the default Dynamo-native path. If Dynamo does not list a parser for your model, see Tool Call Parsing (Engine Fallback).

Prerequisites

To enable this feature, set the following flag when launching the backend worker:

  • --dyn-tool-call-parser: select the tool call parser from the supported list below

```bash
# <backend> can be sglang, trtllm, vllm, etc. based on your installation
python -m dynamo.<backend> --help
```

If you do not provide a tool call parser, Dynamo falls back to default tool call parsing based on the `<TOOLCALL>` and `<|python_tag|>` tool tags.
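To illustrate what that default path looks for, here is a minimal, hypothetical sketch of extracting a JSON payload from `<TOOLCALL>` tags. This is not Dynamo's actual parser, and the payload shape shown is only an assumption for illustration:

```python
import json
import re

def extract_default_tool_calls(text: str):
    """Illustrative only: pull JSON payloads out of <TOOLCALL>...</TOOLCALL> tags."""
    return [json.loads(m) for m in re.findall(r"<TOOLCALL>(.*?)</TOOLCALL>", text, re.DOTALL)]

# Hypothetical model output; the exact payload format varies by model.
raw = 'Sure, checking. <TOOLCALL>[{"name": "get_weather", "arguments": {"location": "Paris, FR"}}]</TOOLCALL>'
print(extract_default_tool_calls(raw))
```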

If your model’s default chat template doesn’t support tool calling, but the model itself does, you can specify a custom chat template per worker with `python -m dynamo.<backend> --custom-jinja-template </path/to/template.jinja>`.

If your model also emits reasoning content that should be separated from normal output, see Reasoning Parsing (Dynamo) for the supported --dyn-reasoning-parser values.

Supported Tool Call Parsers

The table below lists the currently supported tool call parsers in Dynamo’s registry. The Upstream name column shows where the vLLM or SGLang parser name differs from Dynamo’s — relevant when using --dyn-chat-processor vllm or sglang (see Tool Call Parsing (Engine Fallback)). A blank upstream column means the same name works everywhere. Dynamo-only means no upstream parser exists for this format.

| Parser name | Models | Upstream name | Notes |
| --- | --- | --- | --- |
| `kimi_k2` | Kimi K2 Instruct/Thinking, Kimi K2.5 | | Pair with `--dyn-reasoning-parser kimi` or `kimi_k25` |
| `qwen3_coder` | Qwen3.5, Qwen3-Coder | | XML `<tool_call><function=...>` |
| `deepseek_v4` | DeepSeek V4 Pro / Flash | vLLM: `deepseek_v4`; SGLang: `deepseekv4` | DSML tags (`<\|DSML\|tool_calls>...`). Aliases: `deepseek-v4`, `deepseekv4` |
| `deepseek_v3` | DeepSeek V3, DeepSeek R1-0528+ | SGLang: `deepseekv3` | Special Unicode markers |
| `deepseek_v3_1` | DeepSeek V3.1 | Dynamo-only | JSON separators |
| `deepseek_v3_2` | DeepSeek V3.2+ | Dynamo-only | DSML tags (`<\|DSML\|function_calls>...`) |
| `default` | (fallback) | Dynamo-only | Empty JSON config (no start/end tokens). Prefer a model-specific parser for production use. |
| `gemma4` | Google Gemma 4 (thinking models) | vLLM: `gemma4` | Custom non-JSON grammar with `<\|"\|>` string delimiters and `<\|tool_call>...<tool_call\|>` markers. Aliases: `gemma-4`. Pair with `--dyn-reasoning-parser gemma4` and `--custom-jinja-template examples/chat_templates/gemma4_tool.jinja` |
| `glm47` | GLM-4.5, GLM-4.7 | Dynamo-only | XML `<arg_key>`/`<arg_value>` |
| `harmony` | gpt-oss-20b / -120b | Dynamo-only | Harmony channel format |
| `hermes` | Qwen2.5-*, QwQ-32B, Qwen3-Instruct, Qwen3-Think, NousHermes-2/3 | vLLM: `qwen2_5`; SGLang: `qwen25` (for Qwen models) | `<tool_call>` JSON |
| `jamba` | Jamba 1.5 / 1.6 / 1.7 | Dynamo-only | `<tool_calls>` JSON |
| `llama3_json` | Llama 3 / 3.1 / 3.2 / 3.3 Instruct | | `<\|python_tag\|>` tool syntax |
| `minimax_m2` | MiniMax M2 / M2.1 | vLLM: `minimax` | XML `<minimax:tool_call>` |
| `mistral` | Mistral / Mixtral / Mistral-Nemo, Magistral | | `[TOOL_CALLS]...[/TOOL_CALLS]` |
| `nemotron_deci` | Nemotron-Super / -Ultra / -Deci, Llama-Nemotron-Ultra / -Super | Dynamo-only | `<TOOLCALL>` JSON |
| `nemotron_nano` | Nemotron-Nano | Dynamo-only | Alias for `qwen3_coder` |
| `phi4` | Phi-4, Phi-4-mini, Phi-4-mini-reasoning | vLLM: `phi4_mini_json` | `functools[...]` JSON |
| `pythonic` | Llama 4 (Scout / Maverick) | | Python-list tool syntax |

For Kimi K2.5 thinking models, pair --dyn-tool-call-parser kimi_k2 with --dyn-reasoning-parser kimi_k25 from Reasoning Parsing (Dynamo) so that both <think> blocks and tool calls are parsed correctly from the same response.
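As a concrete sketch of that pairing (the backend and model path below are placeholders, not tested commands — substitute your own):

```bash
# Sketch: serve a Kimi K2.5 thinking model with both parsers enabled
# (backend and checkpoint path are placeholders)
python -m dynamo.vllm \
  --model <your-kimi-k2.5-checkpoint> \
  --dyn-tool-call-parser kimi_k2 \
  --dyn-reasoning-parser kimi_k25
```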

Examples

Launch Dynamo Frontend and Backend

```bash
# launch backend worker
python -m dynamo.vllm --model openai/gpt-oss-20b --dyn-tool-call-parser harmony

# launch frontend worker
python -m dynamo.frontend
```

Tool Calling Request Example

```python
from openai import OpenAI
import json

client = OpenAI(base_url="http://localhost:8081/v1", api_key="dummy")

def get_weather(location: str, unit: str):
    return f"Getting the weather for {location} in {unit}..."

tool_functions = {"get_weather": get_weather}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco in Celsius?"}],
    tools=tools,
    tool_choice="auto",
    max_tokens=10000
)

tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
print(f"Result: {tool_functions[tool_call.name](**json.loads(tool_call.arguments))}")
```
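To complete the loop, the parsed call can be dispatched and its result fed back to the model as a `tool` role message on the next turn. The helper below is a sketch of that dispatch step written against plain dicts for illustration (the OpenAI SDK returns typed objects, so adapt accordingly); `run_tool_calls` is a name introduced here, not a Dynamo API:

```python
import json

def run_tool_calls(tool_calls, tool_functions):
    """Sketch: execute each parsed tool call and build `tool` role messages
    that can be appended to the conversation for the model's next turn."""
    messages = []
    for call in tool_calls:
        fn = tool_functions[call["function"]["name"]]
        result = fn(**json.loads(call["function"]["arguments"]))
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result),
        })
    return messages
```

The returned messages would be appended after the assistant message that contained the tool calls, and the conversation sent back to `client.chat.completions.create` for the model's final answer.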