Model Server#

Model servers provide stateless LLM inference behind OpenAI-compatible endpoints. They implement ResponsesAPIModel and expose two endpoints.
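Because the endpoints are OpenAI-compatible, any HTTP client can talk to them. A minimal sketch of building a request body for such a server; the base URL, model name, and `/chat/completions` path here are illustrative assumptions, not details from this page:

```python
import json

# Assumptions for illustration only: the server's base URL, the model
# name, and the /chat/completions path are placeholders.
BASE_URL = "http://localhost:8000/v1"
MODEL = "my-model"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat request body for the model server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }

body = build_chat_request("Hello!")
# POST json.dumps(body) to f"{BASE_URL}/chat/completions" with any HTTP client.
print(json.dumps(body))
```

Since the server is stateless, each request must carry the full message history; nothing is remembered between calls.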

Backend Guides#

Guides for OpenAI and Azure OpenAI Responses API models and more are coming soon!

vLLM#

Self-hosted inference with vLLM for maximum control: see the vLLM Model Server guide.
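As one way to stand up a self-hosted backend, vLLM ships an OpenAI-compatible HTTP server. A minimal launch sketch; the model name and port are placeholders, and the exact flags depend on your vLLM version:

```shell
# Serve a model behind vLLM's OpenAI-compatible server.
# The model name and port below are placeholders, not from this page.
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

# Once up, OpenAI-style clients can target http://localhost:8000/v1.
```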