Introduction
Optimized for OpenClaw and OpenAI SDK
OpenAI-compatible and OpenClaw-ready, with API key management, a budget guard, and request logs.
SeeLLM is an AI gateway optimized for OpenClaw and OpenAI-style completions. This page covers the core features: an OpenAI/OpenClaw-compatible endpoint, dashboard-enabled model IDs, budget limits, and real-time request logs.
Getting Started
If your app or tool supports a custom base URL and a Bearer token, migration is usually straightforward: point it at SeeLLM, pick an enabled model ID, and send the same request shape you already use with OpenAI-style clients.
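The migration described above can be sketched with the standard library alone. The base URL, API key, and model ID below are placeholders, not real SeeLLM values; nothing is sent here, the point is only the request shape an OpenAI-style client already produces.

```python
import json
import urllib.request

# Hypothetical values: substitute your real SeeLLM base URL and API key.
BASE_URL = "https://seellm.example.com/v1"
API_KEY = "YOUR_SEELLM_API_KEY"

def build_chat_request(model, messages):
    """Build an OpenAI-style chat/completions request aimed at the gateway."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "some-enabled-model-id",
    [{"role": "user", "content": "Hello"}],
)
print(req.full_url)  # https://seellm.example.com/v1/chat/completions
```

An existing OpenAI SDK client typically needs only its base URL and API key swapped to achieve the same thing.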
Why use SeeLLM?
Fully compatible with OpenClaw and OpenAI SDK traffic.
Manage API keys, `budget_limit`, and request logs from the dashboard.
Model IDs come from the dashboard or `GET /v1/models`.
Unified endpoint
Point your client at the SeeLLM gateway and keep the familiar `chat/completions` request shape.
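For reference, a minimal chat/completions request body in the usual OpenAI shape; the model ID is a placeholder for one of your enabled models:

```json
{
  "model": "some-enabled-model-id",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
```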
Dashboard-driven model IDs
Only enabled models are returned by `/v1/models` and accepted by the gateway.
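A short sketch of pulling usable IDs out of a `/v1/models` response. The response body below is a hypothetical example in the standard OpenAI list shape, not real SeeLLM output:

```python
import json

# Hypothetical /v1/models response body; the gateway returns only the
# models enabled in the dashboard.
sample_body = """
{
  "object": "list",
  "data": [
    {"id": "model-a", "object": "model"},
    {"id": "model-b", "object": "model"}
  ]
}
"""

def enabled_model_ids(body):
    """Extract the usable model IDs from a /v1/models response body."""
    return [m["id"] for m in json.loads(body).get("data", [])]

print(enabled_model_ids(sample_body))  # ['model-a', 'model-b']
```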
Budget guard + logs
The gateway checks the remaining balance and `budget_limit` before forwarding each request, then records it for review.
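On the client side, a rejected request should stop further sends rather than retry blindly. The status codes below are assumptions for illustration only; check the gateway's actual error responses for the real values:

```python
# Assumed payment/quota-style status codes for a budget-guard rejection.
ASSUMED_BUDGET_ERROR_CODES = {402, 429}

def should_stop_sending(status_code):
    """Treat payment/quota-style errors as the budget guard declining."""
    return status_code in ASSUMED_BUDGET_ERROR_CODES

print(should_stop_sending(200))  # False
print(should_stop_sending(402))  # True
```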