# Documentation Index
Fetch the complete documentation index at: https://docs.gdilabs.io/llms.txt
Use this file to discover all available pages before exploring further.
## What it does
- Authenticates requests with API tokens passed in the `Authorization` header, checked against team policy.
- Persists projects and workflows.
- Enqueues every chat submission as a Redis job before any execution — the queue-first invariant.
- Streams job events to clients over Server-Sent Events.
- Probes models and wakes Ollama on demand.
- Routes non-LLM intents (rules, facts, commands).
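The queue-first invariant above means a chat submission is enqueued as a job before any model execution starts, and the caller gets a `job_id` right away. A minimal in-memory sketch of that flow, using a `deque` as a stand-in for the Redis queue (all names here are hypothetical, not the service's actual internals):

```python
import uuid
from collections import deque

# Hypothetical in-memory stand-ins for the Redis job queue and job store.
queue = deque()
jobs = {}

def submit_chat(prompt: str) -> str:
    """Queue-first: enqueue the job, then return its id immediately.

    No execution happens inside the request handler.
    """
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "queued", "prompt": prompt}
    queue.append(job_id)   # enqueued before any execution
    return job_id

def worker_step() -> str:
    """A worker pops the next job and only then executes it."""
    job_id = queue.popleft()
    jobs[job_id]["status"] = "running"
    # ... model call or intent routing would happen here ...
    jobs[job_id]["status"] = "done"
    return job_id
```

In this sketch, polling `GET /v1/jobs/:id` corresponds to reading `jobs[job_id]["status"]`, which moves from `queued` to `running` to `done` as a worker picks the job up.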
## Endpoints
| Method | Path | Purpose |
|---|---|---|
| POST | /v1/chat | Submit a prompt. Returns a `job_id`. |
| GET | /v1/jobs/:id | Current job status. |
| POST | /v1/jobs/:id/resume | Resume a job paused on an `ask_user` interrupt. |
| GET | /v1/jobs/:id/stream | SSE: replays the Redis backlog, then streams the live pub/sub channel until the job finishes or errors. |
| GET/POST | /v1/workflows | Workflow graph CRUD. |
| GET/POST | /v1/projects | Project CRUD. |
| GET | /v1/models | Available models: local Ollama plus configured Claude / OpenAI-compatible. |
| GET/POST | /v1/ollama/* | Ollama health and wake. |
| POST | /v1/command/intents | Non-LLM intent routing. |
| GET | /healthz | Liveness probe. |
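The stream endpoint delivers job events as Server-Sent Events: a replay of the Redis backlog first, then live events. A minimal sketch of parsing such a stream on the client side, assuming each event's payload arrives on `data:` lines separated by blank lines (a simplified reading of the SSE wire format; the actual event payloads are not specified here):

```python
def parse_sse(raw: str) -> list[str]:
    """Split a raw SSE stream into event payloads.

    Events are separated by blank lines; each 'data:' line within an
    event carries one line of that event's payload.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        data_lines = [line[5:].lstrip()
                      for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events

# Hypothetical stream: backlog replay followed by live events.
sample = (
    'data: {"status": "queued"}\n\n'
    'data: {"status": "running"}\n\n'
    'data: {"status": "done"}\n\n'
)
```

A real client would read the response incrementally (e.g. with an SSE library) rather than buffering the whole stream, but the framing rules are the same.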