

What it does

  • Authenticates requests with API tokens sent in the Authorization header, checked against team policy.
  • Persists projects and workflows.
  • Enqueues every chat submission as a Redis job before any execution begins — the queue-first invariant (sketched after this list).
  • Streams job events to clients over Server-Sent Events.
  • Probes models and wakes Ollama on demand.
  • Routes non-LLM intents (rules, facts, commands).
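
The following sketch illustrates the queue-first invariant. The key names (`jobs:pending`, `job:<id>`), the job hash layout, and the use of ioredis are illustrative assumptions, not the real schema; the point is that the job is made durable in Redis before anything executes.

```ts
import Redis from "ioredis";
import { randomUUID } from "node:crypto";

const redis = new Redis();

// Every chat submission becomes a durable Redis job *before* any model work runs.
export async function submitChat(prompt: string): Promise<string> {
  const jobId = randomUUID();
  await redis
    .multi()
    .hset(`job:${jobId}`, { status: "queued", prompt, createdAt: Date.now() }) // assumed fields
    .lpush("jobs:pending", jobId) // assumed queue name
    .exec();
  return jobId; // returned to the client as job_id; a worker picks it up later
}
```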

Endpoints

| Method | Path | Purpose |
| --- | --- | --- |
| POST | /v1/chat | Submit a prompt. Returns job_id. |
| GET | /v1/jobs/:id | Current job status. |
| POST | /v1/jobs/:id/resume | Resume a job paused on an ask_user interrupt. |
| GET | /v1/jobs/:id/stream | SSE: replays the Redis backlog, then streams the live pubsub channel until the job finishes or errors. |
| GET/POST | /v1/workflows | Workflow graph CRUD. |
| GET/POST | /v1/projects | Project CRUD. |
| GET | /v1/models | Available models — local Ollama plus configured Claude / OpenAI-compatible. |
| GET/POST | /v1/ollama/* | Ollama health and wake. |
| POST | /v1/command/intents | Non-LLM intent routing. |
| GET | /healthz | Liveness. |
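
A minimal client sketch against the first two endpoints: submit a prompt, then poll the job. The base URL, the `prompt` field name, and the token value are assumptions; only `POST /v1/chat` returning `job_id` and `GET /v1/jobs/:id` come from the table above.

```ts
const BASE = "https://api.example.com"; // hypothetical host
const headers = {
  Authorization: "Bearer <api-token>", // API token in the Authorization header
  "Content-Type": "application/json",
};

// Submit a prompt; the server enqueues a job and returns its id.
async function submit(prompt: string): Promise<string> {
  const res = await fetch(`${BASE}/v1/chat`, {
    method: "POST",
    headers,
    body: JSON.stringify({ prompt }), // field name is an assumption
  });
  const body = await res.json();
  return body.job_id;
}

// Poll the current status of a job.
async function jobStatus(jobId: string): Promise<unknown> {
  const res = await fetch(`${BASE}/v1/jobs/${jobId}`, { headers });
  return res.json();
}
```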

SSE invariant

The stream handler must replay the durable backlog before joining the pubsub stream so that a reconnecting client never misses events.
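
A sketch of that replay-then-live ordering, assuming an Express handler, a backlog list `job:<id>:events`, and a pubsub channel `job:<id>:events:live` (all key names are illustrative). Subscribing before the replay ensures nothing published mid-replay is dropped; deduplication by event id is omitted for brevity.

```ts
import Redis from "ioredis";
import type { Request, Response } from "express";

const redis = new Redis();

export async function streamJob(req: Request, res: Response) {
  const { id } = req.params;
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });

  // 1. Subscribe first, buffering live events until the replay finishes.
  const sub = new Redis();
  const buffered: string[] = [];
  let replaying = true;
  await sub.subscribe(`job:${id}:events:live`);
  sub.on("message", (_channel, msg) => {
    if (replaying) buffered.push(msg);
    else res.write(`data: ${msg}\n\n`);
  });

  // 2. Replay the durable backlog so a reconnecting client misses nothing.
  const backlog = await redis.lrange(`job:${id}:events`, 0, -1);
  for (const msg of backlog) res.write(`data: ${msg}\n\n`);

  // 3. Flush anything that arrived during the replay, then go live.
  for (const msg of buffered) res.write(`data: ${msg}\n\n`);
  replaying = false;

  req.on("close", () => sub.quit());
}
```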

Errors

Errors include job and request correlation metadata so clients can trace any failure back to the job and the original request.
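
A hypothetical error envelope a client might consume; the exact field names (`code`, `message`, `job_id`, `request_id`) are assumptions — the documentation only guarantees that job and request correlation metadata is present.

```ts
interface ApiError {
  code: string;
  message: string;
  job_id?: string;     // which job the failure belongs to
  request_id?: string; // the originating request, for log correlation
}

// Log a failure with its correlation metadata so it can be traced end to end.
function logFailure(err: ApiError): void {
  console.error(
    `[job=${err.job_id ?? "-"} request=${err.request_id ?? "-"}] ${err.code}: ${err.message}`,
  );
}
```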