Documentation Index

Fetch the complete documentation index at: https://docs.gdilabs.io/llms.txt

Use this file to discover all available pages before exploring further.

What it does

  • Pulls jobs from Redis.
  • Runs the context engine before routing: pulls knowledge-hub augments and RAG-retrieved snippets into the prompt.
  • Routes through the L1–L4 hierarchy.
  • Tracks escalation state and per-conflict retry counts.
  • Emits typed AgentEvent JSON to a Redis list and pubsub channel.
  • Persists status, cost, and audit trail.
  • Provisions per-project workspaces (clone, scaffold, trigger Vercel deploy).
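The event-emission step above can be sketched as a small helper. This is a minimal illustration, not the worker's actual schema: the field names on `AgentEvent` and the `agent:events` key are hypothetical, and only the list-push-plus-publish pattern is taken from the description.

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class AgentEvent:
    # Hypothetical fields; the real typed schema lives in the worker source.
    job_id: str
    kind: str        # e.g. "status", "escalation", "cost"
    payload: dict
    ts: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps(asdict(self))

def emit(client, event: AgentEvent, key: str = "agent:events") -> None:
    """Push the event onto a Redis list (durable backlog) and broadcast
    the same JSON on a pubsub channel (live subscribers)."""
    data = event.to_json()
    client.rpush(key, data)
    client.publish(key, data)
```

Writing every event to both a list and a pubsub channel lets late-joining consumers replay the backlog while live dashboards subscribe in real time.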

Model dispatch

Free and paid model dispatch with health-ranked provider fallback.
  • Free providers: Ollama (local or remote), including thinking models.
  • Paid providers: Claude and OpenAI-compatible endpoints.
A failing provider is de-preferred for subsequent jobs until its health recovers.
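The health-ranked fallback can be sketched as a sort over providers. This is an assumption-laden sketch: the `Provider` type, the consecutive-failure counter as the health signal, and the free-before-paid tiebreak are illustrative choices, not the documented implementation.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    paid: bool
    failures: int = 0  # hypothetical health signal: recent consecutive failures

def rank_providers(providers, prefer_free: bool = True):
    """Order providers healthiest-first; at equal health, free providers
    (paid=False sorts before paid=True) come first when prefer_free is set."""
    return sorted(providers, key=lambda p: (p.failures, p.paid if prefer_free else False))

def record_result(provider: Provider, ok: bool) -> None:
    # A failure sinks the provider in the ranking for subsequent jobs;
    # a success resets the counter, restoring its preference.
    provider.failures = 0 if ok else provider.failures + 1
```

Each job then tries providers in ranked order, so a flapping endpoint is automatically de-preferred without being removed from the pool.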

Hierarchy invariants

  • Sub-job nesting depth is capped at 2.
  • Each conflict is retried at most 3 times before it is escalated upward.
  • The L4 executor wraps token streaming with periodic heartbeats so long generations stay alive end-to-end.
  • Knowledge-hub edits hot-reload into running workers via a refresher thread; no worker restart is needed.
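The first two invariants (depth cap of 2, retry cap of 3 before escalation) can be expressed as small guards. A minimal sketch: the function names, the exception, and the retry-counter dict are hypothetical; only the numeric caps and the retry-then-escalate behavior come from the list above.

```python
MAX_SUBJOB_DEPTH = 2       # sub-job nesting cap from the invariants
MAX_CONFLICT_RETRIES = 3   # per-conflict retries before escalating

class DepthExceeded(Exception):
    """Raised when a job tries to nest sub-jobs past the cap."""

def spawn_subjob(parent_depth: int) -> int:
    """Return the child job's depth, refusing to nest past the cap."""
    if parent_depth >= MAX_SUBJOB_DEPTH:
        raise DepthExceeded(f"sub-job depth capped at {MAX_SUBJOB_DEPTH}")
    return parent_depth + 1

def handle_conflict(retries: dict, conflict_id: str) -> str:
    """Count a retry for this conflict; escalate once the cap is exhausted."""
    retries[conflict_id] = retries.get(conflict_id, 0) + 1
    return "retry" if retries[conflict_id] <= MAX_CONFLICT_RETRIES else "escalate"
```

Keeping the counters per conflict ID means one stubborn conflict escalates without burning retries budgeted for unrelated conflicts in the same job.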