
Documentation Index

Fetch the complete documentation index at: https://docs.gdilabs.io/llms.txt

Use this file to discover all available pages before exploring further.
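
For example, the index can be pulled with nothing beyond the Python standard library:

    import urllib.request

    # Download the documentation index and print the pages it lists.
    with urllib.request.urlopen("https://docs.gdilabs.io/llms.txt") as resp:
        print(resp.read().decode("utf-8"))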

What it is

  • A platform for AI workforce + human governance, not a chat product.
  • Every prompt becomes a Redis-backed job. Every step emits a typed event. Every escalation is logged. Every paid call is cost-tracked.
  • The design centre is throughput, auditability, and multi-agent governance.

Components

  • Mother AI — stateless ingest service (Rust/Axum). Authenticates requests, persists projects/workflows, enqueues jobs, streams events.
  • Worker — orchestrator (Python/LangGraph). Pulls jobs, runs the context engine, dispatches through an L1–L4 agent hierarchy, emits typed events.
  • Frontend — dashboard (Next.js / React). Renders live job streams, projects, workflows, and a 3D knowledge atlas.
  • Ingest — knowledge-hub pipeline. Reads markdown, chunks, embeds, upserts into Qdrant (sketched after this list).
  • Knowledge Hub MCP — read-only stdio MCP server. Exposes the knowledge hub to any MCP client (Claude Desktop, Claude Code, agents, partner tools).
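
A rough sketch of the ingest pipeline's shape. The chunking width, stand-in embedding, collection name, source directory, and Qdrant URL below are assumptions, not the pipeline's actual configuration:

    import hashlib
    from pathlib import Path
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    def chunk(text: str, size: int = 800) -> list[str]:
        # Naive fixed-width chunking; the real pipeline's strategy may differ.
        return [text[i:i + size] for i in range(0, len(text), size)]

    def embed(piece: str) -> list[float]:
        # Stand-in vector so the sketch runs end to end; the real pipeline
        # calls whatever embedding model the knowledge hub is configured with.
        return [b / 255 for b in hashlib.sha256(piece.encode()).digest()]  # 32 dims

    client = QdrantClient(url="http://localhost:6333")   # assumed local Qdrant
    client.create_collection(
        collection_name="knowledge_hub",                  # assumed collection name
        vectors_config=VectorParams(size=32, distance=Distance.COSINE),
    )

    points, next_id = [], 0
    for path in Path("knowledge-hub").glob("**/*.md"):    # assumed source directory
        for piece in chunk(path.read_text(encoding="utf-8")):
            points.append(PointStruct(
                id=next_id,
                vector=embed(piece),
                payload={"source": str(path), "text": piece},
            ))
            next_id += 1

    client.upsert(collection_name="knowledge_hub", points=points)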

Core flow

  1. Client submits a prompt to Mother AI.
  2. Mother AI authenticates, persists project metadata, enqueues a job in Redis.
  3. Worker pulls the job, runs the context engine, classifies, dispatches through L1–L4.
  4. Worker emits typed AgentEvent JSON to a Redis stream (see the sketch after this list).
  5. Frontend (or any client) subscribes via Mother AI’s GET /v1/jobs/:id/stream and renders backlog + live events.
  6. Worker persists job state, audit trail, and escalation history.
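
Steps 4 and 5 can be illustrated with redis-py; the event fields, stream key, and job id below are illustrative, not the Worker's actual AgentEvent schema:

    import json
    import time
    import redis  # redis-py

    r = redis.Redis()  # assumes a local Redis instance

    # Hypothetical event shape and stream key; the real schema and key naming
    # are defined by the Worker.
    event = {
        "type": "agent_step",
        "job_id": "job-123",
        "level": "L1",
        "agent": "team_lead",
        "detail": {"action": "classify", "category": "implementation"},
        "ts": time.time(),
    }
    r.xadd("jobs:job-123:events", {"event": json.dumps(event)})

    # A consumer (e.g. the stream endpoint) replays the backlog, then tails
    # the stream for live events.
    for entry_id, fields in r.xrange("jobs:job-123:events"):
        print(entry_id, json.loads(fields[b"event"]))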

Hierarchy

  • L1 — Team Lead (free model): orchestration only — classify, plan, delegate.
  • L2 — Managerial roles: Architect, Tech Lead, Release Manager, QA / Security / Adversarial leadership.
  • L3 — Acceptance: free-model verifier; pass/fail with deltas, not rewrites.
  • L4 — Executor: paid-model file-write specialist; emits net-new files and surgical edits.
Routing is policy-driven: housekeeping prompts → free; architecture, implementation, high-risk → paid. Same-level conflicts retry up to 3 times before escalating upward.
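
A toy rendering of that policy and retry rule; the category labels and the conflict signalling are assumptions, not the platform's real policy table:

    FREE_CATEGORIES = {"housekeeping"}   # assumed classifier labels
    MAX_SAME_LEVEL_RETRIES = 3

    def route(category: str) -> str:
        # Housekeeping stays free; architecture, implementation, and high-risk
        # prompts (everything else here) route to paid models.
        return "free" if category in FREE_CATEGORIES else "paid"

    def resolve(run_step, escalate):
        # Retry a contested step up to 3 times at the same level, then hand
        # the conflict upward to the next level.
        for _ in range(MAX_SAME_LEVEL_RETRIES):
            outcome = run_step()
            if outcome != "conflict":
                return outcome
        return escalate()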

Models

  • Free: Ollama-served models (local or remote), woken on demand.
  • Paid: Claude and OpenAI-compatible endpoints. Health-ranked provider fallback automatically deprioritises failing providers.
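
A minimal sketch of health-ranked fallback; the Provider shape, the health score, and the injected send callable are illustrative, not the orchestrator's actual implementation:

    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        successes: int = 0
        failures: int = 0

        @property
        def health(self) -> float:
            total = self.successes + self.failures
            return 1.0 if total == 0 else self.successes / total

    def call_with_fallback(providers, request, send):
        # Try providers healthiest-first; repeated failures sink a provider's
        # ranking, so it is naturally de-preferred on later calls.
        for provider in sorted(providers, key=lambda p: p.health, reverse=True):
            try:
                result = send(provider, request)  # transport supplied by the caller
                provider.successes += 1
                return result
            except Exception:
                provider.failures += 1
        raise RuntimeError("all paid providers failed")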

Integrate

  • Submit a job: POST /v1/chat returns a job_id. Subscribe via GET /v1/jobs/:id/stream (see the sketch after this list).
  • Resume an interrupted job: POST /v1/jobs/:id/resume for jobs paused on an ask_user interrupt.
  • Drive retrieval externally: install the Knowledge Hub MCP server and add it to Claude Desktop or Claude Code.
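
A minimal client sketch tying the three endpoints together with the requests library; the base URL, auth header, request payload fields, and the stream's line-delimited framing are assumptions:

    import requests

    BASE = "https://mother-ai.example.com"             # assumed base URL
    HEADERS = {"Authorization": "Bearer <api-token>"}  # assumed auth scheme

    # Submit a prompt; the response carries the job_id.
    job = requests.post(f"{BASE}/v1/chat", headers=HEADERS,
                        json={"prompt": "Summarise open escalations"}).json()
    job_id = job["job_id"]

    # Subscribe to the event stream and print backlog plus live events.
    with requests.get(f"{BASE}/v1/jobs/{job_id}/stream", headers=HEADERS,
                      stream=True) as resp:
        for line in resp.iter_lines():
            if line:
                print(line.decode("utf-8"))

    # If the job paused on an ask_user interrupt, answer and resume it.
    requests.post(f"{BASE}/v1/jobs/{job_id}/resume", headers=HEADERS,
                  json={"answer": "Yes, proceed."})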