r5e

Labs solve intelligence. We solve how to apply it. r5e is a governed substrate for turning intent into outcomes. It provides a declarative API for orchestrating AI agents with cryptographic provenance from the first prompt to the final artifact.

Orchestration Engine

Governed agent sessions with explicit authority, bounded delegation, and mandatory independent review.

AI Gateway

Audit every AI interaction. Enforce policy. Prevent shadow AI. Produce verifiable compliance evidence.
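To make the gateway idea concrete, here is a minimal sketch of the pattern in Python: every model call is intercepted, checked against a policy, and recorded to an audit log. This is an illustration only, not r5e's API; the `Gateway` and `PolicyViolation` names, the callable policy, and the list-backed log are all hypothetical stand-ins.

```python
from datetime import datetime, timezone


class PolicyViolation(Exception):
    """Raised when a request is denied by policy."""


class Gateway:
    """Intercepts every model call: enforce policy first, then record the interaction."""

    def __init__(self, policy, audit_log):
        self.policy = policy        # callable: request -> bool (allow/deny)
        self.audit_log = audit_log  # append-only list of audit events

    def call(self, model, request: str) -> str:
        if not self.policy(request):
            # Denied calls are still audited, so there is no shadow path.
            self.audit_log.append({"request": request, "allowed": False})
            raise PolicyViolation(request)
        response = model(request)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "request": request,
            "allowed": True,
        })
        return response


log_events = []
gw = Gateway(policy=lambda r: "secret" not in r, audit_log=log_events)
gw.call(lambda r: r.upper(), "hello")  # allowed, and audited
```

Because every call (allowed or denied) lands in the same log, the log itself becomes the compliance evidence.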

Three planes

Everything in r5e maps to one of three architectural planes:
Plane            | Purpose                                        | Examples
Durable Control  | Resources and graphs — semantic truth          | Graph, Node, Task, AgentSession, Policy
Live Execution   | OTP processes — operational state              | Agent sessions, harness processes
Historical Audit | Events and artifacts — what happened, provably | Hash-chained log, content-addressed store
Everything else — dashboards, CLIs, these docs — is a projection.
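The Historical Audit plane rests on two standard primitives: a hash chain (each event commits to the hash of its predecessor, so tampering anywhere breaks verification downstream) and content addressing (an artifact's storage key is the hash of its bytes). The sketch below shows both in Python. It is an illustration of the general technique under assumed encodings (JSON with sorted keys, SHA-256), not r5e's implementation.

```python
import hashlib
import json


def content_address(artifact: bytes) -> str:
    """Content-addressed store key: the SHA-256 of the bytes themselves."""
    return hashlib.sha256(artifact).hexdigest()


class HashChainedLog:
    """Append-only event log where each record commits to its predecessor."""

    GENESIS = "0" * 64

    def __init__(self):
        self.events = []       # list of (digest, record) pairs
        self.head = self.GENESIS

    def append(self, event: dict) -> str:
        record = {"prev": self.head, "event": event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.events.append((digest, record))
        self.head = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any edit to any past event breaks the chain."""
        prev = self.GENESIS
        for digest, record in self.events:
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if record["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True


audit = HashChainedLog()
audit.append({"action": "prompt", "actor": "agent-1"})
audit.append({"action": "artifact", "ref": content_address(b"final output")})
```

Verification needs no trusted timestamps or external witnesses: replaying the hashes either reproduces the recorded head or it does not.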

Principles

  1. Transparency from the root node. Every action traceable to its origin.
  2. Labs solve intelligence; we solve applying it. Intelligence is pluggable. Governance is ours.
  3. Extensibility is architecture. Everything goes through the same admission pipeline.
  4. Design for symbiosis. Works at every point on the autonomy spectrum.
  5. Contracts, not implementations. The spec says what and why.

Status

r5e is in private alpha. Built on Elixir/OTP. Open source release planned under AGPL.

Read the Primer

What r5e is, how it works, and what your role is in the system.