r5e
Labs solve intelligence. We solve how to apply it. r5e is a governed substrate for turning intent into outcomes. It provides a declarative API for orchestrating AI agents, with cryptographic provenance from the first prompt to the final artifact.

Orchestration Engine
Governed agent sessions with explicit authority, bounded delegation, and mandatory independent review.
AI Gateway
Audit every AI interaction. Enforce policy. Prevent shadow AI. Produce verifiable compliance evidence.
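The gateway idea reduces to a choke point: every model call is checked against policy and recorded before it is forwarded. A minimal sketch of that pattern (in Python for illustration; the names `POLICY`, `gateway_call`, and `send` are hypothetical, not r5e's actual API):

```python
import hashlib
import time

# Hypothetical policy: which models are permitted and how large a prompt may be.
POLICY = {"allowed_models": {"gpt-4o"}, "max_prompt_chars": 4000}

audit_log = []  # in r5e this would feed the hash-chained audit plane

def gateway_call(model: str, prompt: str, send) -> str:
    """Enforce policy, forward the call, and record verifiable evidence."""
    if model not in POLICY["allowed_models"]:
        raise PermissionError(f"model {model!r} not permitted by policy")
    if len(prompt) > POLICY["max_prompt_chars"]:
        raise PermissionError("prompt exceeds policy limit")
    response = send(model, prompt)  # provider call, injected so it can be stubbed
    audit_log.append({
        "ts": time.time(),
        "model": model,
        # Store digests, not raw text: evidence of what was said without leaking it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    })
    return response
```

Because denied calls raise before `send` runs, shadow AI paths fail closed, and the log entries double as compliance evidence.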
Three planes
Everything in r5e maps to one of three architectural planes:

| Plane | Purpose | Examples |
|---|---|---|
| Durable Control | Resources and graphs — semantic truth | Graph, Node, Task, AgentSession, Policy |
| Live Execution | OTP processes — operational state | Agent sessions, harness processes |
| Historical Audit | Events and artifacts — what happened, provably | Hash-chained log, content-addressed store |
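The Historical Audit plane rests on a standard construction: a hash chain, where each log entry commits to its predecessor, so any tampering with history invalidates every later entry. A minimal sketch of the technique (in Python for illustration; the function names are hypothetical, not r5e's implementation):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the start of the chain

def chain_event(prev_hash: str, event: dict) -> dict:
    """Append an event to a hash chain: the entry hash covers prev_hash + payload."""
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "event": event, "hash": entry_hash}

def verify(log: list) -> bool:
    """Recompute every hash from genesis; any edited entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = [chain_event(GENESIS, {"type": "session.start", "agent": "a1"})]
log.append(chain_event(log[-1]["hash"], {"type": "prompt", "agent": "a1"}))
```

Content addressing applies the same idea to artifacts: an artifact's identifier is its hash, so a log entry that names an artifact also proves its contents.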
Principles
- Transparency from the root node. Every action traceable to its origin.
- Labs solve intelligence; we solve applying it. Intelligence is pluggable. Governance is ours.
- Extensibility is architecture. Everything goes through the same admission pipeline.
- Design for symbiosis. Works at every point on the autonomy spectrum.
- Contracts, not implementations. The spec says what and why.
Status
r5e is in private alpha. Built on Elixir/OTP. An open-source release is planned under the AGPL.

Read the Primer
What r5e is, how it works, and what your role is in the system.