# Mission

## Open infrastructure for trustworthy AI agents

madahub develops the foundational infrastructure that makes autonomous AI agents safe, accountable, and interoperable. We publish open specifications and reference implementations under permissive licenses.
## The problem

AI agents are becoming autonomous. They make decisions, invoke tools, coordinate with other agents, and operate on behalf of humans. But the infrastructure layer is missing:
- No standard for agent communication — every framework invents its own wire format, with no security guarantees
- No coordination protocol — multi-agent systems lack formal contracts for task decomposition, progress tracking, and failure recovery
- No audit trail — when an agent acts, there is no cryptographic proof of what happened, who authorized it, or why
- Human oversight is an afterthought — supervision gates are bolted on rather than built into the protocol
## Our approach

We build protocols first, implementations second. Each layer in the madahub ecosystem is formally specified before any code is written:
- WACP (Workspace Agent Coordination Protocol) — defines workspace isolation, task graphs, checkpoint chains, audit trails, and human-in-the-loop gates
- Mirror Frame Protocol — provides cryptographic security for agent-to-agent communication with symmetric frame validation and forward secrecy
- mada-modelkit — offers a composable client library abstracting over cloud, local, and native AI providers
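To make the checkpoint-chain and audit-trail ideas concrete, here is a minimal sketch of a hash-chained log in Python. This is not the WACP wire format — the field names (`prev`, `event`, `hash`) and helper functions are hypothetical stand-ins for whatever the specification defines; the point is only that each checkpoint commits to its predecessor, so any tampering breaks verification.

```python
import hashlib
import json

GENESIS = "0" * 64  # hypothetical sentinel for the first checkpoint


def make_checkpoint(prev_hash: str, event: dict) -> dict:
    """Append an event to a hash-chained audit trail (illustrative only)."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"prev": prev_hash, "event": event, "hash": digest}


def verify_chain(chain: list) -> bool:
    """Recompute every link; any altered event or broken link fails."""
    prev = GENESIS
    for cp in chain:
        payload = json.dumps({"prev": prev, "event": cp["event"]}, sort_keys=True)
        if cp["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != cp["hash"]:
            return False
        prev = cp["hash"]
    return True


# Build a two-step trail: plan, then execute.
chain = []
prev = GENESIS
for event in [{"task": "plan", "actor": "agent-a"}, {"task": "execute", "actor": "agent-b"}]:
    cp = make_checkpoint(prev, event)
    chain.append(cp)
    prev = cp["hash"]

assert verify_chain(chain)
```

A real specification would also bind checkpoints to signatures and authorization context, so the chain answers "who authorized it, and why" as well as "what happened".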
## Principles

- Open by default — specifications and implementations are open source
- Protocols over frameworks — define contracts, not lock-in
- Security is structural — frame validation happens before semantic interpretation
- Human control is first-class — supervision gates are part of the protocol, not optional middleware
- Academic rigor — formal specifications, threat models, and security proofs
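The "security is structural" principle can be shown with a toy frame check: authenticate the raw bytes before anything attempts to parse them. This sketch is not the Mirror Frame Protocol — the HMAC construction, the hard-coded key, and the frame layout are assumptions for illustration (and a real design with forward secrecy would rotate keys, which this omits).

```python
import hashlib
import hmac
import json

KEY = b"shared-session-key"  # hypothetical symmetric session key
TAG_LEN = 32  # SHA-256 HMAC tag size in bytes


def seal(body: dict) -> bytes:
    """Serialize a message and prepend an authentication tag."""
    raw = json.dumps(body).encode()
    tag = hmac.new(KEY, raw, hashlib.sha256).digest()
    return tag + raw


def open_frame(frame: bytes) -> dict:
    """Validate the frame structurally; only then interpret the payload."""
    tag, raw = frame[:TAG_LEN], frame[TAG_LEN:]
    expected = hmac.new(KEY, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # Rejected before json.loads ever sees attacker-controlled bytes.
        raise ValueError("frame validation failed")
    return json.loads(raw)


frame = seal({"op": "invoke", "tool": "search"})
assert open_frame(frame) == {"op": "invoke", "tool": "search"}
```

The ordering is the point: a forged or corrupted frame is dropped by the validation step, so the semantic layer never runs on unauthenticated input.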