Superplan Manifesto
We believe AI agents can become dependable software teammates, but only when execution is structured, context is durable, and progress is proven instead of assumed.
AI agent work should be easy to follow. Teams should be able to see what is happening, why it is happening, and what changed.
Important decisions should stay close to the repo so work can survive long sessions, model changes, and handoffs.
Software progress should be backed by verification. Claims of "done" should follow evidence, not precede it.
AI agents are already capable of meaningful engineering work, but capability alone does not create reliable delivery. Without structure, even strong models can drift, lose context, skip steps, and overstate progress.
We think the answer is not more optimism. The answer is a better operating model. Superplan exists because software teams need a disciplined way to turn agent capability into execution they can understand, resume, and trust.
We want developers and teams to trust AI agents more because the workflow is structured, visible, and accountable.
We are building for day-to-day software delivery, not just short demos, isolated prompts, or temporary experiments.
We reject workflows where the agent forgets the plan, repeats work, or depends on a fragile conversation window.
We reject hidden task flow that leaves teams guessing about what is blocked, what is ready, or what changed.
We reject software delivery that treats confident language as evidence in place of real verification.
We reject systems that separate the work from the codebase and make it harder for teams to resume or review.
Every meaningful change should start with a clear goal, clear scope, and a concrete path to proof.
The codebase should remain the center of gravity for agent work so context stays durable and reviewable.
Agents do better when work is shaped into bounded tasks instead of loose ambitions that invite drift.
A session ending should not mean the work collapses. Another agent or teammate should be able to continue with confidence.
Checks, reviews, and evidence should be part of the workflow itself so quality is reinforced by the system.
We are building software that helps AI agents behave less like improvising assistants and more like reliable contributors. That means clearer task flow, better recovery from interruptions, and a stronger path from plan to verified result.
This manifesto is not marketing filler. It is the standard we want the product to keep meeting as the workflow grows.
Visit the homepage to explore the install flow, product overview, and the core execution model.