Plan multi-step work
Break a vague ask into a staged plan with checkpoints, owners, and clear verification criteria.
Orange AI is a local-first desktop workspace where specialized agents plan, research, build, and operate together — across documents, data, code, and apps — with explicit human approval on anything that matters.
A sandboxed instance running on an isolated Railway service shows Planner, Coder, Reviewer, and Operator handing work off to one another in real time. No download, no signup, no LLM keys.
Orange AI is shaped around real workflows — not chat demos. Plan a launch, write a report, analyze a dataset, refactor a module, or run a long task overnight while keeping the keys to every step.
Documents, spreadsheets, codebases, notes — agents propose changes you review before anything lands.
Verify, fact-check, and test outputs. Cite sources, run smoke checks, surface what still needs you.
Drive terminal, browser, and apps for repetitive workflows — only after you approve the action.
Route work through local GGUFs, Ollama, LM Studio, or any configured cloud provider.
Code, prompts, and files stay on your machine unless you explicitly route them somewhere else.
Each agent has a narrow job and the right tools for it — research, drafting, automation. They share context, hand off cleanly, and stop the moment a step needs your judgment.
Turns vague requests into staged work with checkpoints, owners, and clear verification criteria.
Reads, edits, patches, and runs your project files — code, configs, Markdown, specs — following the conventions in your repo.
Looks for regressions, missing tests, unsafe changes, and risky shell or git operations before anything ships.
Drives terminal, browser, and desktop apps only when you allow it — every action is logged through the approval system.
The four defaults aren’t locked. In Settings → Agents, rename
Coder to Copywriter, retire Operator, or add a
Citation Checker with its own model and tool scope. The orchestration loop, approval
system, and local-first runtime stay the same. Drag a .gguf file onto the chat to
assign a model in plain English: “Load this into Reviewer.”
Every meaningful step is visible. Every irreversible step is gated. You stay in the loop without babysitting the loop.
Ask for a document, analysis, plan, feature, refactor, or whole project — in plain language.
Review the agent path before high-impact commands, file writes, or desktop actions run.
Files, terminal, browser, and app actions stream into a single timeline you can pause anytime.
Receive completed work, verification results, citations, and a short summary of remaining risk.
Open the chat, describe what you want, and Vibe Build runs the requirement-elicitation Q&A,
captures the spec, and returns an LLM-generated build plan for the team to work from. Turn on
VIBE_BUILD_EXECUTE_ENABLED in your config to let it scaffold the project, run the
build commands, and execute tests on top — off by default so nothing happens to your
machine without you asking.
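As a sketch, the execute flag is a single boolean in your config. The snippet below is illustrative only — the actual file location and variable handling may differ from this example:

```shell
# Hypothetical config snippet — the real config location/format may differ.
# Off by default: Vibe Build only captures the spec and returns a plan.
# Set to true to also let it scaffold, run build commands, and execute
# tests — still gated by the normal approval system.
export VIBE_BUILD_EXECUTE_ENABLED=true
```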
The wizard asks the questions a senior engineer would: stack, constraints, success criteria, deploy target. Skip what doesn’t apply.
You get a human-readable Markdown plan you can edit, share, or pipe into the agent team. No black-box sequence of tool calls.
Flip the execute flag and the Coder + Reviewer agents pick up the plan, scaffold, run tests, and report back. You stay in control of every approval gate.
Toggle Auto Agent in the top menu — or say “activate autonomous mode” — and the agent loop runs without you in the chair, inside the safety envelope you set. Per-permission scopes for desktop input, terminal, file edits, package installs, and browser access. An emergency-stop button never goes away.
Use local models, keep project files under your control, and run automation through explicit approvals instead of streaming everything to a hosted workspace.
The embedded backend binds to 127.0.0.1 with localhost-only CORS — nothing is exposed to the network without your config.
Drag a .gguf file onto the chat to assign a model in plain English — “Load this into Coder.” Or wire any local-runtime endpoint or cloud API via Settings → Providers. The router decides per-call; you stay in control of the mix.
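Conceptually, that amounts to a list of named endpoints the router can choose from per call. A hypothetical sketch — the schema and field names here are illustrative, not the actual Settings → Providers format:

```json
{
  "providers": [
    { "name": "local-ollama", "endpoint": "http://127.0.0.1:11434", "default": true },
    { "name": "lm-studio",    "endpoint": "http://127.0.0.1:1234" },
    { "name": "anthropic",    "endpoint": "https://api.anthropic.com", "api_key_env": "ANTHROPIC_API_KEY" }
  ]
}
```

In this sketch, the default local endpoint would handle calls unless you explicitly route a task or agent to another provider.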
Buy v1 today. It runs on your machine, with your models, on your terms. No subscription. No per-seat math. v2 will be a separate purchase — your v1 license keeps working on v1.x indefinitely.
Secure checkout via Stripe · Mac & Windows installers · license key delivered by email within minutes · v2.0 is a separate purchase
Studio & team licenses, education pricing, and on-prem deployments — contact sales.
Native desktop app for macOS and Windows. It runs alongside your editor with an embedded
Python backend bound to 127.0.0.1, so your project files, prompts, and agent
traces stay on your machine unless you explicitly route through a cloud provider you configure.
Two installers ship per release — signed Mac DMG and signed Windows MSI — and your
license key activates either build (or both, up to your 2-device limit).
No. Local models are the default — you can ship without ever sending code to a cloud provider — but you can also wire up Anthropic, OpenAI, or any OpenAI-compatible endpoint. You choose per workspace.
Risky actions — shell commands, git pushes, network calls, file writes outside an allow-list — pause for explicit confirmation. You can pre-approve common actions per project and revoke them at any time.
No. Orange AI is built for any project work — research, writing, analysis, planning, ops, and coding all share the same multi-agent workflow. Engineers will feel right at home; everyone else gets the same primitives without having to think about pipelines or prompts.
After checkout, you’ll get an email with your license key and a download link for the signed installer for your platform — Mac DMG (notarized) or Windows MSI. Both bundle the Python runtime and Playwright Chromium, so no extra installs are required. Run it, launch Orange AI, accept the EULA, and enter your license key plus the email used at checkout. Activation runs once over the network; after that, everything works offline.
Yes. Each license activates up to 2 devices at a time. From the app's Account screen — or
from your customer portal at account.multiagentai.ai — you can deactivate an
old machine and activate a new one. There's no fee for moving devices.
v2.0 is a separate one-time purchase. Your v1 license keeps working on every v1.x release, forever — no kill switches, no expiring keys. You only pay again if you choose to upgrade to a new major version.
14-day money-back guarantee. Email support@multiagentai.ai within 14 days of purchase and we'll refund you in full — no questions, no forms. The license is then deactivated on every device.
Almost none, by default. Activation pings the license server with a stable device ID and your license key. That's it. Crash reports and anonymous version pings are opt-in in Settings. Project files, prompts, and agent traces stay on your machine unless you explicitly route work through a cloud provider you configure.
$149, paid once. Mac and Windows. License works on 2 devices and is yours to keep. v2.0 will be a separate purchase — v1 keeps working forever.
Secure Stripe checkout · instant license delivery · 14-day money-back guarantee.