MultiAgentOS v1.0 · native Mac & Windows desktop app

A private AI team for any kind of work.

MultiAgentOS is a local-first desktop workspace where specialized agents plan, research, build, and operate together — across documents, data, code, and apps — with explicit human approval on anything that matters.

  • One-time purchase · no subscription
  • 1 user, 2 devices
  • 14-day money-back guarantee
Local LLMs: Ollama, LM Studio, GGUF, llama.cpp
Human approval: guardrails on shell, git, network
Real tools: files, terminal, browser, apps
Native desktop: stays fast even with long sessions
Live demo

Watch the agents work as a team.

A sandboxed instance running on an isolated Railway service shows Planner, Coder, Reviewer, and Operator handing work between each other in real time. No download, no signup, no LLM keys.

Capabilities

Built for the work you actually ship.

MultiAgentOS is shaped around real workflows — not chat demos. Plan a launch, write a report, analyze a dataset, refactor a module, or run a long task overnight while keeping the keys to every step.

Plan multi-step work

Break a vague ask into a staged plan with checkpoints, owners, and clear verification criteria.

Read & edit your files

Documents, spreadsheets, codebases, notes — agents propose changes you review before anything lands.

Check work before delivery

Verify, fact-check, and test outputs. Cite sources, run smoke checks, surface what still needs you.

Operate the desktop

Drive terminal, browser, and apps for repetitive workflows — only after you approve the action.

Bring your own models

Route work through local GGUFs, Ollama, LM Studio, or any configured cloud provider.

Stay private by default

Code, prompts, and files stay on your machine unless you explicitly route them somewhere else.

Multi-agent orchestration

One workspace. A team of specialists.

Each agent has a narrow job and the right tools for it — research, drafting, automation. They share context, hand off cleanly, and stop the moment a step needs your judgment.

See the full workflow
01 · Planner

Turns vague requests into staged work with checkpoints, owners, and clear verification criteria.

  • scope
  • plan
  • milestones
02 · Coder

Reads, edits, patches, and runs your project files — code, configs, Markdown, specs — following the conventions in your repo.

  • edit
  • refactor
  • tests
03 · Reviewer

Looks for regressions, missing tests, unsafe changes, and risky shell or git operations before anything ships.

  • diff
  • risk
  • fact-check
04 · Operator

Drives terminal, browser, and desktop apps only when you allow it — every action is logged through the approval system.

  • shell
  • browser
  • desktop

Don’t ship code? Rename them. Or invent your own.

The four defaults aren’t locked. In Settings → Agents, rename Coder to Copywriter, retire Operator, or add a Citation Checker with its own model and tool scope. The orchestration loop, approval system, and local-first runtime stay the same. Drag a .gguf file onto the chat to assign a model in plain English: “Load this into Reviewer.”

Workflow

From prompt to verified delivery.

Every meaningful step is visible. Every irreversible step is gated. You stay in the loop without babysitting the loop.

  1. Describe the outcome

    Ask for a document, analysis, plan, feature, refactor, or whole project — in plain language.

  2. Approve the plan

    Review the agent path before high-impact commands, file writes, or desktop actions run.

  3. Watch tools execute

    Files, terminal, browser, and app actions stream into a single timeline you can pause anytime.

  4. Deliver with evidence

    Receive completed work, verification results, citations, and a short summary of remaining risk.

Flagship feature

Vibe Build — describe the outcome, get a build plan.

Open the chat, describe what you want, and Vibe Build runs the requirement-elicitation Q&A, captures the spec, and returns an LLM-generated build plan for the team to work from. Turn on VIBE_BUILD_EXECUTE_ENABLED in your config to let it scaffold the project, run the build commands, and execute tests on top — off by default so nothing happens to your machine without you asking.

01 · Interactive Q&A

The wizard asks the questions a senior engineer would: stack, constraints, success criteria, deploy target. Skip what doesn’t apply.

02 · Build plan you can read

You get a human-readable Markdown plan you can edit, share, or pipe into the agent team. No black-box sequence of tool calls.

03 · Opt-in execution

Flip the execute flag and the Coder + Reviewer agents pick up the plan, scaffold, run tests, and report back. You stay in control of every approval gate.

Auto Agent Mode

Hands-off, when you choose. Hands-on, when it matters.

Toggle Auto Agent in the top menu — or say “activate autonomous mode” — and the agent loop runs without you in the chair, inside the safety envelope you set. You get per-permission scopes for desktop input, terminal, file edits, package installs, and browser access, and an emergency-stop button never goes away.

  • Scoped permissions — allow shell, file write, browser, or desktop input per workspace, not by default.
  • Approval gates — high-risk steps (network egress, sudo, destructive commands) pause for your OK.
  • Iteration & output limits — runaway loops cap out; tool output is truncated to keep context lean.
  • Live timeline — status, control level, current task, last action, pending approvals — and a single emergency-stop click.
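
Tool-output truncation of the kind listed above can be pictured in a few lines; the 4,000-character cap and the truncation marker below are invented for illustration and are not taken from MultiAgentOS:

```python
# Illustrative sketch of tool-output truncation; the 4000-character
# cap and the "[truncated]" marker are invented for this example.
MAX_TOOL_OUTPUT = 4000

def truncate_tool_output(output: str, limit: int = MAX_TOOL_OUTPUT) -> str:
    """Keep the head and tail of long tool output so context stays lean."""
    if len(output) <= limit:
        return output
    half = limit // 2
    return output[:half] + "\n... [truncated] ...\n" + output[-half:]
```

Keeping both the head and the tail preserves the command echo and the final error message, which are usually what the agent needs next.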
Local-first by design

Your work and context stay on your machine.

Use local models, keep project files under your control, and run automation through explicit approvals instead of streaming everything to a hosted workspace.

Read about the security model
01 · Localhost-bound backend: The embedded API binds to 127.0.0.1 with localhost-only CORS — nothing is exposed to the network without your config.
02 · No analytics SDK: No PostHog, Segment, Mixpanel, Sentry, or Datadog in the dependency tree. Zero off-machine telemetry by default.
03 · Execution log: Every tool call, approval, model call, and run is recorded locally so you can replay or audit any session.
04 · Scoped tools: Shell, network, browser, and filesystem access are granted per workspace and gated by approvals — not on by default.
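
As an illustration of what a localhost-only origin check means in practice (a sketch of the idea, not the actual MultiAgentOS backend code):

```python
from urllib.parse import urlparse

# Sketch of a localhost-only CORS origin check; not the actual
# MultiAgentOS backend implementation.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_allowed_origin(origin: str) -> bool:
    """Accept an Origin header only if it points at the local machine."""
    parsed = urlparse(origin)
    return parsed.scheme in {"http", "https"} and parsed.hostname in LOCAL_HOSTS
```

Any origin that resolves to a remote host is rejected, so a web page you happen to have open can't talk to the embedded API.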
Models

Bring your own models. 22 providers in.

Drag a .gguf file onto the chat to assign a model in plain English — “Load this into Coder”. Or wire any local-runtime endpoint or cloud API via Settings → Providers. The router decides per-call; you stay in control of the mix.

Direct local files: drag-and-drop GGUF onto chat · per-agent assignment
Local runtimes (×12): Ollama, LM Studio, llama.cpp server, vLLM, Jan, GPT4All, LocalAI, KoboldCpp, oobabooga, TabbyAPI, ExLlamaV2, HF TGI
Cloud APIs (×10): OpenAI, Anthropic, Google, Groq, Mistral, Together, Fireworks, DeepSeek, Perplexity, OpenRouter
Privacy mode: one toggle disables all cloud providers — nothing leaves the machine
Per-agent models: a different model per agent — small fast model for Planner, big local model for Coder
OpenAI-compatible custom: point at any OpenAI-compatible endpoint (LiteLLM, Mosaic, your own gateway)
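
For readers wondering what “OpenAI-compatible” implies, this is the request shape such endpoints accept; the model name and prompt below are placeholders, and MultiAgentOS's own router code is not shown here:

```python
import json

# The JSON body an OpenAI-compatible chat-completions endpoint expects.
# "local-llama" and the prompt are placeholders for illustration.
def chat_completion_request(model: str, prompt: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return json.dumps(payload)

body = chat_completion_request("local-llama", "Summarize this repo.")
```

Any gateway that accepts this body at a `/v1/chat/completions` route can be wired in as a custom provider.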
Pricing

Pay once. Yours forever.

Buy v1 today. It runs on your machine, with your models, on your terms. No subscription. No per-seat math. v2 will be a separate purchase — your v1 license keeps working on v1.x indefinitely.

Personal v1.0
$149 USD · one-time incl. all v1.x updates
  • All four agents: Planner, Coder, Reviewer, Operator
  • 1 user · activate on up to 2 devices
  • Bring your own models — local + cloud, your keys
  • Full desktop integration: files, terminal, browser, apps
  • Free updates within v1.x — including new agent capabilities
  • 14-day money-back guarantee — no questions

Secure checkout via Stripe · Mac & Windows installers · license key delivered by email within minutes · v2.0 is a separate purchase

Studio & team licenses, education pricing, and on-prem deployments — contact sales.

FAQ

Questions buyers ask first.

Is MultiAgentOS a desktop app or a web product?

Native desktop app for macOS and Windows. It runs alongside your editor with an embedded Python backend bound to 127.0.0.1, so your project files, prompts, and agent traces stay on your machine unless you explicitly route through a cloud provider you configure. Two installers ship per release — signed Mac DMG and signed Windows MSI — and your license key activates either build (or both, up to your 2-device limit).

Do I have to use local models?

No. Local models are the default — you can ship without ever sending code to a cloud provider — but you can also wire up Anthropic, OpenAI, or any OpenAI-compatible endpoint. You choose per workspace.

How do approvals work?

Risky actions — shell commands, git pushes, network calls, file writes outside an allow-list — pause for explicit confirmation. You can pre-approve common actions per project and revoke them at any time.
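
A pre-approved allow-list can be pictured as a simple prefix match. This sketch is illustrative only, with made-up commands, and is not MultiAgentOS's implementation:

```python
# Illustrative approval-gate sketch; the allow-list entries are
# made up and this is not MultiAgentOS's actual code.
ALLOW_LIST = ["git status", "git diff", "ls", "cat"]

def needs_approval(command: str) -> bool:
    """Pause for confirmation unless the command matches a pre-approved prefix."""
    return not any(
        command == allowed or command.startswith(allowed + " ")
        for allowed in ALLOW_LIST
    )
```

Here "git status" runs without a prompt, while "git push origin main" pauses for your OK.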

Is this just for developers?

No. MultiAgentOS is built for any project work — research, writing, analysis, planning, ops, and coding all share the same multi-agent workflow. Engineers will feel right at home; everyone else gets the same primitives without having to think about pipelines or prompts.

How do I install and activate MultiAgentOS?

After checkout, you’ll get an email with your license key and a download link for the signed installer for your platform — Mac DMG (notarized) or Windows MSI. Both bundle the Python runtime and Playwright Chromium, so no extra installs are required. Run it, launch MultiAgentOS, accept the EULA, and enter your license key plus the email used at checkout. Activation runs once over the network. After that, everything works offline.

Can I move it to a new computer?

Yes. Each license activates up to 2 devices at a time. From the app's Account screen — or from your customer portal at account.multiagentai.ai — you can deactivate an old machine and activate a new one. There's no fee for moving devices.

What happens when v2.0 ships? Do I have to buy again?

v2.0 is a separate one-time purchase. Your v1 license keeps working on every v1.x release, forever — no kill switches, no expiring keys. You only pay again if you choose to upgrade to a new major version.

What's the refund policy?

14-day money-back guarantee. Email support@multiagentai.ai within 14 days of purchase and we'll refund you in full — no questions, no forms. The license is then deactivated on every device.

What data leaves my machine?

Almost none, by default. Activation pings the license server with a stable device ID and your license key. That's it. Crash reports and anonymous version pings are opt-in in Settings. Project files, prompts, and agent traces stay on your machine unless you explicitly route work through a cloud provider you configure.

Get MultiAgentOS v1.0

Bring autonomous AI back to your own machine.

$149, paid once. Mac and Windows. License works on 2 devices and is yours to keep. v2.0 will be a separate purchase — v1 keeps working forever.

Secure Stripe checkout · instant license delivery · 14-day money-back guarantee.