A local AI agent —
runs on your machine,
under your control
TARS is a local AI agent runtime, shipped as a single Go binary that runs on your machine. From the browser console you can directly inspect and control its work: agent runs, memory, scheduled jobs, Git changes, execution history.
An AI agent
that works on your machine
The name comes from TARS in Interstellar: practical, direct, and dependable when things get complicated. This TARS aims for the same.
Not an agent running somewhere in the cloud you can't see, but a local AI agent that runs on your machine and can be inspected and controlled directly. Most AI agent tools are CLI-first, or add a thin web UI on top. TARS is designed around the browser console: chat, sub-agents, scheduled jobs, memory review, Git changes, run flow, and pending approvals each get their own page.
Since the agent works with your files and tools, you should be able to see what it's doing and step in when needed — that's the starting premise. Extensions stay lean: skills load only when invoked; plugins and MCP servers are used only when explicitly allowed. The system prompt stays small, and the agent stays focused on the current task.
Where you watch
the agent work
Many local agent tools end at a CLI. TARS uses the browser console as its main interface. Open 127.0.0.1:43180/console and you get screens that actually let you inspect and control the agent — not just status pages.
Mission Control
Pulse, Reflection, plans, runtime runs, cron jobs, disk pressure, sessions, recommended setup actions — all on one screen. See agent state and ongoing work at a glance.
Chat
Dock the panels you need: Sessions, Tasks, Health, Git Inspector, Skill Inbox, Cron, Prior Context. Branch sessions at a specific message. First-turn tier recommendation for the model that fits.
Lineage
Conversation and work flow as a Git-log-style tree. Preview the message where each session branched. Promote insights from a branch into Memory Inbox without touching the parent.
Memory
Review what the agent wants to save as long-term memory before it is stored. Edit stored knowledge as Markdown. Compare Tool path vs Prefetch path recall.
Agent Runtime
List, tree, Gantt, and interactive Flow graph views. Replay scrubber, cost flow, file attention, Git diff timeline, checkpoint restart.
Approvals
Review risky cleanup plans and Git changes before they are applied. Approve or reject pending work. The Automation Audit log keeps every decision reviewable.
Analytics
Token use, cost per model, tool and skill call counts. Daily usage and cost flow. Daily budget chip in the header.
Extensions
Build and sandbox-test extensions with Skill Creator and MCP Server Creator. Hub installs surface trust signals: score, last update, passing tests, install count.
Core stays small
The rest is opt-in
TARS doesn't load every feature into the system prompt at once. The base runtime stays small; the rest lives in skills and plugins.
Sub-Agent Orchestration
Spawn read-only sub-agents for research and planning. Per-task model tier routing, allowlist policy, depth control. Parallel and compare modes.
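As a rough sketch of how those constraints compose (the types, limits, and tool names below are invented for illustration, not TARS internals):

```go
package main

import (
	"fmt"
	"sync"
)

// subTask is invented for this sketch, not TARS's actual type.
type subTask struct {
	prompt string
	tier   string // per-task model tier, e.g. "light" for cheap research
}

const maxDepth = 2 // depth control: children cannot recurse forever

// read-only allowlist: the only tools a sub-agent may call
var allowlist = []string{"read_file", "search"}

// spawn runs tasks in parallel goroutines; a real child would get only
// the allowlisted tools and refuse to spawn again past maxDepth.
func spawn(depth int, tasks []subTask) {
	if depth >= maxDepth {
		fmt.Println("depth limit reached; refusing to spawn")
		return
	}
	var wg sync.WaitGroup
	for _, t := range tasks {
		wg.Add(1)
		go func(t subTask) {
			defer wg.Done()
			fmt.Printf("sub-agent[%s] depth=%d tools=%v: %s\n",
				t.tier, depth+1, allowlist, t.prompt)
		}(t)
	}
	wg.Wait() // compare mode would now diff the children's answers
}

func main() {
	spawn(0, []subTask{
		{prompt: "survey prior art", tier: "light"},
		{prompt: "draft a plan", tier: "standard"},
	})
}
```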
Durable Memory
Markdown memory with semantic search via Gemini embeddings. Daily logs, reviewed experiences, nightly Reflection — stored on disk and auditable. Review-before-store lets you decide what gets remembered.
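The recall side of this is ordinary vector math. A minimal sketch, with hard-coded vectors standing in for the Gemini embeddings TARS actually uses:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// chunk pairs a piece of Markdown memory with its embedding vector.
type chunk struct {
	text string
	vec  []float64
}

// cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// recall ranks stored chunks by similarity to the query vector.
func recall(query []float64, store []chunk, k int) []chunk {
	sort.Slice(store, func(i, j int) bool {
		return cosine(query, store[i].vec) > cosine(query, store[j].vec)
	})
	if k > len(store) {
		k = len(store)
	}
	return store[:k]
}

func main() {
	store := []chunk{
		{"prefers tabs over spaces", []float64{0.9, 0.1}},
		{"deploys on Fridays", []float64{0.1, 0.9}},
	}
	for _, c := range recall([]float64{0.8, 0.2}, store, 1) {
		fmt.Println(c.text)
	}
}
```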
Pulse Watchdog
A periodic loop that checks runtime health. Detects cron failures, stuck runs, disk pressure, Telegram errors. Escalates to the LLM through a narrow interface only when needed.
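The shape of that loop in plain Go; the checks and the escalate hook are illustrative stand-ins, not TARS's actual interface:

```go
package main

import (
	"fmt"
	"time"
)

// A check is cheap, deterministic Go; no LLM involved.
type check struct {
	name string
	run  func() error
}

// pulse runs every interval and escalates only when a check fails.
// escalate stands in for the narrow LLM call; everything else is plain Go.
func pulse(interval time.Duration, checks []check, escalate func(name string, err error)) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for range t.C {
		for _, c := range checks {
			if err := c.run(); err != nil {
				escalate(c.name, err) // the LLM is consulted only here
			}
		}
	}
}

func main() {
	checks := []check{
		{"cron", func() error { return nil }}, // e.g. scan for failed jobs
		{"disk", func() error { return fmt.Errorf("disk pressure: 92%% used") }},
	}
	go pulse(time.Minute, checks, func(name string, err error) {
		fmt.Printf("pulse: %s check failed: %v\n", name, err)
	})
	time.Sleep(65 * time.Second) // let one tick fire in this demo
}
```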
Nightly Reflection
Extracts experiences and memory candidates from sessions overnight. Cleans up empty sessions, refreshes memory candidates. Runs as deterministic Go without exposing LLM tools.
Scheduled Jobs
30-second tick scheduler. Cron expressions and @at one-time triggers. Per-job audit history with state caps.
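A minimal sketch of the tick-scheduler idea; the job type and its fields are invented for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// job is illustrative, not TARS's actual type. A @at job fires once;
// a cron job would recompute nextRun from its expression after firing.
type job struct {
	name    string
	nextRun time.Time
	once    bool // true for @at one-time triggers
}

func main() {
	jobs := []*job{
		{name: "backup", nextRun: time.Now().Add(45 * time.Second), once: true},
	}
	tick := time.NewTicker(30 * time.Second) // the 30-second tick
	defer tick.Stop()
	for now := range tick.C { // runs until the process exits
		for _, j := range jobs {
			if j.nextRun.IsZero() || now.Before(j.nextRun) {
				continue
			}
			fmt.Println("run:", j.name) // dispatch + append to the job's audit history
			if j.once {
				j.nextRun = time.Time{} // one-time trigger: never again
			}
			// for a cron job: j.nextRun = schedule.Next(now)
		}
	}
}
```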
3-Tier LLM Router
Three tiers — Heavy, Standard, Light. Roles bind to tiers; providers and models are managed in config. Pick lighter or stronger models depending on what the work needs.
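In sketch form, routing is a two-step lookup: role to tier, then tier to whatever model the config binds. The role table and model IDs below are placeholders:

```go
package main

import "fmt"

// Tier names match the docs above; everything else is a placeholder.
type Tier string

const (
	Heavy    Tier = "heavy"
	Standard Tier = "standard"
	Light    Tier = "light"
)

// roles bind to tiers, never to concrete models.
var roleTier = map[string]Tier{
	"chat":       Standard,
	"planning":   Heavy,
	"reflection": Light,
}

// modelFor resolves role -> tier -> the model the config binds.
func modelFor(role string, config map[Tier]string) string {
	tier, ok := roleTier[role]
	if !ok {
		tier = Standard // sensible default for unknown roles
	}
	return config[tier]
}

func main() {
	config := map[Tier]string{ // stand-in for the real config file
		Heavy:    "provider/big-model",
		Standard: "provider/mid-model",
		Light:    "provider/small-model",
	}
	fmt.Println(modelFor("planning", config)) // provider/big-model
}
```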
Skills, Plugins, MCP
Skills are Markdown plus a runnable CLI — loaded only when invoked, so the system prompt stays small. Plugins are gated; MCP is supported as a client.
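The invoke-time loading is easy to picture; the skill name and path below are hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// skills maps a name to a Markdown file on disk. Nothing is read at
// startup, so none of this text ever sits in the system prompt.
var skills = map[string]string{
	"release-notes": "skills/release-notes/SKILL.md", // illustrative path
}

// invoke loads a skill's instructions only at the moment it is called.
func invoke(name string) (string, error) {
	path, ok := skills[name]
	if !ok {
		return "", fmt.Errorf("unknown skill %q", name)
	}
	body, err := os.ReadFile(filepath.Clean(path))
	if err != nil {
		return "", err
	}
	return string(body), nil // injected into context for this task only
}

func main() {
	body, err := invoke("release-notes")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println(body)
}
```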
Multi-Channel I/O
Beyond the browser console: Telegram bidirectional messaging, inbound webhooks, macOS Assistant popup, and a local API for scripts.
Where TARS draws different lines
Two strong projects already exist in this space — OpenClaw and Hermes Agent. Each has its own focus. Here are the points TARS treats as important.
| Dimension | OpenClaw | Hermes Agent | TARS |
|---|---|---|---|
| Language | TypeScript | Python | Go (single binary) |
| Primary UI | CLI | CLI + API | Browser console (CLI/Telegram/webhooks too) |
| Sub-agents | ACP + subagent runtimes, Docker sandbox | ThreadPoolExecutor (max 3), ephemeral prompt | Per-task model tier, allowlist policy, depth control |
| Model routing | Per-agent model override | Per-child override, MoA (4 frontier models) | 3-tier bundles (heavy/standard/light), role→tier mapping |
| Memory | Session transcripts | Honcho/Holographic plugin hooks | Markdown + semantic + review-before-store + nightly reflection |
| Background | — | — | Pulse watchdog (1-min) + nightly reflection batch |
| Scheduling | — | — | Session-bound cron + audit logs |
| Extensibility | Built-in tools | Toolsets | Skills + companion CLIs + gated plugins/MCP |
Comparison is from the TARS perspective and intentionally simplified. Read the source for each project to form your own view.
One binary,
separated tool surfaces
TARS runs as a single binary, but doesn't expose every tool the same way. The tools available in chat are kept separate from the tools used inside the runtime. The ops_, pulse_, and reflection_ families can't be called directly from regular chat — they are reserved for runtime-internal operations. Pulse uses a narrow Go interface and only calls the LLM when needed; Reflection is deterministic.
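A minimal sketch of that gating with invented types (the real registry will differ):

```go
package main

import (
	"fmt"
	"strings"
)

// Surface says which part of the runtime may call a tool.
type Surface int

const (
	SurfaceChat    Surface = iota // exposed to the chat agent
	SurfaceRuntime                // reserved for Pulse, Reflection, ops
)

type Tool struct {
	Name    string
	Surface Surface
}

// Registry holds every tool but hands out only one surface at a time.
type Registry struct{ tools []Tool }

func (r *Registry) Register(t Tool) { r.tools = append(r.tools, t) }

// hasReservedPrefix flags the families chat must never see.
func hasReservedPrefix(name string) bool {
	for _, p := range []string{"ops_", "pulse_", "reflection_"} {
		if strings.HasPrefix(name, p) {
			return true
		}
	}
	return false
}

// ForChat returns the tools a chat session is allowed to call.
func (r *Registry) ForChat() []Tool {
	var out []Tool
	for _, t := range r.tools {
		if t.Surface != SurfaceChat || hasReservedPrefix(t.Name) {
			continue
		}
		out = append(out, t)
	}
	return out
}

func main() {
	r := &Registry{}
	r.Register(Tool{Name: "read_file", Surface: SurfaceChat})
	r.Register(Tool{Name: "ops_restart", Surface: SurfaceRuntime})
	for _, t := range r.ForChat() {
		fmt.Println(t.Name) // only read_file
	}
}
```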
┌─ cmd/tars (cobra) ──────────────────────────────────────┐
│ serve · service · init · doctor · status · cron · ... │
└──────────────────────────┬──────────────────────────────┘
│
┌──────────────▼──────────────┐
│ tarsserver (127.0.0.1:43180) │
└──┬─────────┬──────────┬──────┘
│ │ │
┌───────▼──┐ ┌────▼─────┐ ┌──▼─────────┐
│ chat │ │ pulse │ │ reflection │
│ agent │ │ watchdog │ │ nightly │
└────┬─────┘ └────┬─────┘ └────┬───────┘
│ │ │
┌────▼────────────▼────────────▼─┐
│ memory · cron · ops · llm │
   └────────────────────────────────┘

Get started in three steps
On first run, the setup wizard walks you through LLM provider and model tier configuration. Until an LLM is configured, the console runs in setup-only mode.
Install
macOS / Linux — pre-built binary with console
brew tap devlikebear/tap
brew install devlikebear/tap/tars
Initialize workspace
tars init
Start the server
Runs in the terminal until Ctrl+C.
tars serve # console at http://127.0.0.1:43180/console