How It Started

Mar 11, 2026 · 3 min read

Even though the project is new, the motivation is old. I have wanted to build my own assistant for a long time, even before I knew exactly what form it should take.

More than a decade ago I built my first assistant with Hubot and even gave a talk about it called “Building my own robot.” Later I built a Twitch assistant with memory that could be invoked from chat. Both projects kept the same core idea alive: an assistant that can do useful work, not just react to commands. I have been circling this idea for years.

There was no long planning phase and no big architecture deck. I started with the smallest workable core, ran it in real conditions, and tightened it through daily use. That was intentional. This kind of tool only proves itself in actual repositories and real workflow pressure. Planning on paper would have told me nothing useful.

The early direction was clear from the first day: this should be a personal AI coding delegate, not just a suggestion tool. I wanted it to handle bounded tasks with a repeatable loop: do the work, verify the result, recover when something fails, and explain what happened.
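That loop can be sketched in a few lines. This is a minimal illustration, not Acolyte's actual code; every name here (`run_task`, `TaskResult`, the flaky worker) is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    ok: bool
    output: str
    attempts: int
    log: list[str] = field(default_factory=list)

def run_task(do_work, verify, max_attempts: int = 3) -> TaskResult:
    """Do the work, verify the result, retry on failure, and keep a record."""
    log: list[str] = []
    output = ""
    for attempt in range(1, max_attempts + 1):
        output = do_work()
        log.append(f"attempt {attempt}: produced {output!r}")
        if verify(output):
            log.append("verified")
            return TaskResult(True, output, attempt, log)
        log.append("verification failed, recovering")
    return TaskResult(False, output, max_attempts, log)

# A worker that fails once, then succeeds, exercises the recovery path.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return "broken" if calls["n"] == 1 else "patched"

result = run_task(flaky, lambda out: out == "patched")
```

The point is the shape, not the code: every task goes through the same verify-and-recover cycle, and the log is how the assistant explains what happened.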

The first commits reflect that mindset. CLI and backend scaffolding first, then status checks, memory commands, session handling, and a plan-execute-review pipeline. The focus was less about feature count and more about creating a reliable execution loop I could trust in daily use.

A key learning came early: the hardest part is not generating code. It is making behavior dependable across chained tool calls. How fast I could generate code genuinely surprised me, but reliable behavior across a full task lifecycle was a different challenge entirely. That is why policy, guardrails, and verification became core concerns from the start. If behavior is inconsistent, speed does not matter.
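To make "guardrails" concrete, here is the kind of policy gate I mean: a check that runs before every tool call. The allow-list and blocked paths below are hypothetical, chosen only to show the pattern; this is not Acolyte's actual policy.

```python
# Hypothetical policy for illustration, not Acolyte's real configuration.
ALLOWED_TOOLS = {"read_file", "write_file", "run_tests"}
BLOCKED_PREFIXES = ("/etc/", "~/.ssh/")

def check_tool_call(tool: str, args: dict) -> tuple[bool, str]:
    """Gate a tool call against policy before executing it."""
    if tool not in ALLOWED_TOOLS:
        return False, f"tool {tool!r} is not on the allow-list"
    path = args.get("path", "")
    if any(path.startswith(p) for p in BLOCKED_PREFIXES):
        return False, f"path {path!r} is off-limits"
    return True, "ok"
```

A gate like this is cheap, but it turns "the model usually behaves" into "the host refuses bad calls", which is the difference that matters across a long chain.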

I built this through dogfooding from day one. Every iteration came from using it, hitting friction, and then fixing the root cause. That feedback loop has been much more valuable than trying to predict everything up front. Some of the best improvements came from things that annoyed me during actual work, not from features I planned in advance.

What matters now is continuing the same loop that started on day one: use it in real work, find weak spots fast, and tighten the fundamentals one step at a time.
