Coding agents keep making the same mistake: they ask the model to guess what tools a project uses.
If you use any of these tools, you have seen this. The model edits code but does not run the formatter. It does not lint. You only find out when you run verify yourself and it breaks. Then the model spends a full generation fixing formatting issues that a deterministic formatter would have caught in milliseconds. Every forgotten lint or format check is another roundtrip, more tokens, and a worse experience. The model is guessing things the host could read from a config file.
The wrong approach
Across tools, the pattern is the same. Aider supports per-language lint commands, but they are user-configured. If you forget, no lint. Cline relies on .clinerules, where you describe your toolchain in prose and the model tries to follow it. Continue leans on the IDE: VS Code knows the toolchain, but the agent itself does not. Claude Code uses LSP for code intelligence, but does not detect linters, formatters, or test runners.
Either the user configures it manually, or the model guesses. Nobody reads the config files that already define the project.
Detection, not configuration
Configuration asks: “what tools does your project use?”
Detection asks: “what do the config files say?”
A biome.json means biome. A ruff.toml means ruff. A Cargo.toml means cargo. A lockfile tells you the package manager. These are not ambiguous. They do not require intelligence. They require reading.
Acolyte now does this through workspace profiles. At the start of every lifecycle run, the system scans the workspace root and builds a profile: ecosystem, package manager, lint command, format command, test command, line width. Detection runs once, is cached per workspace, and feeds into the lifecycle before any model call happens.
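A profile like that can be sketched as a plain record plus a per-workspace cache. This is a minimal illustration, not Acolyte's actual types; the field names, `getProfile`, and the cache shape are all assumptions:

```typescript
// Hypothetical shape of a workspace profile; field names are illustrative.
interface WorkspaceProfile {
  ecosystem: "typescript" | "python" | "go" | "rust";
  packageManager?: string; // e.g. "pnpm", derived from the lockfile
  lintCommand?: string;    // e.g. "npx biome lint $FILES"
  formatCommand?: string;  // e.g. "npx biome format --write $FILES"
  testCommand?: string;    // e.g. "npx vitest run $FILES"
  lineWidth?: number;      // from formatter config, if present
}

// Detection runs once per workspace root; later lifecycle runs hit the cache.
const profileCache = new Map<string, WorkspaceProfile>();

function getProfile(
  root: string,
  detect: (root: string) => WorkspaceProfile,
): WorkspaceProfile {
  const cached = profileCache.get(root);
  if (cached) return cached;
  const profile = detect(root);
  profileCache.set(root, profile);
  return profile;
}
```

The point of the cache is that detection is pure file reading, so its result is stable until the workspace changes and never needs a model call.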
Ecosystem detectors
Each ecosystem gets a detector implementing one interface: a match function plus optional detection methods for each capability. The first matching detector wins. Inside each detector, tools are checked in priority order.
The TypeScript detector sees biome.json and resolves lint and format commands. It checks biome before eslint before deno. It detects the test runner from vitest or jest config files, or from package.json scripts. It detects the package manager from lock files and uses the correct runner: bunx, npx, or pnpx. Adding a new ecosystem is one object. Adding a new tool is one check.
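The detector pattern can be sketched as follows. The interface, `detectEcosystem`, and the marker-file checks are assumptions for illustration, not Acolyte's real API:

```typescript
// One interface per ecosystem: a cheap marker-file match plus optional
// capability methods. Names here are hypothetical.
interface EcosystemDetector {
  name: string;
  matches(files: Set<string>): boolean;
  lintCommand?(files: Set<string>): string | undefined;
}

const typescriptDetector: EcosystemDetector = {
  name: "typescript",
  matches: (files) => files.has("package.json") || files.has("tsconfig.json"),
  // Tools are checked in priority order: biome before eslint.
  lintCommand: (files) =>
    files.has("biome.json") ? "npx biome lint $FILES"
    : files.has(".eslintrc.json") ? "npx eslint $FILES"
    : undefined,
};

const detectors: EcosystemDetector[] = [
  typescriptDetector, /* python, go, rust detectors follow the same shape */
];

// First match wins.
function detectEcosystem(files: Set<string>): EcosystemDetector | undefined {
  return detectors.find((d) => d.matches(files));
}
```

Adding an ecosystem means appending one object to the array; adding a tool means inserting one branch at the right priority.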
Current support covers TypeScript, Python, Go, and Rust, the same ecosystems supported by edit-code for structural editing.
What this changes
The detected profile feeds directly into the lifecycle.
After the model edits code, two commands run immediately:
- Format — runs the detected formatter on edited files. Auto-fixes in place.
- Lint — runs the detected linter. If errors remain, feeds them back to the model.
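The post-edit hook reduces to a few lines once the profile exists. A minimal sketch, assuming a `Runner` abstraction over shell execution and a `$FILES` placeholder in the detected commands (both assumptions, not Acolyte's internals):

```typescript
type Runner = (command: string) => { exitCode: number; output: string };

// After the model edits code: format auto-fixes in place, then lint runs.
// Remaining lint errors are returned so the host can feed them to the model.
function postEditChecks(
  profile: { formatCommand?: string; lintCommand?: string },
  editedFiles: string[],
  run: Runner,
): string | undefined {
  const files = editedFiles.join(" ");
  if (profile.formatCommand) {
    run(profile.formatCommand.replace("$FILES", files)); // deterministic fix
  }
  if (profile.lintCommand) {
    const result = run(profile.lintCommand.replace("$FILES", files));
    if (result.exitCode !== 0) return result.output; // feedback for the model
  }
  return undefined; // clean: no extra roundtrip needed
}
```

Note that the formatter's result is never shown to the model at all; only lint errors, which require judgment to fix, cross the boundary.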
For testing, the model can run tests scoped to the files it changed using a run-tests tool. The workspace profile provides a test command with a $FILES placeholder, and the model fills in the paths. I originally ran a full verify command after every work phase, but that was too slow in practice. Targeted tests are faster and more useful. The model decides when to run tests, but the host provides the means.
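The division of labor in that tool is simple: the host owns the template, the model owns only the paths. A sketch of the expansion step, with the function name and the lack of shell quoting as simplifying assumptions:

```typescript
// The host holds the detected test command; the model supplies paths via the
// run-tests tool, and the host expands the $FILES placeholder. A real
// implementation should shell-quote each path.
function buildTestCommand(template: string, paths: string[]): string {
  return template.replace("$FILES", paths.join(" "));
}
```

So a profile with `testCommand: "npx vitest run $FILES"` and model-chosen paths `["src/profile.test.ts"]` yields a command scoped to exactly the changed surface, rather than the whole suite.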
None of this requires user configuration. The host reads what is already there.
The principle
A coding agent is two systems: a host and a model. The host handles deterministic execution. The model handles judgment. The boundary between them defines the product.
Every time you push a deterministic decision to the model, you pay twice: once in tokens, once in reliability. A model that forgets to run lint is another roundtrip. A wrong test command is a wasted generation plus a recovery cycle. These are not edge cases. They happen on every task where the host stays silent about what the project needs.
The host can run format and lint in seconds. It can provide the right test command scoped to the right files. The model needs a full generation cycle to even attempt any of this on its own. That is what the host is for.