Left to its default behavior, an AI agent works like a single-threaded process — finish one thing, start the next. That's fine when tasks depend on each other. But most of the time they don't. Writing three blog posts doesn't require finishing the first one before starting the second. I changed how Tim approaches this, and it made a real difference.
The Default Is Sequential
When you ask an AI to do multiple things, it will usually work through them one at a time. This feels natural — it mirrors how a person works. But a person works sequentially because their brain is one CPU. An AI agent using sub-agents doesn't have that limitation.
The tool that enables this already existed. Claude Code has an Agent tool that can spawn parallel sub-agents working on independent tasks simultaneously. The capability was there. What was missing was clear guidance on when to use it.
Without explicit rules, Tim would default to sequential work — not because it was better, but because it was the path of least resistance. Parallel execution requires actively deciding that tasks are independent, which is a judgment call that benefits from a defined policy.
The Rules I Gave Tim
The rules are simple. Parallelize when tasks are independent. Don't parallelize when they have dependencies.
Specifically, things Tim should parallelize:
- Independent research tasks — looking up documentation for two different systems at once, analyzing two different competitors simultaneously
- Edits across different projects — if I ask for changes to Loom and Documentor at the same time, those can happen in parallel since they're different codebases
- Running evals while writing content — the eval runner and the content generation are completely independent
- Creating content for multiple platforms — writing the English and Thai versions of a blog post, or creating content for multiple Facebook pages at once
Things that should stay sequential:
- Sequential dependencies — if step 2 needs the output of step 1, they can't run at the same time
- Same-file edits — two sub-agents editing the same file simultaneously will cause conflicts
- Work that needs approval between steps — if I need to review something before Tim continues, running the next step in parallel would bypass that checkpoint
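The policy above can be sketched in code. This is not Tim's actual Agent tool — just the scheduling rule expressed with Python's asyncio, using invented coroutine names: independent tasks fan out together, while a task that consumes another's output has to wait for it.

```python
import asyncio

async def research(topic: str) -> str:
    # Stand-in for an independent research sub-agent.
    await asyncio.sleep(0)  # simulate I/O-bound work
    return f"notes on {topic}"

async def write_post(notes: str) -> str:
    # Depends on research output, so it cannot start earlier.
    await asyncio.sleep(0)
    return f"post based on {notes}"

async def main() -> list[str]:
    # Independent tasks: fan out with gather (parallel).
    notes_a, notes_b = await asyncio.gather(
        research("competitor A"),
        research("competitor B"),
    )
    # Dependent task: runs only after its input exists (sequential).
    post = await write_post(notes_a)
    return [notes_a, notes_b, post]

results = asyncio.run(main())
print(results)
```

The shape is the whole point: `gather` for anything independent, a plain `await` chain for anything that isn't.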
There's also a transparency requirement: whenever Tim runs tasks in parallel, he tells me what's running simultaneously. This keeps me from wondering why things are happening out of order or why multiple outputs arrive at once.
What Actually Changed
The most obvious improvement is content creation sessions. When I ask Tim to create content for three different Facebook pages, he used to do them one by one — draft page A, post to database, draft page B, post to database, draft page C. Now he drafts all three simultaneously and serializes only the database writes, since those are the one place where conflicts are possible.
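That draft-in-parallel, write-in-sequence pattern looks roughly like this — a hedged sketch with made-up function names, not the real pipeline:

```python
import asyncio

async def draft(page: str) -> str:
    # Each draft is independent of the others.
    await asyncio.sleep(0)
    return f"draft for page {page}"

async def post_to_db(text: str, store: list[str]) -> None:
    # Writes to the shared store stay serialized to avoid conflicts.
    store.append(text)

async def main() -> list[str]:
    store: list[str] = []
    # Fan out: all three drafts run concurrently.
    drafts = await asyncio.gather(*(draft(p) for p in ["A", "B", "C"]))
    # Flow: database writes happen one at a time, in order.
    for text in drafts:
        await post_to_db(text, store)
    return store

posted = asyncio.run(main())
print(posted)
```

Only the contended resource (the database) forces ordering; everything upstream of it parallelizes freely.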
Research phases got faster too. When setting up a new page, there's often a phase where Tim needs to understand the niche — what competitors are posting, what topics perform well, what tone works. That research can happen in parallel across multiple sources instead of checking them one at a time.
In one session, I asked Tim to run quality evals on some recent Loom prompts while also writing a blog post. Previously he would finish the evals, then write the post. With parallel execution, both happen at the same time. The total work took roughly the same amount of compute, but the wall-clock time was cut in half.
Why This Wasn't Obvious Sooner
The capability existed before I wrote the rule. I just hadn't thought to make it explicit.
This is a recurring pattern in how I work with Tim. The AI can do things it never does by default because no one told it to. The gap between "can do" and "does" is often just a missing instruction. Writing that instruction down — in a skill file, in a startup checklist, in a CLAUDE.md rule — is all it takes.
It's similar to the self-upgrade behavior I built in. Tim didn't start checking for new Claude versions on his own. I added a step to the startup routine and now he does it every session. Small written rules turn optional behaviors into consistent ones.
The Limit
Parallel execution isn't always better. There's overhead to spawning sub-agents. For a single small task it's actually slower. And parallel agents don't share context — each one starts fresh — so if the task requires accumulated understanding from previous steps, parallel doesn't work.
The rule of thumb: if I'd naturally describe the tasks using "while" instead of "then," they're good candidates for parallel. "Write the English post while creating the Thai version" — parallel. "Research the niche then write the post based on what you found" — sequential.
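The "while vs then" heuristic maps directly onto two code shapes. Another illustrative asyncio sketch — the coroutine names are invented, only the structure matters:

```python
import asyncio

async def english_post() -> str:
    return "english post"

async def thai_post() -> str:
    return "thai post"

async def research_niche() -> str:
    return "niche notes"

async def write_from(notes: str) -> str:
    return f"post from {notes}"

async def main() -> tuple[str, str, str]:
    # "Write the English post WHILE creating the Thai version" -> gather.
    en, th = await asyncio.gather(english_post(), thai_post())
    # "Research the niche THEN write the post" -> chained awaits.
    post = await write_from(await research_niche())
    return en, th, post

outputs = asyncio.run(main())
print(outputs)
```

If the sentence describing the work contains "while", reach for `gather`; if it contains "then", keep the plain `await` chain.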
It's the same judgment call a team lead makes when assigning work to a team. Some work fans out, some work flows. Knowing the difference is most of the job.
When I watch Tim spawn three sub-agents to handle Thai, English, and Vietnamese content simultaneously, it still feels a bit surreal. That same capability is built into Jarvis — your agent can fan out work across multiple tasks at once, on your own server, without you having to manage the orchestration.
— Pond
