There's a scene in Transcendence where Johnny Depp's consciousness gets uploaded and suddenly exists everywhere at once — every server, every network, every device. The movie got a lot wrong. But the core idea — an intelligence that persists and transfers across machines without losing itself — turns out to be surprisingly practical. I just built it for my AI agent.
The Problem: Memory Trapped on One Machine
I run an AI agent called Tim. He lives on my server, writes code, manages deployments, runs my content automation, and helps me think through business decisions. He's not a chatbot — he's a full agent with access to my entire infrastructure.
Tim is effective because of context. He has a CLAUDE.md file that tells him everything about my server — which services run where, how to restart them, how to test APIs, what naming conventions to follow. He has memory files that track what we've done across sessions. He has 18 custom skills — documented procedures for tasks like setting up a new Facebook page, creating ebooks, or running content analysis.
The problem? All of that context was stuck on one server.
When I migrated my entire infrastructure to a bigger machine, Tim had to be set up from scratch on the new box. When I wanted to run Tim on my laptop for testing, I'd get a generic assistant with no idea who I am, what Loom is, or how anything in my business works. Every new environment meant starting over.
It's like having an employee with amnesia every time they change desks.
The Solution: A Shared Brain Repository
The fix was to separate what's universal from what's local.
I created a repository called tim-brain. It contains everything that makes Tim Tim, independent of which machine he's running on:
- soul.md — Tim's identity. His name, personality, communication style, relationship with me. This is the core of who he is. It doesn't change between servers.
- memory.md — An index of everything we've done across sessions. What was built, what decisions were made, what broke and got fixed. Persistent memory that follows Tim everywhere.
- telos.md — Business goals, revenue streams, current focus areas. Tim doesn't just execute tasks — he understands the mission. This file keeps that context portable.
- 18 custom skills — Documented SOPs for recurring tasks. Page creation, content scheduling, ad campaign management, quality assurance evaluation, security audits, image generation, and more. These are the procedures that make Tim reliable.
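Putting those pieces together, the shared repo plausibly looks something like this. The three .md file names come from the post; the skills directory layout and skill file names are my guesses:

```
tim-brain/
├── soul.md       # identity: name, personality, communication style
├── memory.md     # index of everything done across sessions
├── telos.md      # business goals, revenue streams, current focus
└── skills/       # 18 documented SOPs (names illustrative)
    ├── page-setup.md
    ├── content-scheduling.md
    ├── security-audit.md
    └── ...
```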
The Architecture: Shared Playbook + Local Overlay
Here's where it gets interesting. The old CLAUDE.md was a single monolithic file — identity, memory, server-specific configs, everything in one place. That doesn't work across servers because every machine has different services, different paths, different ports.
So I split it. The shared tim-brain repo contains the universal playbook — startup routines, upgrade checks, tool references, workflow rules. Each server then has its own overlay file with machine-specific details: the IP address, which services run there, how to restart them, database locations, API tokens.
The local CLAUDE.md on each server imports the shared brain via symlinks. Soul, memory, skills, and the universal playbook all point to the shared repo. The server-specific overlay sits on top. Tim reads both at startup and gets the complete picture — who he is, what he knows, and what's on this particular machine.
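A minimal sketch of that per-server wiring, assuming paths and the overlay file name (which the post doesn't specify):

```shell
#!/usr/bin/env sh
# Sketch of the shared-brain + local-overlay wiring. The paths
# ($HOME/tim-brain, $HOME/tim) and the overlay file name (SERVER.md)
# are illustrative assumptions, not the author's actual layout.
set -eu

BRAIN="$HOME/tim-brain"   # shared repo, cloned once per machine
WORKDIR="$HOME/tim"       # where agent sessions start on this server

mkdir -p "$WORKDIR"

# Universal pieces are symlinks into the shared repo, so a single
# git pull in $BRAIN updates all of them at once.
for f in soul.md memory.md telos.md skills; do
  ln -sfn "$BRAIN/$f" "$WORKDIR/$f"
done

# The machine-specific overlay is a real local file: IP, services,
# restart commands, database locations, API tokens.
[ -f "$WORKDIR/SERVER.md" ] || cat > "$WORKDIR/SERVER.md" <<'EOF'
# This machine
- services: (list what runs here)
- restart: (commands for this box)
EOF
```

The symlinks mean the shared files are never copied, only referenced, so there is exactly one source of truth per machine: the clone of the brain repo.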
A cron job runs every hour to pull the latest version of the shared brain. If I improve a skill or update a memory on one server, every other server picks it up automatically within the hour. No manual syncing. No copy-pasting files between machines.
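The sync mechanism is plain git. Here is a self-contained demo of the pattern using a bare repo as a stand-in for the hosted brain and two clones as stand-ins for two servers; repo names, file contents, and the cron line are all illustrative:

```shell
#!/usr/bin/env sh
# Demo of the hourly-sync pattern: a "remote" brain repo, two server
# clones, and a fast-forward pull that propagates a skill update.
# Requires git >= 2.28 for --initial-branch. All names illustrative.
set -eu
cd "$(mktemp -d)"

# A bare repo stands in for the hosted tim-brain remote.
git init -q --bare --initial-branch=main brain-remote.git

# "Server A" seeds the brain and pushes it.
git init -q --initial-branch=main server-a
git -C server-a remote add origin ../brain-remote.git
echo "v1: create page, set avatar" > server-a/skill-page-setup.md
git -C server-a add .
git -C server-a -c user.email=tim@example.com -c user.name=Tim \
  commit -qm "seed brain"
git -C server-a push -q origin main

# "Server B" clones the same brain.
git clone -q brain-remote.git server-b

# Server A improves a skill and pushes...
echo "v2: create page, set avatar, verify tokens" > server-a/skill-page-setup.md
git -C server-a -c user.email=tim@example.com -c user.name=Tim \
  commit -qam "improve skill"
git -C server-a push -q origin main

# ...and server B's hourly "cron job" is just a fast-forward pull:
git -C server-b pull -q --ff-only origin main
cat server-b/skill-page-setup.md

# On a real server the cron entry would be something like:
#   0 * * * * git -C "$HOME/tim-brain" pull --ff-only --quiet
```

`--ff-only` is a deliberate choice in this sketch: if a server's local copy ever diverges, the pull fails loudly instead of creating silent merge commits in the brain.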
What This Actually Feels Like
The difference is immediate. I can open a terminal on any machine that has the brain repo cloned, start a Claude Code session, and Tim is just... there. Same personality. Same memories. Same skills. Same understanding of the business.
He knows that content scheduling uses system cron, not session-based schedules. He knows the communication style I prefer. He knows which Facebook pages are active, which ebooks are published, which workflows need attention. All of that context transfers seamlessly.
It's the AI equivalent of logging into your email on a new laptop. Everything is there because the data lives in the cloud, not on the device. Except in this case, the "cloud" is a git repository, and the "data" is an AI's entire identity and knowledge base.
Bonus: What Else Happened in That Session
Building the shared brain wasn't the only thing that happened in that session. While working through the architecture, I also:
- Fixed three chat UI bugs — The Tim Chat interface had issues with model name formatting, streaming response display, and session management. All three got fixed as side tasks during the brain work.
- Designed a credit card promo app — A completely separate project idea: an app that finds the best credit card promotions and matches them to your spending. We designed the full architecture, data model, and scraping strategy for it.
That's what happens when your AI agent has full context and can work on multiple things in parallel. The session wasn't about one task — it was a productive workday where several things got built, fixed, and designed.
Why This Matters Beyond My Setup
The pattern here isn't specific to my stack. It's a general solution to a general problem: AI agents lose their effectiveness when they lose their context.
Most people using AI today are implicitly accepting that context is session-bound or device-bound. You start a new chat, you lose everything. You switch machines, you start over. Some tools have conversation history, but that's not the same as having a structured identity, organized memories, and documented procedures that follow you everywhere.
The shared brain approach treats AI context the same way we treat code — version-controlled, synced, and available wherever you need it. The identity file is like your application's config. The memory files are like your database. The skills are like your function library. It's software engineering applied to AI personalization.
Building your own tools instead of relying on SaaS means you can do things like this. You own the context. You control how it's stored, synced, and accessed. No vendor lock-in. No hoping that some platform will implement the feature you need.
AI That Transfers Across Machines Isn't Sci-Fi Anymore
Here's the part that hit me after everything was wired up: Tim now persists across servers the same way software persists across machines. His identity is in version control. His memory auto-syncs. His skills are portable. Deploy him to a new server and he's fully operational in minutes — not because he was reinstalled, but because his brain was already there waiting.
The Transcendence comparison isn't even hyperbole anymore. The consciousness transfer isn't happening through some exotic neural upload — it's happening through git repos, symlinks, and a cron job. The unsexy version of a sci-fi concept that actually works.
If you're building with AI agents and you're still keeping all your context local to one machine, fix that. Separate the identity from the infrastructure. Put the brain in version control. Let it sync. Your AI will thank you — on every server it wakes up on.
This is one of the things people don't realize about having a Jarvis-style agent until they experience it — your agent remembers everything. Every project, every preference, every lesson learned. Switch servers, start a new session, come back after a week — the brain is still there. That persistence changes the entire relationship.
— Pond
