Most people who get frustrated with AI are actually just talking to it wrong. Not because they're bad at prompting — because they're treating it like a person when they should be treating it like a system.
I've been running an AI agent on my server for over a year now. I've made all these mistakes. Here's what I learned.
The Frustration Trap
When AI makes the same mistake twice, the natural human response is to express that frustration: "You did the exact same thing AGAIN. I already told you this."
The problem is that expressing frustration accomplishes nothing. AI doesn't feel embarrassed and try harder. It doesn't "get" that its errors form a pattern. Each session, it comes in fresh. It doesn't remember being corrected last time unless you gave it a system to remember with.
Frustration is a signal that something needs to be fixed. But the fix is structural, not emotional.
Pattern 1: "You Forgot Again" → Write It to Memory
What you want to say: "You stupid thing! I told you a million times not to do this!"
What actually works: "Save this to memory.md so you don't forget next session."
Tim, my agent, has a memory file that he reads at the start of every session. If he keeps forgetting something, it's because it isn't in the memory file — not because he's being difficult. The solution is to write it down in a place he'll read.
This is the most important shift in how I work with AI. If something needs to be true consistently across sessions, it needs to be written down in a persistent file. Yelling at the AI doesn't persist. A line in memory.md does. And once you put that memory in a shared brain repository, it persists across machines too — not just sessions.
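To make that concrete, here's roughly what a few memory entries might look like. The filename, section title, and the last two items are my own illustrations (only the deletion rule comes up later in this post); any persistent file the agent reads at startup works the same way:

```markdown
<!-- memory.md: read at the start of every session -->

## Standing rules
- Always ask before deleting anything.
- Dates in filenames use YYYY-MM-DD.            <!-- hypothetical example -->
- The backup script is the one in ~/scripts/.   <!-- hypothetical example -->
```

One line per fact, stated as a rule rather than a story, so the agent can apply it without needing the history behind it.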
Pattern 2: "You Made the Same Mistake AGAIN" → Create a Skill File
What you want to say: "This is the third time you've done this wrong. WHY?"
What actually works: "Summarize everything we've done on this task so far" — review it together — then "write this to a skill file."
Some tasks have enough nuance that a memory file entry isn't enough. They need a full procedure. The right approach, the common pitfalls, the specific steps. That's what skill files are for.
When Tim sets up a new Facebook page, he follows a skill file. When he creates content in Documentor, he follows a skill file. The reason those operations are consistent isn't that Tim has internalized them — it's that the procedure is written down and he reads it every time.
If you find yourself correcting AI on the same complex task repeatedly, don't keep correcting. Document the correct approach in a file, tell the AI to reference it, and the problem goes away.
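For illustration, a skill file might look something like this. The task, structure, and wording are my own sketch, not a prescribed format and not one of Tim's actual files:

```markdown
<!-- skills/send-newsletter.md: a hypothetical skill file -->

## When to use
Any time a new issue of the newsletter goes out.

## Steps
1. Draft the issue in the usual template.
2. Run the link checker before publishing.
3. Publish, then confirm the archive page updated.

## Pitfalls
- The scheduler shows UTC, not local time.
- Never edit the archive page by hand; it's generated.
```

The "Pitfalls" section is where the repeated corrections go: each mistake you've had to fix twice becomes a line the agent reads before it starts.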
Pattern 3: "Wait, Stop, Don't Do Anything Yet" → Plan Mode
What you want to say: "STOP! Don't start working yet! I just want to talk through the plan first!"
What actually works: "Enter plan mode. Don't start working until I tell you to exit plan mode."
AI agents are biased toward action. Give them a task and they start executing. This is usually good, but sometimes you want to think out loud first, discuss tradeoffs, make sure you're aligned before anything gets built or changed.
Plan mode is a simple instruction that tells Tim: think and discuss, don't execute. He'll outline what he would do, ask clarifying questions, surface risks — but he won't touch any files or run any commands until I explicitly tell him we're done planning.
This has prevented a lot of situations where I gave an ambiguous instruction and Tim went off and built the wrong thing for thirty minutes.
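If your agent's tooling doesn't ship with a plan mode, you can approximate one with a standing rule in the memory file. The wording below is illustrative, not the exact rule I use:

```markdown
## Plan mode
When I say "enter plan mode":
- Discuss the approach, outline steps, ask clarifying questions.
- Do not edit files and do not run commands.
- Stay in this mode until I explicitly say "exit plan mode".
```

The point is that "don't act yet" becomes a named state with a clear exit, instead of an ad-hoc plea you have to type faster than the agent can start working.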
Pattern 4: "WHY DID YOU DELETE THAT?!" → Write the Constraint
What you want to say: "You just deleted something I needed and now it's gone forever!"
What actually works: "Write in memory.md: always ask before deleting anything."
AI agents can be aggressive with cleanup. They see something that looks unused and they tidy it up. Which is helpful, except when it isn't.
Constraints need to be explicit. "Don't delete without asking" isn't something AI infers from context — it needs to be stated. Once it's in the memory file, Tim checks before deleting. The behavior changes immediately and stays changed.
The Actual Skill: Turning Frustration Into Structure
Every time you get frustrated with AI, there's a structural problem underneath. The question to ask isn't "why did it do that?" but "what system is missing that would prevent this?"
- Keeps forgetting something → memory file entry
- Keeps doing a complex task wrong → skill file with the correct procedure
- Starts building before you're ready → plan mode
- Does something destructive you didn't want → constraint in memory file
- Goes in the wrong direction entirely → checkpoint: "summarize what you're about to do before you do it"
This is what building your own infrastructure actually looks like in practice. Not just writing code, but writing the operating procedures and constraints that make AI reliable over time.
It Takes Some Time to Build
I won't pretend this all happened overnight. Tim's memory file and skill library built up gradually over dozens of sessions. Each time I noticed a recurring problem, I added a line or a file. Each time he made a mistake I didn't want repeated, I documented the right way.
The result is an agent that makes far fewer systematic errors now than he did a year ago — not because the underlying AI got better (though it did), but because the operating context got better. He has more constraints, more procedures, more memory.
And when he does make a mistake, I don't get frustrated. I add it to the system. The system gets a little better. We move on.
All those files I mentioned — memory, skills, startup routines — took me months to build from scratch. That's why I pre-configured all of it in Jarvis. You start with an agent that already has memory, skills, and plan mode set up. You skip the painful learning curve and go straight to the productive part.
— Pond
