If AI feels inconsistent even after you’ve tried the prompt tricks, you’re not doing it wrong—you’re missing a system.
The system problem
Marcus Sheridan saved his pool company during the 2008 recession by doing something his competitors wouldn’t: answering the questions buyers were actually asking. How much does a pool cost? What are the problems? Who shouldn’t buy one?
That approach became They Ask, You Answer, and later evolved into Endless Customers—a framework built on a simple premise: trust comes from systems, not tactics.
Sheridan is explicit about this. When describing Endless Customers, he said he wanted to create “a system, like iOS, that someone can follow. Not just a bunch of theory, not just a bunch of recommendations thrown out there. A really clear system.”
The four pillars of that system: say what others won’t say, show what others won’t show, sell in ways others won’t sell, and be more human than others are willing to be.
These aren’t tips to try. They’re constraints on how a business must behave—consistently, over time—to earn trust.
I’ve been thinking about how the same logic applies to AI.
The prompt plateau
If you’ve been using AI for a while, you’ve probably hit this point.
You’ve tried the prompt techniques. Be specific. Add context. Give examples. Think step by step. The results improved—for a while. Now they’re inconsistent again. What worked last week doesn’t work today. You’re not sure which version of your prompt is “the good one.”
You’ve been optimising, but you’re not building anything.
This is where most AI advice fails you. It focuses on prompts—write them better, add more detail, use clever techniques. That’s the equivalent of marketing tips: try this headline formula, post at this time, use these keywords.
Tips work occasionally. But they only compound if you add structure around them—standards, review, versioning. At which point you’ve quietly built something else entirely.
Prompts are tactics. Protocols are systems.
A prompt is an instruction. It tells AI what to do right now.
A protocol is a system. It defines how work must behave—not once, but every time. It encodes constraints, captures what typically goes wrong, and improves through use.
The parallel to Endless Customers is direct. Sheridan’s pillars aren’t suggestions—they’re non-negotiables that shape every piece of content, every sales conversation, every customer interaction. A protocol does the same thing for AI work.
What a protocol actually contains
A protocol captures what you’ve learned and makes it reusable:
- Constraints — what must always be true about the output
- Failure modes — what typically goes wrong and how to catch it
- Versioning — what changed between iterations and why
- Evaluation criteria — how to judge whether the result is good enough
When output fails a protocol, you know exactly where it failed. When output fails a prompt, you just rewrite the prompt. Again.
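The four parts above can be sketched as structured data rather than prose. This is a minimal, hypothetical sketch — the field names, checks, and `Protocol` class are illustrative, not a real tool's API, and most real evaluation criteria stay human judgement rather than automated checks:

```python
from dataclasses import dataclass

@dataclass
class Protocol:
    version: str               # versioning: which iteration this is; changes tracked over time
    constraints: list[str]     # what must always be true about the output
    failure_modes: list[str]   # what typically goes wrong, named so it can be caught in review
    evaluation: list           # (name, check) pairs that decide "good enough"

def review(draft: str, protocol: Protocol) -> list[str]:
    """Return the names of the evaluation checks this draft fails."""
    return [name for name, check in protocol.evaluation if not check(draft)]

# Illustrative protocol with two simple, automatable checks.
content_protocol = Protocol(
    version="1.3",
    constraints=["States a clear position in the first paragraph"],
    failure_modes=["Softened language", "Specifics replaced with generalities"],
    evaluation=[
        ("has a headline", lambda d: d.lstrip().startswith("#")),
        ("long enough to say something", lambda d: len(d.split()) >= 300),
    ],
)

failed = review("Draft text...", content_protocol)
# → ["has a headline", "long enough to say something"]
```

The point isn't the code — it's that a failed check has a name, so you know exactly where the output fell short instead of rewriting the whole prompt.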
Why this matters for trust
Here’s where it connects back to Sheridan’s framework.
One of the Endless Customers pillars is “show what others won’t show”—radical transparency about your process, your pricing, your limitations. The businesses that do this build trust. The ones that hide behind vague claims don’t.
If you’re using AI in your work, your process is part of what you’re selling. Clients may not ask directly, but they’re wondering: Is this consistent? Can I rely on it? Will the quality hold if the person I’m working with is busy, or hands it to someone else?
A protocol is how you answer those questions—not with assurances, but with structure. It’s the difference between “trust me, I’m good at this” and “here’s the system that ensures consistency.”
This is the same shift Sheridan teaches for marketing. Stop relying on your ability to be clever in the moment. Build systems that work whether you’re having a good day or not.
How I built one
The protocol I use for content creation took roughly 15 hours across multiple working sessions to develop. It went through five versions before becoming genuinely useful.
The first version tried to prescribe too much—it wanted complete answers upfront for every editorial decision. It failed because real writing doesn’t work that way. You discover what you’re saying as you write.
The second version swung too far the other way. Too vague to be useful.
Version 1.3 finally got the balance right by separating what must always be true (constraints) from what should be explored during the work itself (questions). That distinction took three rewrites to figure out.
The protocol also includes a section called “Known Failure Modes.” It exists because I kept making the same mistakes: softening language to avoid discomfort, replacing specifics with general advice, producing content that neither attracts nor repels anyone.
Writing those down didn’t stop the failures immediately. But it made them visible. Now when I review a draft, I check against the list. The failures happen less often because they’re named.
The first post I wrote using the finished protocol took half the revision cycles of the one before it. That’s when I knew the 15 hours had paid off.
From prompt to protocol
If you already have a prompt that mostly works, you’re closer than you think.
The shift isn’t starting over. It’s adding structure to what you’ve already learned. A protocol asks: what do I know about this task that isn’t written down yet?
Most people discover they know a lot. They know what bad output looks like. They know which corner cases cause problems. They know what they check for before they trust a result.
None of this is in the prompt. A protocol is where it goes.
A starter move: Take your best working prompt. Add three lines above it:
- Output must… (your three non-negotiable constraints)
- Common failures… (three things that typically go wrong)
- Good enough when… (three checks that tell you it’s ready)
That’s a protocol seed. It won’t be complete, but it’s the start of something that compounds.
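In practice, the seed sits above your working prompt every time you use it. A minimal sketch, assuming hypothetical seed lines and a placeholder prompt — your actual constraints and checks would be your own:

```python
# Hypothetical protocol seed: the three lines that travel with every use of the prompt.
SEED = """\
Output must: state a clear position; include at least one concrete example; keep my vocabulary.
Common failures: hedged language; generic advice; claims without specifics.
Good enough when: a stranger could act on it; nothing can be cut without loss; it sounds like me.
"""

# Placeholder working prompt, standing in for whatever already works for you.
WORKING_PROMPT = "Write a 600-word post about <topic> for <audience>."

def build_prompt(topic: str, audience: str) -> str:
    """Combine the seed (the system) with the task prompt (the instruction)."""
    task = WORKING_PROMPT.replace("<topic>", topic).replace("<audience>", audience)
    return SEED + "\n" + task
```

The seed changes rarely and deliberately; the task prompt changes per job. Keeping them separate is what lets the seed compound.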
You can even ask AI to help with the translation. Share your working prompt and say: “Help me turn this into a structured protocol with explicit constraints, known failure modes, and evaluation criteria.” The AI won’t know your standards—but it can help you articulate them.
The overhead question
Building a protocol takes more time upfront than writing a prompt. That’s true.
Mine took roughly 15 hours, which surprised me. But it now pays back every time I write, and the protocol keeps improving as I use it.
Prompts rarely compound on their own. The time you spend tweaking them doesn’t accumulate into anything transferable.
Here’s the trade-off worth considering: I’ll use this protocol for every piece of long-form content for the next year or more. The time isn’t a cost—it’s infrastructure.
Who this isn’t for
If you use AI occasionally for one-off tasks—quick research, brainstorming, casual questions—protocols are overkill. Prompts are fine.
If you’re happy with “good enough” results and don’t need to hand your process to anyone else, the ad-hoc approach works.
If you believe the answer is always “write a better prompt,” you’ll find this framing frustrating.
This is for people who’ve hit a ceiling:
- You’re the bottleneck. Quality depends on your mood, your energy, your attention. The system can’t run without you at full capacity.
- You want to delegate. Someone else needs to hit your standard, but you can’t explain what that standard actually is.
- You want traceable improvement. You need to know why output got better, not just that it did.
If that’s not your situation, you don’t need this.
How I use this in client work
This isn’t just how I write for myself. It’s how I approach work for clients.
When I build AI-assisted workflows—for content, for operations, for internal tools—the deliverable isn’t a prompt. It’s a protocol: documented, versioned, testable. Something that works whether I’m involved or not.
That’s what “show what others won’t show” looks like in practice. Not just transparency about price or process, but transparency about how the system actually runs.
If you’re thinking about how to make AI work reliably in your business—not just for one-off tasks, but as infrastructure—that’s the conversation I’m set up to have.
What comes next
This is the first in a series applying Endless Customers thinking to AI work. The starting point is the shift in mindset: not “how do I prompt better?” but “what system would make this reliable?”
That’s the same question Sheridan asks about marketing. It turns out it applies just as well to AI.