Most people have the same first experience with AI.
You open a new account. You ask a perfectly reasonable question. And what comes back is… odd. Sometimes it’s a confident hallucination. Sometimes it’s a vague summary that avoids your point entirely. Sometimes it’s technically correct but still useless. Sometimes it answers the question you didn’t ask.
It feels like speaking into nothingness — as if your words are falling into a void.
It’s frustrating, especially when you’re a capable professional who knows exactly what you meant. You didn’t ask for nonsense. You didn’t ask for clichés. You didn’t ask for a Wikipedia rehash. You asked for help — and the response feels random, unpredictable, or just wrong.
Everyone assumes the same thing at first:
“I must be prompting incorrectly.”
That’s the common advice. Better prompts. Clever prompts. More descriptive prompts. More structured prompts.
Yes, prompting matters — but prompting alone doesn’t explain the void.
And prompting alone will never fully fix it.
There’s a deeper cause, and once you understand it, your entire relationship with AI starts to change.
The void is real — and you start inside it
Here’s the part almost nobody explains:
When you first sign up to any AI system, it knows absolutely nothing about you.
Nothing about:
- what you value
- your level of expertise
- your tone
- your field
- your expectations
- your preferences
- your definitions of “good”
- your purpose
- your writing style
- your working style
- your projects
- the outcomes you care about
It has a vast general capability — but zero personal context.
Your new account has no identity.
No history.
No shared language.
No established frame of reference.
It is a pure void.
So when you ask your first question, the system has to guess, and it guesses based not on your intention but on the broadest statistical patterns it has.
In other words:
It gives you the most generic response it can, because it has no idea who you are.
This is why the early answers feel so strange.
They aren’t wrong because the model is flawed.
They’re wrong because you’re starting a conversation with something that doesn’t know you exist yet.
Why the guessing feels so chaotic
Without shared context, an AI system cannot interpret your meaning — only your words. And because your words arrive stripped of identity, background, and purpose, the system fills in the gaps using probability.
That’s where hallucinations, irrelevant angles, and weird tangents come from.
But there’s another layer people often miss:
Its guesses are only as good as your prompts.
If your prompts are:
- quick
- shallow
- scattergun
- inconsistent
- written in a rush
- missing context
- lacking a defined purpose
…then the system is forced to guess even harder.
And with no personalisation to anchor those guesses, the results become even more erratic.
This is why early interactions feel so unstable.
You’re sending unstructured signals into an unshaped system.
Two voids communicating with each other.
If you want specialist output, you need specialist prompts
This is where many people accidentally set themselves up for disappointment.
If you want AI to act as a:
- copywriter
- strategist
- HR consultant
- analyst
- developer
- marketer
- coach
- editor
- teacher
- researcher
- or subject-matter specialist
…then your prompts must consistently anchor the system in that role.
If they don’t, the model defaults to generic behaviour: the lowest common denominator. A prompt like “Act as a senior B2B copywriter and tighten this landing page for time-poor buyers” anchors the role; “make this better” leaves the system guessing.
This is why the first few conversations matter so much: they set direction.
But — and this is the part too many people miss — they don’t lock you in.
You can change direction at any time
One of the most encouraging truths about AI is this:
You’re not stuck with the version of the system you started with.
You can always:
- shift its role
- tighten its focus
- define your expectations
- clarify your voice
- improve your instructions
- restructure your prompts
- add missing context
- correct its assumptions
- give it examples
- establish new working norms
- raise your standards
And the system will adapt.
As your prompts become clearer and more intentional, the void begins to fill.
The system starts learning your patterns.
Your language.
Your style.
Your expectations.
Your purpose.
Prompting matters — but not because of “magic words.”
It matters because each prompt is another step toward creating shared context.
The void isn’t a bug. It’s the beginning.
The early stage — the mismatch, the weird answers, the hallucinations — isn’t a sign that AI is unreliable.
It’s a sign that the relationship is unformed.
You’re speaking from your world.
The system is responding from no world at all.
As you begin to shape it — with clearer purpose, structured prompting, consistent expectations, and more intentional context — the void disappears. And something else replaces it:
A model shaped around you,
responding in your language,
with your standards,
aligned to your goals.
That is the moment when AI stops being an unpredictable tool and starts becoming a reliable partner.
Where this is heading
This article is the first in a series.
Now that we’ve explored the void — the origin state of every AI relationship — the next step is to look at what fills it, and how to fill it well:
- how to encode yourself into an AI system
- how to build shared language
- how to establish roles
- how to create consistency
- how to design co-evolution
- and how to intentionally avoid shaping a “bad” version of the system
Your system can become powerful, consistent, and deeply useful — but only if you understand how to shape it.
And the shaping starts here:
with recognising the void and choosing what you want to fill it with.

