Essay · 2026
The Model Is Not the Moat
If frontier capability keeps centralising, the durable edge shifts outward into trust, workflow fit, and the surrounding package.
I keep hearing the same assumption underneath AI strategy talk:
the winner will be whoever has the strongest model.
Smarter model wins. Everything else is secondary.
That sounds plausible right up until you look at how people actually choose tools in real life.
If scaling laws keep holding, local models probably will not beat the frontier on raw intelligence.
But users do not adopt “raw intelligence.” They adopt something that fits into their day.
They adopt the thing that feels fast enough, private enough, legible enough, reliable enough, cheap enough, and integrated enough that using it becomes natural.
That is the first distinction that matters.
The benchmark measures the visible product.
The moat forms one layer out, in the surrounding package.
The Product And The Seat
This is where a lot of AI debate still feels strangely naive. People compare models as if users experience them as isolated intelligence engines.
Most of the time they do not.
They experience a bundle: interface, defaults, permissions, speed, memory, privacy, cost, and how much the tool asks them to rearrange their behaviour.
That bundle is where trust accumulates.
It is also where switching costs quietly form.
A frontier model can win the benchmark and still lose the seat.
By “seat” I mean the position a product earns inside a user’s actual workflow. The place where their context lives. The place they trust not to embarrass them, leak data, slow them down, or force them to relearn everything.
The product is the thing they evaluate.
The seat is the thing they get used to living in.
And those are not the same asset.
You Can Already See It In Coding Tools
A model that is slightly worse on a public benchmark can still be the one people prefer if it lives inside the editor, sees the repo, responds instantly, keeps sensitive code local, and fits the way they already work.
The model may be weaker in the abstract.
The package is stronger where it counts.
That gap matters because people do not make adoption decisions in the abstract. They make them under workflow pressure. They choose the tool that keeps momentum, feels trustworthy, and does not introduce a new category of risk.
This is also why “just use the best model” is often bad product advice. The best model according to a leaderboard may carry the wrong latency, the wrong privacy posture, the wrong integration burden, or the wrong failure mode for the actual job.
How Moats Actually Form
Apple is the familiar version of this dynamic outside AI. The moat was never just one visible feature. It was the package: ecosystem fit, convenience, defaults, identity, and the low-grade friction of leaving.
The product got attention.
The package became hard to leave.
I think a lot of AI products will work the same way.
Moats usually form in the residue, not in the headline claim.
In habit.
In muscle memory.
In stored context.
In predictable behaviour.
In the feeling that this tool understands how you work and does not make you pay a tax every time you use it.
That is the important shift. A lot of the value is created by side-effects of repeated use, not just by the explicit capability being marketed.
What This Means For Builders
Local models do not need to win the intelligence race to matter.
They can win a different race entirely: trust, control, governance, latency, privacy, and workflow fit.
That is a more durable position than it sounds.
Once a tool becomes the place where your context lives, your defaults settle, and your work starts to flow, a better benchmark somewhere else is not enough to dislodge it.
If I were building in this market, I would treat that as a design rule:
Do not ask only, “How do we make the model look stronger?”
Ask:
- Where does the user feel risk right now?
- What part of the workflow still feels awkward or fragile?
- What context would make the tool more useful after 30 days than on day 1?
- What would make leaving this product feel expensive in a good way?
That is how you build the seat.
Not by winning one benchmark snapshot, but by creating a package that compounds through use.
The Real Question
This is the distinction I keep coming back to:
The product is what gets measured once.
The seat is what gets harder to leave over time.
That is why packaging matters.
Not because it disguises weakness.
Because it is where practical advantage compounds.
If frontier capability stays centralised, that is the strategic question for everyone else.
What seat are you building that people will not want to leave?