I Read 3,000 Papers Across 12 Fields. Five Patterns Kept Appearing.

Essay · 2026

Every field discovers them independently. Nobody connects them.

Harry Floyd

Same AI model. 6.7% accuracy.

Change the interface around it. 68.3%.

The researchers didn’t touch the model. They changed the format it used to express edits. The model had been reasoning correctly the whole time. It just couldn’t express its answers without corrupting them.

The failure looked like stupidity. It was a formatting problem.
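Here’s the shape of the failure as a toy sketch. The formats below are hypothetical (the paper’s actual ones differ), but the mechanism is the same: one encoding gives a correct answer something to snag on; the other doesn’t.

```python
# Toy sketch, hypothetical formats: the same correct fix, two encodings.

original = "def add(a, b):\n    return a - b\n"

# Format 1: a search/replace block. The model reconstructed the old line
# from memory and dropped the spaces around the minus, so nothing matches.
search = "    return a-b\n"
replace = "    return a + b\n"

def apply_patch(src: str, search: str, replace: str) -> str | None:
    """Strict applier: the search block must match byte-for-byte."""
    return src.replace(search, replace) if search in src else None

# Format 2: emit the whole corrected file. Nothing to mismatch.
whole_file = "def add(a, b):\n    return a + b\n"

print(apply_patch(original, search, replace))  # None -> scored as a failure
print(whole_file)                              # correct -> scored as a pass
```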

I would have ignored this if I hadn’t seen the same pattern in four other fields that week.


I run a research system that cross-references everything it reads. 3,000+ sources across AI, neuroscience, markets, biology, history, and physics. Most of the time it produces noise. But occasionally it surfaces something no single field can see on its own.

Five patterns kept appearing. Same structural dynamic, same failure modes, same counterintuitive outcomes. Independently. Across twelve domains.

This matters now. We’re in the middle of the largest capability explosion in history, and most of the responses to it are making things worse. The patterns governing what happens next are invisible from inside any single field. You have to hold them all in the same frame.

Each one comes with a question you can use immediately.


Law 1: The Bottleneck Migrates. It Never Disappears.

Most people believe success is straightforward: find the bottleneck, fix it, win.

The bottleneck moves.

The model story above is the cleanest example. The bottleneck wasn’t reasoning capability. It was the verification layer around the model. Fix the interface, and the “dumb” model becomes the best model on the benchmark.

In mathematics, proof assistants like Lean matter not because they generate proofs but because they verify them. Proof generation is getting cheaper. Proof verification is the binding constraint. Terence Tao has been making this point for years.
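You can feel the asymmetry in a toy Lean 4 example (mine, not Tao’s). Writing the proof term is the cheap half; the kernel’s type-check is what turns it into knowledge:

```lean
-- Generation is cheap; verification is the product. Lean accepts this
-- only because the kernel checks that the proof term type-checks:
example (n : Nat) : 2 * n = n + n := by
  omega

-- Point the same tactic at a false goal and it fails to compile.
-- The verifier, not the generator, is the part you can trust:
-- example (n : Nat) : 2 * n = n + 1 := by omega   -- error
```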

In content, AI drives the cost of producing text toward zero. So the bottleneck migrates from writing to editing. From editing to taste. From taste to distribution. From distribution to trust. Each solution creates the next scarcity.

In markets, information became free decades ago. The bottleneck migrated from access to interpretation, then from interpretation to execution discipline. The binding constraint for most traders isn’t finding an edge. It’s sitting still long enough to let it work.

Whatever just became easy is no longer where the value is. If you’re still optimizing there, you’re solving yesterday’s problem.

Ask: Where is the bottleneck migrating to in your system? Not where it is now. Where it’s going.


Law 2: Difficulty Is Load-Bearing.

People believe friction is waste. That making something easier always makes it better. That removing the hard parts is progress.

Sometimes it is. But sometimes you’re knocking out a load-bearing wall, and you won’t find out until the roof comes down.

Rome’s republic died this way. Military victories brought wealth. Wealth destroyed the citizen-soldier model that had won the wars. External pressure had been producing internal cohesion. Success removed the load-bearing difficulty, and the structure collapsed. It took decades to notice.

The same delay happens everywhere. Students use AI to skip the struggle of working through a problem. The answer arrives faster. Feels like progress. But the struggle was doing two jobs: producing the answer and building the ability to produce future answers. Remove the struggle, you keep the first and destroy the second. You won’t feel it until six months later when you can’t solve a new problem without the tool.

In markets, the discipline to wait through a drawdown is the hardest part of any strategy. It’s also where all the returns come from. Traders who automate their system but don’t understand why the waiting matters override it at exactly the wrong moment.
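A toy simulation makes the cost visible. The numbers are invented, not a real strategy: a system with a small positive edge, run with and without the override “bail when the account is 5% off its peak.”

```python
# Toy simulation, invented numbers: a positive-expectancy system,
# with and without the human override at the point of maximum pain.
import random

def run(trades: int, bail_on_drawdown: bool, seed: int = 1) -> float:
    rng = random.Random(seed)
    equity = peak = 1.0
    for _ in range(trades):
        # 40% win rate, winners pay 3x what losers cost: edge ~ +0.6%/trade
        equity *= 1.03 if rng.random() < 0.40 else 0.99
        peak = max(peak, equity)
        if bail_on_drawdown and equity < 0.95 * peak:
            return equity  # override fires on an ordinary losing streak
    return equity

print(run(1000, bail_on_drawdown=False))  # sits still; the edge compounds
print(run(1000, bail_on_drawdown=True))   # almost always exits early
```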

The hardest version of this to accept: the difficulty might be the thing producing your skill, your judgment, your edge. Remove it because it feels like waste, and you lose the thing you can’t see and can’t measure.

Ask: If I remove this difficulty, what quality-control function disappears with it?


Law 3: Architecture Outlives Content.

People invest in what they produce. The feature. The post. The deliverable. The thing they can point to and say “I made that.”

Content turns over. The thing that persists is the structure underneath it.

The scaffolding protein KIBRA sits at brain synapses and doesn’t move. The kinase that does the actual work, PKMζ, degrades and gets replaced constantly. But KIBRA maintains the pattern that tells new molecules where to go. Content turns over. Scaffold persists. Function is maintained.

This is how your memories work. And it’s how everything else works too.

In business, the tech stack you use today will be replaced within five years. But the context you accumulate (your understanding of customers, your domain knowledge, your organisational instincts) compounds over time. Your context compounds. Your tools depreciate.

The unsettling implication: most of what you produce this week won’t matter in a year. But the system you build for producing it will. The code doesn’t matter. The architectural judgment does. The post doesn’t matter. The publishing system does.

Ask: Will this work be more valuable in six months? If yes, you’re building scaffold. If no, you’re producing content. Know which one you’re doing.


Law 4: Knowledge Is Constrained by Instruments, Not Theory.

People believe that when they’re stuck, they need to think harder. Read more. Refine the theory. Understand the problem better.

Almost always wrong. When you’re stuck, you have an observation problem, not a thinking problem.

In insurance, actuaries had sophisticated risk models for decades. They could only price what they could observe. Then telematics arrived. Devices that measure actual driving behaviour. The models didn’t change. What was observable changed. The instrument created the knowledge.

In AI, teams know their models have failure modes. They theorise about what’s going wrong. But without the right evaluation instrument, the theories stay untestable. The measurement science is the constraint. Not the model. Not the theory.

And here’s the twist: building the instrument changes what you’re observing. Start measuring driving behaviour, drivers change their behaviour. Start evaluating a model on a specific benchmark, developers optimise for that benchmark. Observation is not neutral. Every new instrument introduces reflexivity.

Even a bad instrument teaches you more than a perfect theory you can’t test.

Ask: What’s the cheapest experiment that would make one piece of the hidden structure observable?


Law 5: Capability Without Correct Targeting Makes Things Worse.

This is the law that connects the other four. And the one most people are violating right now.

People believe the problem is “not enough.” Not enough power, not enough data, not enough features, not enough effort. So they add more. Things get worse. They assume they need even more.

The problem is almost never “not enough.” It’s “aimed wrong.”

ADHD is not an attention deficit. People with ADHD can hyperfocus for hours on the right task. The capacity is there. The targeting mechanism is dysregulated. Treat it as a deficit (more stimulation, more alerts, more information) and it gets worse. Treat it as a regulation problem (structured environment, fewer options) and it gets better.

Same structure everywhere. A more powerful model doesn’t help if the verification layer is the bottleneck (Law 1). More data doesn’t help if the evaluation instrument is broken (Law 4). More features don’t help if the architecture is wrong (Law 3). More effort doesn’t help if the difficulty you’re fighting is load-bearing (Law 2).

Right now, most people are adding AI capability to processes aimed at the wrong level of abstraction. Making things faster that shouldn’t be done at all. Upgrading the engine when the steering is broken.
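Here’s the law reduced to a toy gradient descent (my illustration, not from any of the source fields): the real goal is x = 3, but the objective that got wired in points at x = −3. Adding capability, a bigger step size, just arrives at the wrong place faster.

```python
# Toy illustration: power applied to a mis-aimed objective.
# True goal: minimise (x - 3)^2. Objective actually wired in: (x + 3)^2.

def descend(step_size: float, steps: int = 50) -> float:
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x + 3)        # gradient of the WRONG objective
        x -= step_size * grad
    return x

for power in (0.01, 0.1, 0.4):    # "more capability" = bigger steps
    x = descend(power)
    print(f"step={power:<5} x={x:+.3f}  distance from real goal: {abs(x - 3):.3f}")
```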

Ask: Do I have enough capability? (Almost always yes.) Is it aimed at the right target?


The Convergence

These five aren’t independent. They’re one system.

The bottleneck migrates upward because the hard layers resist commodification. What persists across those transitions is the scaffold. We can only see any of this because each analysis is itself an instrument. And the whole thing breaks when capability gets added without checking whether the target moved.

One sentence:

Value concentrates wherever resistance to commodification is highest.

That locus migrates as lower layers are solved. The scaffold persists. Knowledge advances through new instruments. Capability without correct targeting makes things worse.

You might reasonably think these are just metaphors. The same words applied to different things. That’s what I thought, until I watched the same structural dynamic produce the same failure modes in fields that have never heard of each other. Neuroscientists and traders and AI engineers, independently, making the same mistake for the same structural reason.

That’s not a metaphor. That’s a pattern.


Five Questions, Under a Minute

Run these whenever you’re planning, stuck, or about to commit to something significant:

Where is the bottleneck migrating? Don’t optimise what just became abundant.

Is this difficulty load-bearing? Before removing friction, check whether it’s the wall.

Am I building scaffold or content? Invest in what compounds.

What instrument am I missing? Build the observation tool, not a better theory.

Am I aimed at the right target? Before adding power, check direction.


This is the first in an occasional series on cross-domain patterns from a research system that reads more papers than I do. If one of these laws describes something you’ve seen in your own field, I want to hear about it.


If a single argument here changed what you were about to trust, the highest-leverage move is to subscribe on Substack. One piece a week, no filler.