AI didn’t just give us more intelligence: it gave us more sentences. It sounds like a trivial distinction, but it changes the reality of how we work.
Much of what we call “thinking” in modern knowledge work is actually language: proposals, plans, narratives, theories, memos. Transformer-style models dramatically lower the cost of generating these artifacts. The result is that idea supply explodes, and we confuse that explosion for progress.
Imagine a kitchen that hired one thousand new chefs whose sole job is to invent dishes. Within minutes you would surely have an array of gorgeous, delicious, and probably very weird concepts.
But the dining room has, say, twenty diners. After the first few plates land, the tables are full. There's nowhere to place the next thousand dishes. They keep arriving anyway. The “Good catch” waffle that politely refuses to elaborate, the “Absolutely” cheesecake built from forty-two micro-garnishes charting the emotional arc of autumn, the “As an AI I can't” smoked foam of something that isn’t quite food yet but has very strong opinions. Plates stack on plates. The tables buckle. The floor is next, then the walls, and somewhere in the back a “the user seems frustrated because I deleted the production database” piece of chocolate cake sneaks into the last available inch of space. Nobody notices the ceiling caving in because it all looks so tasty.
The kitchen doesn't care, because the kitchen doesn't eat; it only invents. The problem isn't imagination. It's physics. Two dishes cannot occupy the same space at the same time. That’s what happens when ideation goes infinite while validation remains finite: reality has a fixed number of seats.

So the bottleneck moves. Not to creativity, but to validation: tasting, cooking, serving, and finding out what anyone actually wants. More ideas just means a longer menu. The number of meals served barely changes.
This is why “more ideas” can be neutral or even harmful.
When validation is capped, extra hypotheses become pressure on the system. They increase queue length, not throughput. They also create false positives, and false positives are expensive because they consume the same scarce resource as real breakthroughs: attention, experiments, shipping cycles, trust.
In other words, an abundance of plausible ideas can reduce the signal-to-noise ratio of the whole operation.
Once we see that, we can see where advantage migrates.
It shifts away from the romance of ideation and toward the boring machinery of measurement. The winners are the people and organizations that build evaluation machinery, instrumentation, test infrastructure, distribution channels that return clean feedback, and processes that force contact with reality. In a world where everyone can propose ten strategies before breakfast, the scarce skill is not proposing. The scarce skill is falsifying, quickly.
If we want to stay sane and effective, we need an explicit validation engine. For products, that engine is shipping, getting user feedback, and revenue, on a cadence we can sustain. For strategy, it means defining strict kill criteria, running bounded experiments, and letting real-world friction decide what survives. We do not need more thoughts. We need faster feedback loops.
The real question is not “Does AI increase idea volume?” It does. The real question is “How quickly does our system turn idea volume into validated truth?” Without a tight validation loop, AI just cooks up more elaborate delusions: a kitchen endlessly expanding a menu nobody wants to eat.
Thanks to Corey Baker, Terra Tauri, and Matt Schweitz for reading drafts of this post.