The Shape Is Perfect
It Still Tastes Like Shit
Part of a Series
This is a short companion piece. Part I - Vibe Coding: A Monkey Behind the Controls. Part II - The Open Source Trap.
Scientists secured $10 billion from investors to turn shit into bread.
A year later, investors asked for a progress update.
"Great news - the shape is perfect now. Still tastes like shit though. But we're halfway there. We just need another $10 billion to finish the job."
This is the entire LLM industry in one joke.
The Shape
The output looks right. The code compiles. The analytics report is beautifully formatted. The email is grammatically flawless. The pull request passes CI. The generated architecture diagram has all the right boxes and arrows.
The shape is perfect.
In business environments, hallucinated reports and analytics have already led to flawed decisions and financial losses. In the legal world, the failures are even better documented: in Mata v. Avianca, a New York attorney used ChatGPT for legal research and submitted a filing citing six court cases that didn't exist. The AI had fabricated case names, citations, and even quotes - all with complete confidence. The judge sanctioned the attorney. The shape was perfect. The content was fiction.
In healthcare, AI transcription tools have inserted fabricated medical terms into patient records. A New York City municipal chatbot gave citizens advice that was not just wrong, but illegal.
The pattern is always the same: the output looks professional, sounds confident, and is completely fabricated. Nobody notices - because nobody tastes the bread.
Why It Will Always Taste Like Shit
Here's what most people - including many engineers - still don't understand about LLMs:
Hallucination is not a bug. It's how pattern matching works.
When an LLM receives a prompt, it searches for the nearest statistical match in its training data. When an exact match exists - common patterns, popular frameworks, well-documented APIs - the output is convincing. Often genuinely useful.
But when there's no exact match, the system doesn't say "I don't know." It returns the nearest neighbor - the most statistically similar pattern - and presents it with full confidence. That's not a malfunction. That's the architecture working exactly as designed.
You cannot fix this with more training data. You cannot fix it with better models. You cannot fix it with larger context windows. The fundamental mechanism - statistical pattern matching that returns the nearest neighbor when no exact match exists - is the system. Remove it and you don't have an LLM anymore. You have a search engine.
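To make that concrete in miniature, here is a deliberately toy sketch in Python - not how a transformer actually works, and every string and function name in it is invented for illustration - of a lookup that only knows how to return its closest stored pattern. The point is the branch that isn't there.

```python
# Toy sketch of the "nearest neighbor" analogy - not a real LLM.
# All data and names below are invented for illustration.
from difflib import SequenceMatcher

TRAINING_DATA = {
    "reverse a list in python": "Use xs[::-1] or list(reversed(xs)).",
    "what does http 404 mean": "The server cannot find the requested resource.",
    "format a date in iso 8601": "Use datetime.isoformat(), e.g. 2024-01-31T00:00:00.",
}

def similarity(a: str, b: str) -> float:
    """Crude string similarity standing in for 'statistical closeness'."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def answer(prompt: str) -> str:
    # Nearest neighbor: pick the stored pattern most similar to the prompt.
    best = max(TRAINING_DATA, key=lambda known: similarity(prompt, known))
    # The missing branch is the whole point: there is no "I don't know" path.
    # However poor the match, the closest pattern comes back with full confidence.
    return TRAINING_DATA[best]

print(answer("reverse a list in python"))        # close match, useful answer
print(answer("cite case law on airline torts"))  # no match, confident nonsense anyway
```

The useful answers and the fabricated ones come out of the exact same code path; the only difference is how close the nearest pattern happened to be.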
The $10 Billion Scaffolding
So what are the LLM companies actually doing with all that money?
They're building scaffolding.
Massive IF-ELSE-FOR guard systems around the core model. Every time a hallucination pattern is identified, a new filter gets added. Every time the model generates something embarrassing, a new guardrail goes up. Retrieval-Augmented Generation. Chain-of-thought validation. Output filtering. Human feedback loops. Tool use. Constitutional AI.
Strip away the marketing language, and every single one of these is the same thing: a conditional check that catches a known failure mode before the user sees it.
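Under that framing - and with every name in the sketch below invented for illustration (model, cases_in_database, the patterns in KNOWN_BAD_PATTERNS) - the scaffolding reduces to something like this:

```python
# Minimal sketch of "scaffolding as conditional checks", per the article's framing.
# model() and cases_in_database() are hypothetical callables, not a real API.
import re

KNOWN_BAD_PATTERNS = [
    r"as an ai language model",   # yesterday's embarrassment
    r"\b\d{3}-\d{2}-\d{4}\b",     # SSN-shaped strings leaking into output
]

def guarded_answer(prompt: str, model, cases_in_database) -> str:
    draft = model(prompt)

    # Guard 1: output filter for failure modes someone already reported.
    for pattern in KNOWN_BAD_PATTERNS:
        if re.search(pattern, draft, re.IGNORECASE):
            draft = model(prompt + " (avoid the flagged phrasing)")

    # Guard 2: RAG-style lookup - does each cited case actually exist?
    for citation in re.findall(r"[A-Z][a-z]+ v\. [A-Z][a-z]+", draft):
        if not cases_in_database(citation):
            return "I couldn't verify the sources for this answer."

    # Nothing here understands the answer; each branch just pattern-matches
    # a failure that has already been observed at least once.
    return draft

# Hypothetical usage:
# reply = guarded_answer("Find precedent on airline liability", my_model, my_case_lookup)
```

Each guard is written after the fact, against a failure that has already happened at least once.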
They call this "progress."
It's not progress. It's an ever-growing list of exceptions bolted onto a system that fundamentally doesn't understand what it's saying. Each new guard catches yesterday's embarrassment. Tomorrow's hallucination - the one nobody's seen yet - sails right through.
The taste never changes.
They just keep adding frosting.
The Business Model Depends on the Lie
Here's the part nobody wants to say out loud:
Every dollar of LLM revenue depends on maintaining the illusion that the next version will solve hallucination.
OpenAI needs you to believe GPT-6 will be different. Anthropic needs you to believe Claude 5 will be reliable. Google needs you to believe Gemini will stop making things up. Because the moment customers accept that hallucination is permanent - that it's a feature of the architecture, not a bug being worked on - the entire value proposition collapses.
Nobody pays $200/month for a tool they have to double-check every time.
So the marketing machine keeps running. New benchmarks. New demos. New capabilities. Look, it can write a compiler now! Look, it can analyze your codebase! Look, 90% of our engineers don't write code anymore!
The shape keeps getting more impressive. The conferences get bigger. The funding rounds get larger.
And somewhere, a VP is making territory decisions based on numbers that don't exist. A CFO is showing the board a deck full of fabricated insights. A junior developer is shipping code they don't understand to production.
Nobody's tasting the bread.
And here's the darkest part: they don't have a choice anymore.
Psychologically, this is a Ponzi scheme: earlier investors get paid with capital from newer ones, and the structure holds only as long as new money keeps flowing in. Once you've raised $10 billion on the promise that the next version will fix hallucination, once you've told the world your engineers don't write code anymore, once you've built an entire industry narrative around imminent superhuman intelligence - you can't stop. Admitting the truth means instant bubble collapse. Investors flee. Customers cancel. The stock craters.
Every LLM company today has exactly two options:
1. Keep going. Maintain the illusion. Ship new benchmarks. Announce new capabilities. Collect revenue for as long as the music plays.
2. Admit reality. Acknowledge the architectural ceiling. Watch your valuation evaporate overnight. Close the doors.
Nobody chooses option two. So the cycle continues - more funding, more promises, more scaffolding, more confident press releases about problems that are structurally unsolvable. Not because they believe it will work. Because stopping means dying.
To be clear: these are tools. Powerful, useful tools - when supervised by someone who understands what they're looking at. I use them every day.
But the moment you treat them as autonomous systems that can be trusted end-to-end - the moment you stop tasting - you get three months of corporate strategy built on hallucinated data.
The question isn't whether LLMs hallucinate.
The question is how long the industry can keep pretending they'll stop.