The Open Source Trap
How LLMs Are Eating the Web Ecosystem Alive
⚠️ Content Warning
This article contains predictions about the collapse of the open web ecosystem, the death of "View Source," and why your framework skills may have an expiration date. Side effects may include reconsidering your tech stack and an existential crisis about your career choices.
📖 Part II
This is the second article in a series. Part I - Vibe Coding: A Monkey Behind the Controls - covers why AI assistants make terrible architects. This article looks at where the web ecosystem is heading as a result.
📝 Updated February 22, 2026
This article has been updated with three new sections: The EME Cartel (how DRM proves the gatekeeping argument), The Java Applet Precedent (historical proof that sandboxed network access worked), and The WASM Network Question (why WASM's missing network access is political, not technical). These strengthen the core thesis about browser vendors as gatekeepers.
📝 Updated February 23, 2026
The companion article The Browser Gatekeeping Problem has been expanded with a case study comparing SRT and Low Latency HLS - a concrete example of the architectural ceiling browsers impose on real-time media - and a rewritten conclusion, The CRUD Ceiling, arguing that the web is fundamentally a CRUD platform and the pretense otherwise is on life support.
📝 Part IV Published — April 2026
The series continues with Part IV - The DOM Was the Mistake: Why the SPA Concept Was Right and the Substrate Was Wrong. Where Part II argues that LLMs are commoditizing the JavaScript ecosystem from above and the companion article shows that browser vendors have capped the web from below, Part IV examines the substrate caught in between — the DOM itself — and argues that twenty years of frontend engineering has been an elaborate exercise in papering over a foundational mismatch.
I. The Ecosystem That Feeds Its Own Killer
The open source movement was built on a beautiful idea: share your work, and the community makes it better. Thousands of developers contributing to shared libraries, frameworks, and tools. Free as in speech, free as in beer.
Then LLMs (Large Language Models - AI systems like ChatGPT and Claude, trained on massive amounts of text, including open source code, documentation, and tutorials) showed up and ate the buffet.
Every React component ever published. Every Express.js middleware. Every Vue plugin, every Angular directive, every npm package with more than ten downloads. Scraped, tokenized, ingested. Your weekend project? Training data. Your carefully documented library? Training data. Your Stack Overflow answer from 2019? Training data.
And now the LLM can reproduce your patterns - faster than you can, without attribution, without a license check, without a GitHub star.
The ecosystem took two decades to build. LLMs strip-mined it in two years.
The implicit contract was: I share my work, the community benefits, I get reputation, contributors, maybe a job. The contract assumed human participants. Nobody negotiated with a pattern matcher - LLMs don't "understand" code, they identify statistical patterns in training data and generate outputs that match them, like a very sophisticated autocomplete trained on the entire internet - that can replicate your life's work in 200 milliseconds.
The result is a slow-motion tragedy of the commons. Not because people are taking too much - but because a machine is taking everything, and giving nothing back.
If this sounds familiar, it should. The big LLM companies act suspiciously like a communist party: first they rob everyone blind - scraping, ingesting, appropriating two decades of collective work without asking - then they centralize the means of production, and finally they offer to give back a fraction of what they took, on their terms, at their price. Everyone else is expected to build their workflows around these platforms and happily pay rent to the party bosses. Forever.
The communists always steal first, strip others bare, and then generously allow you a taste of what was yours to begin with.
II. The Open Source Motivation Collapse
Why would you maintain an open source library in 2026?
Seriously. Think about it.
You spend weekends writing code. You document it carefully. You answer issues. You review pull requests from strangers. You fix edge cases at midnight.
And then an AI company scrapes your repo, trains a model on it, and sells a subscription service that generates code "inspired by" your work. Your users don't need your library anymore - they just prompt the LLM. Your GitHub stars flatline. Your contributors disappear. Your motivation evaporates.
But it gets worse. The contributors who do show up are now submitting AI-generated pull requests - code they didn't write, don't understand, and can't explain. Miguel Grinberg, creator of the Flask-SocketIO library, described exactly this: he receives PRs with an unmistakable AI-generated quality, asks the submitters about the weird parts of their code, and they rarely respond. Because they can't - they have no idea what the code does. They just prompted and submitted.
So now you're not just maintaining a library that nobody appreciates. You're also reviewing code that nobody understands - written by a machine, submitted by a human who can't answer basic questions about it. Your role has shifted from "open source maintainer" to "unpaid QA for someone else's AI output."
A Danish psychiatrist named Søren Dinesen Østergaard - who predicted "AI psychosis" in 2023, and whose research suggests that even Nobel-level scientists might not have reached their breakthroughs if AI tools had been available from the start of their careers - has a term for what's happening here: cognitive debt. When people stop exercising their reasoning abilities, those abilities erode. The PR submitters who let the LLM do the thinking aren't just producing bad code - they're losing the capacity to produce good code. And the maintainers who burn out reviewing this noise? The ecosystem loses them too.
This isn't hypothetical. It's already happening.
| What's already dying | Why |
|---|---|
| Stack Overflow traffic | LLMs answer faster, even if worse |
| Tutorial blog traffic | Why read when you can prompt? |
| Small utility library downloads | LLMs generate equivalent code inline |
| Documentation contributions | Nobody reads docs anymore - they ask Claude |
| Junior developer contributions to OSS | Juniors are vibe-coding, not learning |
The pipeline that fed the ecosystem - curious developers learning in public, writing tutorials, building small tools, contributing to bigger ones - is drying up. Not because people are lazy. Because the incentive structure has been destroyed.
Why share your code if it only trains the machine that replaces you?
Some projects have already started fighting back. License changes. Training data exclusion clauses. The "AI poison pill" in license files - clauses that explicitly prohibit use of the code for AI training, or techniques that make the code less useful for model training while remaining functional. But these are fingers in a dam. The water has already passed.
III. The Web/API Crisis
Here's where it gets specific.
The web development ecosystem - React, Vue, Angular, Express, Next.js, REST APIs, GraphQL - is the most LLM-saturated domain in all of software engineering. And it's heading for a crisis nobody's talking about.
Let's be blunt: the web/API layer is the single most endangered part of the entire software industry.
This is where LLMs are most capable. This is where the training data is richest. And this is where the human value is evaporating fastest.
WordPress and the integration graveyard:
Consider WordPress. Plugins, themes, WooCommerce integrations, REST API customizations - this was once a legitimate freelance market. Thousands of developers made decent livings building WordPress sites and custom integrations.
That market is effectively dead. An LLM can generate a WordPress theme, wire up WooCommerce, configure payment gateways, and build custom plugins - all from a prompt. The output isn't perfect, but it's good enough for a client who was paying €50/hour. When the same result costs €20/month in AI subscriptions, the math is brutal.
And it's not just WordPress. Every "integration" job - connecting Stripe to a backend, wiring up OAuth, building a REST API that talks to a database - is pattern-matching paradise. LLMs have seen these patterns millions of times: Stripe integration tutorials alone number in the tens of thousands across YouTube, Medium, Dev.to, and Stack Overflow, and every one of them is in the training data. The LLM doesn't just know how to do it - it knows every variation of how to do it. You cannot charge more than a script kiddie's rate for work that an LLM does in seconds.
At that point you're not a developer anymore. You're a prompt away from irrelevant.
Why Web/API is ground zero:
The web stack is the most represented domain in LLM training data by an enormous margin. Millions of React tutorials. Millions of REST API examples. Millions of CSS tricks, JavaScript snippets, Node.js configurations. The LLM has seen every possible CRUD app, every possible todo list, every possible authentication flow.
This means web/API development is the first domain where LLMs are "good enough" to generate most of the code. And that means it's the first domain where the crisis will hit.
| Web/API reality | Consequence |
|---|---|
| Most LLM-represented domain | Easiest to automate with vibe coding |
| Highly standardized patterns | LLMs reproduce them convincingly |
| Low barrier to entry | Flooded with AI-generated "developers" |
| Everything is open and inspectable | View Source → Training data → Commodity |
| Framework churn every 2-3 years | Nobody builds deep expertise anymore |
The "good enough" illusion:
Here's the most dangerous phrase in the industry right now: "good enough."
The LLM output looks good enough. The payment form renders. The API returns JSON. The checkout flow works in the demo. The client sees a working product and thinks: why was I paying a developer €100/hour for this?
But "good enough" is an illusion - and it's an illusion that only holds until real money is on the line.
Consider what happens behind a simple "Buy Now" button: payment card tokenization, PCI DSS compliance, webhook verification, idempotent transaction handling, fraud detection callbacks, refund state machines, VAT calculation across jurisdictions, failed payment retry logic, race conditions between concurrent purchases. Each of these has edge cases that can lose real money, expose real card data, or create real legal liability.
The vibe coder's awareness of these risks? Non-existent.
They don't know what they don't know. The LLM generated a Stripe integration that handles the happy path. Nobody tested what happens when a webhook fires twice. Nobody checked whether the session token is validated server-side. Nobody thought about what happens when a payment succeeds but the order creation fails.
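To make the gap concrete, here is a minimal sketch - plain JavaScript, with illustrative names - of the idempotency handling that a happy-path integration is missing: record every event ID you have processed, so a redelivered webhook (providers retry, and retries can race) never charges or ships twice.

```javascript
// In-memory set of processed event IDs. In production this would be a
// database table with a unique constraint on the event ID, so two
// concurrent deliveries cannot both pass the check.
const processed = new Set();

// handleWebhook and fulfillOrder are illustrative names, not a real API.
function handleWebhook(event, fulfillOrder) {
  if (processed.has(event.id)) {
    // Acknowledge the retry so the provider stops resending,
    // but perform no side effects a second time.
    return { status: "duplicate", acted: false };
  }
  processed.add(event.id);
  fulfillOrder(event); // side effect runs exactly once per event ID
  return { status: "ok", acted: true };
}
```

The LLM-generated version typically skips the duplicate check entirely - it works perfectly in the demo, because demos never replay a webhook.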
| What the client sees | What's actually happening |
|---|---|
| "Payment works!" | Happy path only - no error handling for double charges |
| "The site looks professional" | No input sanitization, XSS vulnerabilities everywhere |
| "We saved 80% on development" | First PCI audit will cost more than the original build |
| "It handles orders fine" | Race condition will double-charge customers under load |
| "Security? We use HTTPS" | API keys in client-side code, no rate limiting |
This cuts both ways, to be fair. The disruption does eliminate bad developers who were charging premium rates for mediocre work that was never worth the invoice. The market correction there is real and arguably overdue.
But the collateral damage is catastrophic: clients lose the ability to distinguish between a €50/hour WordPress assembly and a €150/hour engineer who understands that your payment flow needs idempotency keys, that your user data needs encryption at rest, that your webhook endpoint needs signature verification. The work looks the same to someone who can't evaluate it.
Then it's a lawsuit, a breach disclosure, and a very expensive rewrite.
The domains that involve fiscal risk - payments, billing, financial reporting, tax compliance, any part of a system where bugs mean direct financial loss and "almost correct" means "loses money" - are exactly the domains where the gap between "works in demo" and "works in production" is widest. And it's exactly the gap that vibe coders don't know exists, because they've never operated in a world where a mishandled decimal point triggers a regulatory investigation.
The commodification spiral:
When everyone can generate a React frontend in minutes, the value of knowing React drops to zero. When every CRUD API looks the same because they all came from the same training data, there's nothing to differentiate. When the client can "just use AI" to build their MVP, why hire a web developer?
Web development is being commoditized at a speed that makes offshore outsourcing look quaint.
It will be the first domain to face the consequences.
The consequences won't be "developers lose their jobs tomorrow." The consequences will be slower, more insidious: rates drop, quality expectations vanish, codebases become unmaintainable, and the people who actually understand web architecture become impossible to find - because nobody bothered to learn it properly when the LLM could do it "good enough."
IV. The Talent Desert
In my first article, I described how companies are destroying their talent pipeline by replacing juniors with AI. In the web ecosystem, this effect is amplified tenfold.
Here's the cycle:
- Companies replace juniors with AI, because the LLM does junior-level work "good enough"
- The few juniors who are hired supervise AI output instead of wrestling with code
- Nobody builds the deep understanding that supervision actually requires
- The senior pipeline dries up - and the AI output goes unreviewed
This is the industrial-scale version of what Østergaard - whose work has been published in Acta Psychiatrica Scandinavica - calls cognitive debt: the erosion of mental abilities when you stop exercising them. It applies to individuals, but it applies to entire industries too.
The web ecosystem is accruing cognitive debt at an industrial scale. And there's no bankruptcy protection for lost expertise.
Even the defenders of AI-assisted development reveal the problem. A Finnish AI company CEO recently admitted that junior programmers can't adequately supervise AI agents because they lack deep architectural understanding. His solution? Only hire seniors.
But where do seniors come from? They come from juniors who spent years wrestling with code, making mistakes, debugging at 3am. Cut off that pipeline, and in five years you have no seniors either.
The junior you didn't hire in 2025 is the senior you can't find in 2030.
V. The Spaghetti Reckoning
Here's what's being built right now, in thousands of companies, by thousands of vibe coders:
Codebases that work. Codebases that pass tests. Codebases that ship.
Codebases that nobody understands.
When an LLM generates code, it optimizes for the immediate task. It doesn't think about how this function relates to the rest of the system. It doesn't consider the data flow three layers down. It doesn't know about that edge case your architect handled two years ago in a completely different file.
The result is what I call LLM spaghetti - code that is locally correct but globally incoherent.
| Human-architected code | LLM-generated spaghetti |
|---|---|
| Consistent patterns across the codebase | Different patterns in every file |
| Shared understanding among the team | Nobody understands the whole |
| Predictable behavior under edge cases | Surprising behavior everywhere |
| Technical debt is known and tracked | Technical debt is invisible until it explodes |
| Refactoring is difficult but possible | Refactoring means rewriting from scratch |
The spaghetti reckoning is coming. Not today. Not this quarter. But within 2-3 years, companies will discover that their "AI-accelerated" codebases have become write-only systems - code that can be created but never meaningfully modified, extended, or debugged by any human, because nobody understands how it works or why it was written that way.
At that point, the only options are: rewrite from scratch (expensive), or find an engineer who can untangle it (rare and expensive).
Either way, the people who actually understand code will be in very high demand.
VI. The Obfuscation Instinct
Here's a prediction: the open web will start closing.
Not because of ideology. Because of economics.
If your competitor can scrape your frontend, feed it to an LLM, and generate a clone in an afternoon - what's the rational response? You make your code harder to scrape. Harder to read. Harder to learn from.
We're already seeing the early signs:
- Open source projects adding AI-hostile license clauses
- Companies moving logic from client-side to server-side to hide it
- Increased interest in code obfuscation tools
- API providers adding aggressive rate limiting and fingerprinting
- Frameworks exploring compiled/binary output formats
The web was built on "View Source." It was a feature, not a bug. An entire generation learned to code by right-clicking a webpage and reading the HTML.
That era is ending.
The economic logic is inescapable:
| Open ecosystem | What LLMs do with it |
|---|---|
| Open source libraries | Free training data → commoditized |
| Public APIs with documentation | Scraped → replicated → undercut |
| Client-side JavaScript | View Source → instant pattern extraction |
| Tutorials and blog posts | Ingested → regurgitated without attribution |
Why would any company publish a well-documented JavaScript library in 2028? So that an LLM can learn it and generate alternatives? So that a competitor can replicate your innovation without ever reading your docs?
The incentive to be open is collapsing. The incentive to hide is growing.
LLMs are making openness a liability.
The economic death spiral:
Here's the logical endpoint that nobody wants to follow to its conclusion.
Everything in technology is driven by economics. Developers build frameworks because there's money in it - either directly (jobs, consulting, SaaS) or indirectly (reputation, hiring leverage, community status). Companies invest in web tooling because web applications generate revenue. The entire ecosystem runs on economic incentive.
Now watch what happens when LLMs commoditize web development:
- Web development stops paying well - because LLMs do it "good enough"
- Talented developers leave the web ecosystem for domains that still pay
- Fewer developers means fewer contributors to frameworks, libraries, and tools
- The ecosystem stagnates - existing tools get maintained, but innovation stops
- LLMs, trained on this stagnating pool, can only recombine what already exists
- New problems arise (new devices, new protocols, new requirements) that nobody solves
- The web platform falls behind - because neither humans nor machines are advancing it
This is the part the AI optimists can't answer: LLMs don't create. They recombine. If the humans who create the things LLMs recombine stop creating - because it no longer pays - then the models are eating their own seed corn.
Who writes the next React? Not an LLM - it can only remix the current React. Not a developer - there's no money in it anymore. The framework that millions of applications depend on becomes an orphan, maintained by a shrinking group of volunteers who are increasingly asking themselves why they bother.
This isn't speculation. This is basic economics applied to an ecosystem that has always run on economic incentive. Remove the incentive, and the ecosystem dies. Not with a bang, but with a slow fade into unmaintained dependency hell.
Kill the host's economic incentive, and the parasite starves too.
You just won't notice until the whole ecosystem is dead.
VII. What Flash Got Right
This is going to be an unpopular opinion. But hear me out.
Adobe Flash - the multimedia platform that dominated the web from the late 1990s until its death in 2020, powering games, animations, video players, and entire in-browser applications until Apple refused to support it on the iPhone, citing security and performance - was a security nightmare, a resource hog, and a walled garden. Steve Jobs was right to kill it. The web standards community was right to replace it with HTML5.
But Flash had one property that looks increasingly prescient in 2026:
It shipped compiled binary.
When you visited a Flash site, your browser downloaded a .swf file - compiled bytecode, an intermediate form that is neither human-readable source nor machine-specific binary, designed to be executed by a runtime. You couldn't "View Source" on a Flash application. You couldn't copy-paste the code into your own project. You couldn't trivially reverse-engineer the logic.
The SWF format was a black box. Decompilers existed, but the output was messy, variable names were lost, the structure was flattened. Getting usable source code from a compiled Flash file was hard enough that most people didn't bother.
Now imagine that model applied to today's web - but with modern technology instead of Flash's bloated plugin architecture.
| Flash's model | What it protected |
|---|---|
| Compiled bytecode delivery | Source code invisible to client |
| Proprietary runtime | Logic couldn't be trivially extracted |
| Binary format | LLM-hostile - can't learn patterns from blobs |
Flash died for good reasons. But the problem it accidentally solved - protecting intellectual property on the client side - is about to become the most important unsolved problem in web development.
We killed Flash. Now we need what Flash did - without Flash.
VII½. The EME Cartel
If you still believe browser vendors are neutral stewards of an open platform, let me introduce you to EME: Encrypted Media Extensions.
Encrypted Media Extensions was supposed to be an open standard for DRM in the browser. The W3C ratified it. On paper, it's CDM-agnostic - any Content Decryption Module should work in any browser. In practice, it's a cartel.
Chrome only accepts Widevine (owned by Google). Safari only accepts FairPlay (owned by Apple). Edge only accepts PlayReady (owned by Microsoft). Three browsers. Three DRM modules. Three companies that collect a licensing fee on every protected stream.
If you're a small company and want to build your own CDM? There is no process. No certification path. No door to knock on. The standard is "open" but the implementation is a closed shop.
Mozilla fought this for years. They argued publicly in the W3C that DRM had no place in web standards. They were right. But then came the ultimatum: integrate Widevine or Firefox can't play Netflix. Mozilla caved - because the alternative was Firefox's death. The EFF (Electronic Frontier Foundation, a nonprofit defending civil liberties in the digital world) resigned from the W3C in 2017 over this decision, calling EME a betrayal of the open web. An unprecedented protest, and the industry shrugged.
Here's the part that matters for our argument: the entire browser-based DRM system is security theater. All three CDMs run in user-space memory. The decryption keys exist in RAM. A memory dump extracts them. The content ends up on torrent sites regardless. The only DRM that ever actually worked was the hardware-based conditional access of broadcast TV - SimulCrypt with smart cards (Conax, Irdeto, Nagravision), where the decryption key never leaves the hardware security module. Cable and satellite relied on it for decades, and it worked, because you can't dump a key that never existed in accessible memory.
So what does EME actually accomplish? Not security - the keys are dumpable. What it accomplishes is gatekeeping. Three companies control who gets to play protected content on the web. Three companies collect rent on every stream. Three companies decide whether your browser is allowed to participate.
This is the same pattern we see with WASM network access, with browser APIs, with every capability that browsers choose to grant or withhold. The justification is always "security." The result is always control.
The web is not an open platform. It's a platform controlled by three companies who use "standards" as a gatekeeping mechanism.
VII¾. The Java Applet Precedent
There's a historical precedent that demolishes the argument that sandboxed network access in browsers is fundamentally unsafe. It's called Java Applets.
Java Applets ran inside browsers in a security sandbox. Unsigned applets were restricted. But signed applets - applets carrying a trusted certificate - could request elevated permissions: file system access, raw network sockets, the system clipboard. The user granted permission explicitly. The security model was: identity verification (certificate) + user consent + sandbox isolation.
And it worked. For over a decade.
Banks used signed applets for online banking interfaces. Government agencies used them for document processing. Logistics companies used them for real-time fleet tracking. Enterprise applications used them for everything that "the web couldn't do" - because the web actually couldn't do those things without applet-level access.
Some of these systems are still running today. Large corporations maintain Java Applet infrastructure in 2026 because nothing in the modern web stack can replace what those applets did.
Now here's the critical question: where is the SSRF catastrophe? Where is the wave of internal network attacks that raw sockets in the browser supposedly would have caused? It didn't happen. Not because nobody tried - but because the security model (certificates + user consent + sandboxing) was sufficient.
Java Applets were killed not because sandboxed network access was dangerous, but because:
- Oracle let the Java browser plugin rot (unpatched vulnerabilities in the plugin architecture, not in the socket model)
- Browser vendors wanted to eliminate NPAPI plugins entirely (a reasonable architectural decision)
- Mobile browsers never supported plugins (Apple's decision, again)
The applet was thrown out. The capability it provided - controlled, sandboxed, user-consented network access in the browser - was never replaced. Not because it couldn't be. Because nobody wanted to replace it.
Operating systems solve this identical problem every day. Every process on your Linux machine shares the same network stack, and the OS manages it with iptables, seccomp, cgroups, network namespaces. Multiple untrusted processes, one network, proper isolation. It's a solved problem - everywhere except the browser.
We had it. It worked. Nobody died.
It was removed and never replaced - and that's a choice, not a technical limitation.
VIII. The WASM Endgame
Enter WebAssembly (WASM).
WASM is a binary format that runs in the browser at near-native speed. You write in C, C++, Rust, or other compiled languages. The browser gets bytecode. Not JavaScript. Not human-readable source. Bytecode.
Sound familiar?
It's Flash's delivery model - compiled binary to the client - but built on open standards, running in a proper sandbox, supported by every major browser, and backed by the W3C.
Here's why WASM changes the game for the LLM era:
| JavaScript (current web) | WASM (emerging web) |
|---|---|
| Human-readable source shipped to client | Compiled bytecode shipped to client |
| "View Source" exposes everything | Binary blob - nothing to read |
| LLMs trained on millions of examples | LLMs can't learn from binary |
| Trivial to clone and replicate | Reverse engineering is expensive |
| IP visible to competitors | IP protected by compilation |
But WASM today has a problem: it's sandboxed. It can't access the DOM directly. It can't do network I/O. It can't touch the file system or the GPU. It has to call out to JavaScript for almost everything. It's a fast calculator trapped in a box.
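The sandbox is easy to see from the JavaScript side. Below is a minimal, hand-assembled WASM module - 49 bytes, valid, with illustrative function names - that imports a single function `env.now` and exports `get`, which just calls it. The import object is the module's entire world: withhold a capability, and the module simply cannot have it.

```javascript
// A complete WASM binary: magic + version, then type, import, function,
// export, and code sections. It declares one import ("env"."now") and
// one exported function ("get") whose body is a single call to the import.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,             // type 0: () -> i32
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import "env"
  0x03, 0x6e, 0x6f, 0x77, 0x00, 0x00,                   //   ."now", func type 0
  0x03, 0x02, 0x01, 0x00,                               // one local func, type 0
  0x07, 0x07, 0x01, 0x03, 0x67, 0x65, 0x74, 0x00, 0x01, // export "get" = func 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,       // body: call 0; end
]);

// Everything the module can do comes through this object - that IS the sandbox.
const imports = { env: { now: () => 42 } };

WebAssembly.instantiate(bytes, imports).then(({ instance }) => {
  console.log(instance.exports.get()); // 42 - supplied by JS, not by WASM
});
```

No DOM, no sockets, no files appear here unless the host chooses to pass them in - which is precisely the chokepoint the rest of this section is about.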
Which brings us to WASI.
WASI: Breaking out of the sandbox
WASI - the WebAssembly System Interface - is an effort to give WASM programs access to the real world. File I/O. Network sockets. Environment variables. System clocks.
When WASI matures, the picture changes dramatically:
- Write your application in C or Rust
- Compile to WASM
- Ship bytecode to the browser - or to the server
- Client receives binary that runs at native speed
- Source code never leaves your build pipeline
- LLMs see nothing. Competitors see nothing. Decompilers get noise.
This isn't science fiction. The pieces exist today. They're just not assembled yet.
Now the open web is being eaten alive - and WASM + WASI is the circle closing: Flash's delivery model, done right.
VIII½. The WASM Network Question
WASM's potential as a real application platform depends on one thing: network access. And this is where the technical argument and the political argument become inseparable.
Today, if you want network communication from the browser, you get two options: WebSocket (a persistent, full-duplex channel over a single TCP connection - but the browser controls the handshake, enforces CORS, and intermediates all data) and WebTransport (a newer API built on HTTP/3 and QUIC, offering multiple streams and unreliable datagrams - better for some use cases, but still browser-mediated, with no control over the transport layer or congestion control). Both are presented as "the web has networking." Both are fundamentally different from actual network access.
WebSocket and WebTransport are like sending a letter through a censor: the browser opens your envelope, reads the contents, decides if it approves, rewraps it, and forwards it. You cannot implement your own protocol. You cannot do your own handshake. You cannot build custom packet recovery for real-time video. You sit on top of HTTP, forever, and the browser decides what you're allowed to do.
Flash understood this problem. Flash's crossdomain.xml was essentially a server-side permission system - the destination server declared which origins could connect to it. It was crude, but the concept was sound: let the destination decide, not the intermediary. Sound familiar? It should. The web reinvented the same idea years later and called it CORS.
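For readers who never shipped Flash, here is roughly what that looked like in practice. The file format is Adobe's; the domains are illustrative:

```xml
<?xml version="1.0"?>
<!-- Served by the DESTINATION server at /crossdomain.xml:
     the target declares who may connect to it. -->
<cross-domain-policy>
  <allow-access-from domain="*.example.com"/>
</cross-domain-policy>

<!-- The CORS reinvention, years later, as a response header:
     Access-Control-Allow-Origin: https://app.example.com -->
```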
The standard counter-argument is SSRF: if WASM could open raw sockets, every website becomes an attack vector against local networks. This argument is technically real but intellectually dishonest, because it pretends the problem is unsolvable.
It's not unsolvable. It's solved - in every context except the browser.
| Environment | How it handles untrusted code + network |
|---|---|
| Linux (any process) | iptables, seccomp, cgroups, network namespaces |
| Docker containers | Network policies, bridge isolation, egress rules |
| Java Applets (historical) | Signed certificates + user consent + sandbox |
| Mobile apps (iOS/Android) | Permission system + app review + sandbox |
| Browser (WASM) | ❌ Denied entirely. "Too dangerous." |
A WASM network capability could be built with firewall-style security policies: ACLs for destination control, rate limiting, fingerprint detection, user consent prompts for elevated access. Not the full IP stack wide open - a controlled, filtered, auditable network interface. The technical building blocks exist. The architecture is well-understood. Every operating system on earth does this.
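A sketch of what such a capability layer could look like, with every name invented for illustration - this is the shape of the architecture, not a proposal for a real browser API:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_RULES 8

/* Hypothetical per-origin network policy, the kind a browser could
 * attach to a WASM instance: a destination allow-list plus a crude
 * egress budget. A real design would also reject private IP ranges
 * and add fingerprint detection and consent prompts. */
struct net_policy {
    const char *allowed_hosts[MAX_RULES]; /* ACL: destination allow-list */
    int         num_rules;
    int         connects_left;            /* rate limit per time window  */
};

/* Allow a connection only if the destination is allow-listed AND the
 * instance still has connection budget. Default deny. */
bool policy_allows_connect(struct net_policy *p, const char *host) {
    if (p->connects_left <= 0) return false;          /* rate-limited */
    for (int i = 0; i < p->num_rules; i++) {
        if (strcmp(p->allowed_hosts[i], host) == 0) { /* ACL hit      */
            p->connects_left--;
            return true;
        }
    }
    return false;                                     /* default deny */
}
```

Everything here is firewall 101 - the same logic iptables has enforced for decades - which is exactly the point.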
But browser vendors won't build it. The stated reason is security. The actual reason is that controlled network access in the browser would make browsers into true application platforms - and that threatens the gatekeeping position that browser vendors currently enjoy.
If WASM could open a filtered socket, you wouldn't need Chrome's blessed APIs. You wouldn't need to implement your protocol on top of Google's transport layer. You could build a video conferencing app with your own protocol, a game with your own netcode, a financial application with your own encrypted transport. The browser becomes a runtime, not a gatekeeper.
And that's exactly what they don't want.
The strategic crossroads:
This leaves the web with two possible outcomes:
Path A: Browser vendors continue to restrict WASM's capabilities. Native applications accelerate their dominance for any serious use case. The browser retreats to what it always was underneath the hype: a document viewer with delusions of grandeur. Web development becomes irrelevant for anything beyond content display and simple CRUD.
Path B: Someone breaks the deadlock. WASM gets controlled network access through a capability-based security model. The web becomes a genuine application platform for the first time. Innovation explodes. Browser vendors lose some control but gain a platform that's actually worth controlling.
Path B is better for everyone, including browser vendors. But monopolies make irrational decisions. And there's a third factor accelerating the timeline: LLMs are already eroding the browser's value as an internet gateway. If search was the browser's killer app, and LLMs are replacing search, then the browser's gatekeeping position is already on borrowed time.
The actual risk is that the web becomes irrelevant
while browser vendors are busy protecting a gate that nobody needs to pass through anymore.
For a deeper technical analysis of browser gatekeeping - including WebSocket limitations, CORS as business logic, the WASM firewall model, a case study of SRT vs. Low Latency HLS that proves the architectural ceiling, and why the web is fundamentally a CRUD platform pretending to be more - see the companion article: The Browser Gatekeeping Problem.
IX. The C Renaissance
If WASM is the delivery format of the future web, then the question becomes: what language do you write it in?
The answer, increasingly, is the language that was there before everything else: C.
And Rust. And C++. But at the foundation, it's C.
Here's why this matters:
| JavaScript world | WASM/C world |
|---|---|
| Garbage collected - runtime handles memory | Manual memory management - you handle it |
| LLMs trained on millions of JS examples | LLMs struggle with low-level C patterns |
| Errors are forgiving (undefined is a feature!) | Errors are fatal (segfault is a teacher) |
| Frameworks change every 2 years | POSIX has been stable for decades |
| Knowledge commoditized by LLMs | Knowledge too deep for pattern matching |
C is the language LLMs are worst at. Not because it's undocumented - it's one of the most documented languages in history. But because writing correct C requires understanding memory, pointers, hardware behavior, and system architecture at a level that pattern matching can't fake.
An LLM can generate C code that compiles. It cannot generate C code that doesn't leak memory under load, that handles signals correctly, that manages file descriptors properly, that works reliably in a multi-threaded environment. These require understanding. Real understanding. The kind that comes from years of segfaults and Valgrind sessions. (Valgrind is the standard tool for memory debugging, leak detection, and profiling in C and C++ programs. If you've never used it, you've never written serious C.)
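The gap is visible even at toy scale. Here's a sketch of the idiom seasoned C programmers reach for (the function name is illustrative): a single cleanup path, so that no early exit leaks the file descriptor or the buffer. This is precisely the discipline that "it compiles" LLM output tends to miss:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Read up to `cap` bytes from `path` into a freshly allocated buffer.
 * Every failure path funnels through one cleanup block, so the fd and
 * the allocation are released no matter where we bail out.
 * Returns the byte count on success, -1 on any error. */
ssize_t read_file_prefix(const char *path, char **out, size_t cap) {
    ssize_t result = -1;
    char *buf = NULL;
    int fd = open(path, O_RDONLY);
    if (fd < 0) goto cleanup;

    buf = malloc(cap);
    if (buf == NULL) goto cleanup;

    ssize_t n = read(fd, buf, cap);
    if (n < 0) goto cleanup;

    *out = buf;   /* success: ownership moves to the caller... */
    buf = NULL;   /* ...so the cleanup block must not free it  */
    result = n;

cleanup:
    free(buf);              /* free(NULL) is a harmless no-op */
    if (fd >= 0) close(fd);
    return result;
}
```

Nothing above is clever. It's just the kind of bookkeeping that pattern matching reproduces inconsistently, because the cost of getting it wrong only shows up under load, hours later, in Valgrind output.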
And this is exactly why C skills will become incredibly valuable:
- Embedded systems - IoT, automotive, medical devices. Growing market. Can't be vibe-coded.
- Kernel and OS development - Linux isn't going away. Someone has to maintain it.
- WASM compilation targets - The new frontend, written in C/Rust, compiled to binary.
- Performance-critical backends - When Node.js isn't fast enough (and it often isn't).
- Security-critical systems - Where a hallucinated buffer overflow means game over.
The irony is thick: the oldest mainstream language becomes the most future-proof. Not despite its difficulty, but because of it.
LLMs made every web developer replaceable.
C makes you irreplaceable - because the machine can't fake what you know.
X. Who Wins
The next decade in software:
The web/API layer gets commoditized. React developers become the new PHP developers - plentiful, cheap, interchangeable. LLMs handle most of the boilerplate. The value drops to zero for anyone who can't do more than prompt.
Meanwhile, the domains that LLMs can't touch become premium real estate:
| Commoditized (low value) | Premium (high value) |
|---|---|
| React/Vue/Angular frontends | Systems programming (C, Rust) |
| CRUD REST APIs | Embedded and real-time systems |
| Standard authentication flows | Security architecture and auditing |
| CSS styling and layouts | Kernel and driver development |
| Database CRUD operations | Database internals and optimization |
| Deployment scripts | Infrastructure architecture |
| "Full-stack" generalists | Deep specialists who understand the machine |
The survival strategy is simple:
Go deep where the machines can't follow.
Learn C. Learn how memory works. Learn what happens below the abstraction layers. Learn to write code that runs without a garbage collector, without a framework, without an LLM holding your hand.
Use AI for what it's good at - searching, boilerplate, trivial patterns. But invest your time in skills that can't be commoditized. Skills that require understanding, not prompting.
The people who will thrive in 2030 are not the ones who learned to prompt better. They're the ones who learned things the machine can't replicate.
For the skeptics:
"But AI will get better at C too!"
Maybe. But the gap between generating plausible C and generating correct C is not a gap that closes with more training data. It's a gap that exists because correctness in systems programming requires a model of reality - hardware, timing, memory, concurrency - that statistical pattern matching doesn't provide.
LLMs might get better at C. They won't get better at understanding why your embedded system crashes when the temperature drops below -20°C and the crystal oscillator drifts. That's the domain of engineers who understand physics, electronics, and software simultaneously.
Good luck training that into a pattern matcher.
"The stone age didn't end because we ran out of stones. It ended because something better came along."
The JavaScript age won't end because JavaScript stops working. It will end because compiled delivery becomes the rational choice - and the engineers who prepared for it will be the ones still standing.