The DOM Was the Mistake
Why the SPA Concept Was Right and the Substrate Was Wrong
⚠️ Content Warning
This article argues that twenty years of frontend engineering has been an elaborate exercise in papering over a foundational mismatch. Side effects may include reconsidering whether your framework skills were ever the actual problem, and a sudden urge to read Roy Fielding's dissertation.
Part IV
This is the fourth article in a series. Part I - Vibe Coding: A Monkey Behind the Controls - covers why AI assistants make terrible architects. Part II - The Open Source Trap - and its companion The Browser Gatekeeping Problem - explore where the web ecosystem is heading. Part III - The Shape Is Perfect, It Still Tastes Like Shit - looks at how AI output fails even when it looks right. This article examines the substrate that sits between them all: the DOM itself.
I. The Diagnosis Nobody Wants to Make
The previous articles in this series laid out two converging pressures on the web ecosystem. From above, LLMs are commoditizing the JavaScript layer at industrial speed - every React component, every REST handler, every authentication flow has been ingested and can now be regenerated on demand. From below, browser vendors have capped what the platform is structurally capable of, confining the web to a CRUD ceiling and blocking every capability that would let it become a genuine application runtime.
Both diagnoses are correct. Both miss the third pressure, which has been hiding in plain sight for twenty years.
The substrate itself is wrong.
The DOM - the Document Object Model - was never designed to be an application runtime. It was designed to represent documents: a tree of nodes (elements, attributes, text) that scripts can read and modify, built in the 1990s so JavaScript could manipulate static HTML. When the industry collectively decided that web applications would be built by manipulating a document tree through JavaScript, it made an architectural commitment whose consequences we are still paying for, three decades later, with no end in sight.
The interesting move is not to argue about whether single-page applications were a good idea. They were. The interesting move is to notice that the concept was correct and the substrate it was built on was wrong, and those two facts have been tangled together for so long that most of the industry can no longer tell them apart.
The DOM was the wrong runtime to build it on.
Twenty years of frontend engineering has been the interest payment.
II. The SPA Concept Was Correct
Before anything else, the obvious must be defended against the current backlash: the SPA architectural idea - a single-page application that loads once and then updates in place without full page reloads - is not the error.
The reasoning that led to single-page applications was clean. Build the application once. Have it talk to APIs. Keep data and logic cleanly separated. The same backend can serve a mobile app, a desktop app, a CLI client, and a web client. Business logic lives in one place. State management is explicit. The client is a thin presentation layer that pulls structured data and renders it.
This is the same reasoning that produced every successful client-server system in computing history. Database clients talk to database servers. Email clients talk to IMAP servers. Terminal emulators talk to remote shells. In every case, a well-defined protocol sits between a specialized client and a specialized server, and the architecture works precisely because the roles are clear.
The SPA promise was that the browser would become one more client in this pattern - a universal client that could be targeted by any server that spoke HTTP. It was a genuinely good idea.
The failure was not in the concept. The failure was in what the browser actually is.
III. The DOM Was Designed for Something Else
The Document Object Model emerged in the mid-1990s as a way to programmatically manipulate HTML documents. Its mental model was a tree of nodes representing paragraphs, headings, lists, and tables. Its interaction model was what one would expect of a document: highlight text, follow links, submit forms, scroll.
The DOM was not designed to be an application platform. Nobody in 1998 was asking how to build a word processor inside a web page. The DOM was built to render articles, catalogs, forums, and linked reference material. That was its job, and for that job it was well-suited.
Then JavaScript grew, AJAX - asynchronous JavaScript and XML, the technique popularized around 2004 that let pages fetch data and update without a full reload - happened, Gmail shipped, and the industry collectively realized that the browser could be pressed into service as something more than a document viewer. This was not a planned transition. It was a series of pragmatic hacks on top of a substrate that was never meant for the purpose.
Consider what the DOM actually is, from an application-runtime perspective:
| DOM property | Why it is wrong for applications |
|---|---|
| Global mutable tree | Every script on the page can mutate any node. No encapsulation. |
| String-based styling (CSS) | Layout described through cascading text rules designed for articles. |
| Retained-mode rendering | The browser keeps the scene graph and owns the render loop; applications cannot control frame timing. Immediate-mode systems - games, editors, most real applications - draw every frame themselves. |
| Event bubbling | A click on a button dispatches through every ancestor, invoking arbitrary listeners. |
| Accessibility coupling | Every semantic element drags assumptions about reading order, tab order, and screen readers. |
| Text-centric layout | Flexbox and grid are extraordinary engineering achievements working against a text-layout foundation. |
| No native widgets | Every dropdown, date picker, and modal is reinvented from <div> soup. |
None of these are bugs. All of them are correct decisions for the job the DOM was originally given. They are catastrophic decisions for the job the industry subsequently imposed on it.
An operating system does not hand an application a global mutable tree of widgets and say "good luck, everyone else is also writing to this." An operating system gives the application a process, a memory space, a window handle, and a drawing context. The application owns its own world. The DOM offers nothing comparable - every web application lives in a shared document, pretending the shared document is a private application.
IV. The Three-Layer Pretence
The modern web application stack is an elaborate architecture built to paper over the mismatch between what the DOM is and what applications need.
At the bottom is the DOM - a document tree. On top of the DOM is the browser's rendering engine, which turns the tree into pixels. On top of the rendering engine sits JavaScript, which mutates the tree to change what is rendered. This is the original web, circa 1998, and it is already two abstraction layers removed from an application.
Then came the frameworks.
React decided that manipulating the DOM directly was too slow and too error-prone, so it built a virtual DOM - an in-memory JavaScript replica of the tree - and introduced a reconciliation algorithm that diffs the virtual tree against the real tree and emits the minimum set of mutations to keep them in sync. This is a third abstraction layer. It exists because the second one (the DOM) is the wrong shape for the job.
Then came state management. Because the virtual DOM reflects rendering and rendering depends on state, applications needed a way to track state outside the rendering layer. Redux, MobX, Zustand, Jotai, Recoil. Each of them is a small runtime that manages application state and feeds it into the rendering pipeline. A fourth abstraction layer.
Then came build tooling. Because the JavaScript shipped to the browser needs to be bundled, minified, tree-shaken, code-split, polyfilled, and transformed from TypeScript or JSX into runnable code, a pipeline of Webpack, Vite, esbuild, Turbopack, SWC, and PostCSS sits between the source and the client. A fifth abstraction layer, upstream of the browser entirely.
Count them:
- The DOM - document tree
- The rendering engine - turns DOM into pixels
- JavaScript - mutates the DOM
- The virtual DOM - shadow tree to manage mutations
- The state manager - external store to feed the virtual DOM
- The build pipeline - compiles everything before the browser ever sees it
Six layers. Each layer exists because the layer below it is the wrong abstraction for what the layer above it is trying to do. A JavaScript function changes a state atom; the state manager notifies subscribers; React re-renders a virtual subtree; the reconciler diffs against the real DOM; the browser applies mutations; the rendering engine repaints pixels. All of this to change the text on a button.
The equivalent on a native platform is: write a new string into a label's buffer. One layer.
This is an entire discipline paid to manage impedance between a document model and an application model. Remove that impedance, and there is very little left that is intrinsically difficult. "Frontend engineering" has largely become a specialty in working around a foundational mismatch.
V. What the Industry Built on the Wrong Foundation
The consequences of building applications on a document substrate are visible everywhere, if one looks.
Framework churn:
JavaScript frameworks have a half-life of roughly three years. Backbone, Knockout, Ember, AngularJS, Angular 2+, React, Vue 2, Vue 3, Svelte, Solid, Qwik. Each generation claims to have solved the problems of the previous generation. None of them solve the underlying problem, because the underlying problem is that the DOM is not an application runtime. The churn is the sound of an industry repeatedly attempting to paper over a foundational mismatch and failing in progressively more sophisticated ways.
The accessibility debt:
Because every web application is technically a document, every application inherits the expectation that screen readers, keyboard navigation, and ARIA metadata will work. (ARIA - Accessible Rich Internet Applications - is the set of HTML attributes that describe element roles, states, and properties to assistive technology; it exists because generic markup tells a screen reader nothing about what anything does.) But because applications are structured as <div> soup rather than semantic documents, accessibility has to be bolted on through explicit ARIA attributes, tab-index management, and focus trapping. A native application gets accessibility for free from the OS widget toolkit. A web application gets accessibility as a constant tax on every component, levied against developers who usually do not pay it.
The performance paradox:
Modern web applications are slower than desktop applications from 1998 despite running on hardware that is a thousand times more powerful. A mid-range laptop can run Photoshop. The same laptop struggles to scroll a modern Jira board. The reason is not that web developers are incompetent. The reason is that the cost of passing a user interaction through six abstraction layers, each with its own bookkeeping, accumulates faster than Moore's Law can compensate.
The bundle size problem:
A modern React application ships 200-500 KB of JavaScript before the user sees a single pixel. This is the build pipeline's output: the framework, the state manager, the routing layer, the component library, the polyfills, the analytics, the feature flags. A native application ships its executable - possibly larger in absolute terms, but executing immediately rather than parsing, compiling, and initializing six runtime layers before it can draw.
The security surface:
Every web application inherits the full attack surface of the DOM: XSS (cross-site scripting, which exists precisely because the document tree is a shared mutable surface where any injected script can read and modify anything), CSRF, clickjacking, prototype pollution, supply-chain attacks through npm dependencies. Each of these is a consequence of the application living inside a document that is partially controlled by the browser, partially by the server, and partially by whatever third-party scripts have been loaded. A native application has a smaller, better-defined attack surface precisely because it does not share its address space with a document.
None of these problems are the fault of any particular framework, developer, or browser vendor. They are emergent consequences of building applications on a substrate designed for documents. The frameworks are not making the web slow and bloated; they are doing their best with an impossible starting point.
VI. The HTMX Position: Stop Pretending
One coherent response to this mess is to stop pretending the DOM is an application runtime and use it as what it actually is: a document viewer updated by a server.
This is the HTMX thesis. HTMX is a small (~14 KB) library that extends HTML with attributes like hx-get, hx-post, and hx-swap: instead of sending JSON to a frontend that renders it, the server sends HTML fragments that HTMX swaps into the document. Put all the application logic on the server, where it belongs. Have the server emit HTML fragments in response to user interactions. Swap those fragments into the document tree using simple declarative attributes. Treat the browser as a rendering target, not a state machine.
The appeal of this position is its honesty. It does not try to turn the document model into something it is not. It uses HTTP the way HTTP was designed to be used - request, response, hypermedia - and it keeps state in one place, on the server, where it can be reasoned about with normal programming-language tools.
For the use cases the web was originally built for - forms, tables, dashboards, content management, e-commerce, admin panels, anything that is fundamentally CRUD - this approach is not only sufficient but elegant. There is no virtual DOM, no state synchronization, no hydration, no build pipeline beyond a single script tag. A backend written in any language that can emit HTML becomes a complete web application. C, Go, Rust, Python, Ruby, PHP, OCaml, Zig - any of them work, because none of them need a companion JavaScript ecosystem.
The HTMX community calls this approach "hypermedia-driven." The intellectual lineage runs through Roy Fielding's original dissertation on REST, which was a thesis about hypermedia systems and not about JSON APIs. The SPA industry misread Fielding for two decades and built its architecture on a misunderstanding. HTMX is the correction.
But - and this is the crucial limitation - HTMX works because it accepts the CRUD ceiling. It does not attempt to break through it. It says, explicitly, that the browser is a document platform and that anything beyond document manipulation belongs elsewhere. For the 95% of web applications that are actually CRUD under a coat of paint, this is the right answer. For the 5% that are genuinely applications - editors, design tools, real-time collaboration, games, simulations, creative software - HTMX offers nothing, because it has accepted that the browser cannot host those workloads.
That remaining 5% is where the other coherent position lives.
VII. The Canvas Position: Build a Real Runtime
The opposite coherent response is to give up on the DOM as an application substrate entirely and build a real runtime on top of the browser's primitive capabilities - WASM (WebAssembly, the binary instruction format that lets C, C++, Rust, and Zig run in the browser at near-native speed) for execution, canvas for rendering, and WebGPU (the modern GPU API now shipping in all major browsers, exposing both rendering and compute) for GPU access.
This is not hypothetical. Figma ships a design tool that is WASM plus canvas plus WebGL; the DOM appears nowhere meaningful in its editor. Photoshop Web is C++ compiled to WASM, rendering to a canvas. Google Earth Web is the same pattern. Flutter Web in CanvasKit mode renders its entire UI through Skia onto a canvas, bypassing the DOM almost completely. Zed editor's web target is WASM plus WebGPU, with no DOM dependency beyond the bootstrapping element.
These applications exist. They work. They demonstrate that the technical capability is available today, even inside the browser's restrictive capability model.
The mental model is clean. An application gets a canvas. It owns its own render loop. It owns its own event handling. It owns its own layout. It owns its own widget toolkit. The browser provides a process, a drawing surface, and an input stream. The application does the rest. This is how native applications have worked for fifty years. It is how every serious interactive application outside the browser works today.
The strength of this approach is that it gives applications the architectural shape they actually need. The application is not a document. The runtime does not pretend it is. The abstractions match the problem.
The weaknesses are real and worth being honest about:
Accessibility becomes a manual effort - and is often done badly.
The DOM hands accessibility metadata to screen readers automatically. A canvas is opaque. A canvas-based application must construct and maintain a parallel accessibility tree - and the honest observation is that most of them do this poorly. Figma has had documented accessibility problems for years. Flutter Web's CanvasKit mode has a dedicated semantics layer, but the output for screen readers is noticeably worse than equivalent DOM-based applications. Photoshop Web's accessibility story is effectively "use the native version." This is not a temporary implementation gap that will close on its own - it is a structural cost of choosing canvas over DOM, and it is paid by every application that ships the canvas approach. A DOM-based application gets accessibility mostly for free and then has to not actively destroy it. A canvas-based application starts from zero accessibility and has to actively build it, and most teams treat it as a backlog item that never rises to the top. The trade-off is real, and pretending otherwise weakens the case for canvas rather than strengthening it.
Initial load is heavier.
A Flutter Web application in CanvasKit mode ships 2-3 MB of WASM and associated resources before the first frame. HTMX ships 14 KB. On fast networks this is invisible; on constrained networks it is the difference between working and not working.
Text rendering has to be reinvented.
Thirty years of browser text-layout work - bidirectional text, ligatures, complex scripts, emoji, subpixel rendering, font fallback - is all behind the DOM's opaque API. A canvas application gets none of it for free. Skia provides a lot of it, which is why CanvasKit ships a large WASM binary. There is no shortcut.
URLs, back buttons, deep linking, SEO.
The document model provides all of these as platform primitives. A canvas application must implement them explicitly, and must choose how much of the browser's document model to re-expose through its own routing logic.
WASM still cannot talk to the network directly.
As the companion article argued in detail, the browser refuses to give WASM raw network capability. So a canvas-based application still has to route its network traffic through WebSocket, WebTransport, or fetch - which means it still inherits part of the CRUD ceiling, even while escaping the document substrate.
The canvas position is the right answer for the applications that actually are applications. It is not the right answer for a blog, a product catalog, or an admin panel. Different workloads need different runtimes, and the canvas approach is expensive to justify for anything but genuinely interactive work.
VIII. The Split That Is Already Happening
The middle ground - building applications on the DOM using JavaScript frameworks - is where the industry has lived for twenty years. It is also where the combined pressures from above and below are squeezing hardest.
From above, LLMs commoditize exactly this layer. React components, REST endpoints, state management boilerplate, form validation, routing, authentication flows - all of it has been ingested, and all of it can be regenerated on demand. The part of frontend engineering that was always difficult - managing the impedance between document trees and application state - is the part that pattern matching handles best, because there are millions of nearly-identical examples in the training data.
From below, the browser's capability ceiling prevents frameworks from escaping into application territory. React cannot give an application raw sockets. Vue cannot give an application hardware cryptography. Svelte cannot give an application a custom transport protocol. The frameworks inherit the browser's limitations, and the limitations are not negotiable.
The result is predictable. The middle is being compressed toward irrelevance. Applications that are fundamentally CRUD can skip the framework layer entirely and use server-rendered hypermedia with HTMX, delivered by a backend in any language. Applications that are fundamentally interactive can skip the framework layer entirely and compile to WASM with a canvas rendering target. The framework layer is where the cost lives - the complexity, the build tooling, the maintenance burden, the framework-of-the-year churn - and it is where the returns are lowest, because the LLMs can already write it.
This is not a prediction. It is already happening. The following table maps the shift:
| Workload type | 2015-2025 default | 2025-2035 trajectory |
|---|---|---|
| Content site | React + Next.js | Server-rendered HTML + HTMX |
| Admin panel | React + component library | Server-rendered HTML + HTMX |
| E-commerce | React + Next.js + REST API | Server-rendered HTML + HTMX |
| Dashboard | React + charting library | Server-rendered HTML + HTMX + charting |
| CRUD application | React + state manager | Server-rendered HTML + HTMX |
| Design tool | React + canvas components | WASM + canvas + WebGPU |
| Editor (text/code) | React + CodeMirror/Monaco | WASM + custom renderer |
| Real-time media | React + WebRTC wrappers | WASM + custom transport (if allowed) |
| Creative software | React + canvas | WASM + canvas + WebGPU |
| Game | Phaser / PixiJS in React | WASM + WebGPU |
The two columns on the right do not share a technology stack. They do not share a development model. They do not share a skill set. The split is architectural, not stylistic.
IX. Why the Middle Collapses
The middle does not collapse because the frameworks stop working. They work fine. The middle collapses because the economic case for paying a framework tax disappears from both ends.
For CRUD applications:
The framework tax buys nothing the server cannot provide. If the application is fundamentally "user submits form, server updates database, server returns new view," then every layer of client-side state, virtual DOM reconciliation, and hydration logic is pure overhead. A server-rendered application with hypermedia-driven interactivity is faster to build, faster to load, easier to debug, smaller to ship, and cheaper to maintain. The framework layer only made sense when server-rendered interactivity was difficult, which it no longer is.
For genuine applications:
The framework tax buys an abstraction that is the wrong shape for the problem. A design tool does not have pages. It has an infinite canvas with thousands of objects, a toolbox, a property panel, and a real-time rendering loop tied to user input. Wrapping that inside a document model and a virtual DOM wastes cycles and introduces complexity with no corresponding benefit. A canvas-based architecture is a better fit, and the tooling to build canvas-based applications - Flutter, Makepad, direct WASM plus WebGPU - is now mature enough for production work.
The middle's disappearing moat:
The middle survived as long as it did because the two exits were worse than the status quo. Server-rendered HTML was clunky before HTMX made partial updates feel native. Canvas-based applications were difficult before WASM made compiled-language development accessible in the browser. Both exits are now viable. The middle no longer has a monopoly on "good enough."
The LLM pressure accelerates the collapse. Framework-based frontend work is the single most LLM-saturated domain in software, as Part II documented. The skill of building a React component has already been commoditized to the point where a junior developer with a Claude subscription outputs more code per day than a senior developer without one. The economic value of that skill is declining in real time. Anyone whose career depends on it is on a timer.
The skill of designing a server-rendered HTMX application is not yet commoditized, because the relevant LLM training data is sparse and the approach runs against the dominant pattern in the corpus. The skill of writing WASM applications in C or Rust that target WebGPU is actively un-commoditizable, for the reasons Part II laid out: LLMs are bad at C, bad at manual memory management, bad at concurrency, and bad at anything that requires a model of the underlying hardware rather than pattern matching against text.
Both exits are also strategically better from the perspective of the humans building them.
X. What a Real Application Web Would Look Like
If the browser were a genuine application platform - if the gatekeeping critiqued in the companion article were relaxed - the shape of a web application would look very different from what the industry currently ships.
An application would declare itself as an application, not a document. The browser would hand it a window, a rendering surface, and an event stream. The application would own its render loop. The application would own its layout. The application would own its input handling. The application would own its security policy, within capability bounds enforced by the runtime.
Networking would work the way it works everywhere else. The application would open a socket - subject to policy rules, rate limits, and destination whitelists enforced by the runtime, not by a browser vendor's blessed API list - and speak whatever protocol it chose to speak. SRT (Secure Reliable Transport, the UDP-based low-latency streaming protocol used across broadcast production, currently impossible in a browser that refuses WASM a raw socket), QUIC with custom congestion control, a proprietary game protocol, a financial messaging protocol, anything. The browser would not be in the middle of every byte.
Cryptography would reach hardware. A smartcard would be addressable. A TPM would be queryable. A YubiKey would be usable as a first-class primitive, not a WebAuthn afterthought. Security-critical applications would be deliverable over the web without requiring a native companion app.
Delivery would be compiled binary. The application would ship as WASM - opaque to scrapers, opaque to LLMs trying to learn from it, opaque to competitors trying to clone it. The "view source" culture would persist for documents, which is where it always belonged, and applications would ship the way applications have always shipped: as artifacts, not as source.
This is not a fantasy of a walled garden. The protocols between applications and servers would remain open. The standards for WASM, WebGPU, and the runtime interfaces would remain open. The browser would remain a competitive market with multiple implementations. What would change is the relationship between the application and the document tree: the application would no longer have to pretend to be a document, and the document would no longer have to pretend to be an application.
Nothing about this vision is technically impossible today. The pieces exist. WebGPU ships in all major browsers. WASM is everywhere. Canvas is universal. The missing capabilities - direct network access, hardware reach, a capability-based security model - are political choices, not engineering limitations, as the companion article established.
The question is whether the industry waits for the browser vendors to allow this, or whether it routes around them.
XI. The Systems Programmer's Return
Part II argued for a C renaissance on the grounds that LLMs are bad at systems programming. That argument compounds when combined with the architectural shift this article has traced.
The two exits from the framework middle both lead back to language choices that the frontend industry spent twenty years trying to forget. The server-rendered HTMX exit leads to backends in any language that can emit HTML - C, Go, Rust, Zig, OCaml, Elixir, whatever the engineer is actually good at. The canvas-plus-WASM exit leads to languages that compile cleanly to WebAssembly with predictable performance and small bundle sizes, which in practice means C, C++, Rust, and Zig. Garbage-collected languages can target WASM too, but they have to ship their runtime - the GC, the type system, the standard library - in the binary. A "Hello World" in Go produces a multi-megabyte WASM file; a Rust equivalent is tens of kilobytes. For applications where bundle size and cold-start performance matter, the languages without runtimes win by default.
The skill set that wins in both exits is the one that was devalued by the JavaScript era:
| JavaScript era valued | Post-framework era values |
|---|---|
| Component hierarchies and state atoms | Memory layouts and cache behavior |
| Framework-specific idioms | Knowing when a system call is expensive |
| npm dependency graphs | Reading a profiler and knowing what to do |
| Hydration and reconciliation | Concurrency without a framework |
| Build pipeline configuration | Manual memory management |
| CSS-in-JS wrangling | Protocol design and binary formats |
These are the skills that LLMs cannot fake, because they require a model of the machine that statistical text-completion does not provide.
There is a particular kind of engineer who has been quietly productive through the entire JavaScript era - writing C for embedded systems, Rust for infrastructure, Go for services, assembly for firmware - and who has been largely absent from web development because the web development model was hostile to the way they think. These engineers are about to become the natural population for the post-framework web.
Then and now:
A C programmer in 2005 who wanted to build a web application had three bad choices. Learn PHP and accept that their careful thinking about memory, data structures, and algorithmic complexity would be useless against a runtime that garbage-collected strings and had no concept of a struct. Learn JavaScript and accept that their careful thinking about anything would be useless against a language where 0 == "0" is true and the standard library had to be reinvented every two years. Ignore the web and work on embedded systems, which paid less and had narrower job markets.
A C programmer in 2026 has a different set of choices. Write the backend in C, emit HTML, use HTMX for interactivity, and have a working web application with the full power of the systems programming toolchain intact. Or write an application in C, compile to WASM, render to a canvas, and have a working interactive application that runs in any browser with the full power of the systems programming toolchain intact. The web has finally made itself approachable to engineers who think in terms of memory layouts and cache behavior, rather than in terms of component hierarchies and state atoms.
The irony is that this approach was always possible. The tools have existed for years. What has changed is that the middle - the framework-based approach that required abandoning systems-programming thinking to be productive - is no longer the default. The alternatives are not only viable, they are now economically preferable for the reasons this series has laid out.
The engineers who will be building the serious web applications of the 2030s are not the ones doubling down on the JavaScript ecosystem. They are the ones who understand the layers below the framework and are building directly on them. Some of them are old hands returning to the web after decades away. Some of them are new graduates who never got stuck in the framework trap in the first place. Both groups share one thing: they see the DOM for what it is, and they refuse to pretend it is what it is not.
The SPA concept was right. The DOM was wrong. The industry spent twenty years trying to hold those two things together, and the result is an ecosystem that LLMs are hollowing out from above and browser vendors have capped from below. The honest response - the response that the next decade of serious web development is going to be built around - is to stop pretending the document tree is a runtime, and to choose deliberately between the two exits that lead out of the mess.
Build a document-shaped application on the document substrate, or build a real application on a real substrate. Do not build a real application on a document substrate and expect it to work.
Software engineering has always had a long tail of talent. The difference between an adequate engineer and an outstanding one is not a percentage; it is an order of magnitude, measured by any honest standard - systems understanding, design judgment, problem-solving speed, ability to reason about what is not yet written.
The JavaScript era flattened this curve by commoditizing the framework layer: the work that a framework imposes on a developer is narrow enough that the gap between adequate and outstanding is smaller than it would be in a less constrained problem space. The post-framework era restores the curve, because the work that remains - systems programming, protocol design, runtime architecture, capability-based security - does not flatten, and cannot be pattern-matched into irrelevance.