CVE is cheap compared to this.
You were told Rust would kill C.
You were told memory safety is the future.
You were told the borrow checker would save billions.
Here's the part nobody wants to say out loud in 2025:
In roughly 8–12 years the biggest cost in software will no longer be memory-safety CVE payouts.
It will be the multi-hundred-billion-dollar bill from trying to keep 2025-era Rust codebases alive when the next wave of breaking changes hits tokio, axum, hyper, tower - and eventually serde - all cascading through your dependency tree at once.
Before we talk about dependency hell, let's talk about why it's inevitable.
| Attribute | C | C++ | Ada | FORTRAN | COBOL | Go | JavaScript | Rust (2025) |
|---|---|---|---|---|---|---|---|---|
| Official standard | ISO/IEC 9899:2024 | ISO/IEC 14882:2024 | ISO/IEC 8652:2023 | ISO/IEC 1539:2023 | ISO/IEC 1989:2023 | Go Spec (formal)* | ECMA-262 | None |
| First standardized | 1989 | 1998 | 1983 | 1966 | 1968 | 2012 | 1997 | Never |
| Language defined by | Formal standard | Formal standard | Formal standard | Formal standard | Formal standard | Formal spec | Formal standard | rustc behaviour |
| Independent compilers | 6+ mature | 5+ mature | 3+ mature | 5+ mature | 4+ mature | 2 (gc, gccgo) | 10+ engines | ~0 |
| Safety-critical cert possible | Yes | Yes | Yes | Yes | Yes | Limited | N/A | Pay Ferrocene |
| 2015 code still compiles in 2035 | ~100% | ~99% | ~100% | ~100% | ~100% | ~95% | ~99% | <30%? |
* Go has a formal, normative specification maintained by the Go team, but it is not an ISO/IEC standard like C, C++, or Ada.
Yes, even JavaScript – the language systems programmers love to mock – has had a formal ECMA standard since 1997. Rust, the "serious systems language," has been around for 10+ years and still doesn't have one.
The causal chain nobody talks about:
No standard
→ No stable foundation to build against
→ Everything depends on specific rustc version
→ Crates depend on specific rustc behavior
→ When rustc changes, crates must change
→ Synchronized ecosystem breakage
→ DEPENDENCY HELL
In C, the causal chain is different:
ISO standard exists
→ You write against the standard
→ Compiler is a replaceable tool
→ Dependencies target the standard, not the compiler
→ 20 years later: still compiles
→ STABILITY
This isn't a tooling problem. It's not a "skill issue." It's an architectural decision that Rust made, and now the entire ecosystem inherits its consequences.
Here's a delicious irony that should make every Rust evangelist uncomfortable:
This isn't speculation. From Ferrous Systems' own documentation:
Think about what this means:
Rust is marketed as the "safe" language for systems programming. Yet when actual safety-critical industries (medical devices, cars, trains, aircraft) want to use it, they cannot – unless a third party first writes the specification that the Rust Foundation refuses to produce.
This is like a car manufacturer marketing their vehicle as "the safest on the road" – but when regulators ask for crash test data, the manufacturer says "just trust us, we tested it internally." So a third-party company has to independently certify the car, at their own expense, and charge customers for access to that certification.
The FLS was donated to the Rust Project in March 2025.
But note: FLS was born from third-party necessity, not core team foresight. Ferrocene achieved qualification in October 2023 (TÜV SÜD certificate). The Rust team's own Reference? It explicitly states it is "not normative" and "should not be taken as a specification" – not something you can certify against.
No adults in the room.
A language ecosystem where technical criticism is met with "skill issue" and mass dismissal, but a formal specification remains "not a priority" after 10 years, has made its governance choices clear.
When your pacemaker needs to run "memory safe" code, your safety auditor asks for a specification. Rust's official answer? "Trust the compiler."
(To be fair: even C is often restricted or banned in Class III medical devices under IEC 62304. But at least C has a standard to be evaluated against. Rust doesn't even have that.)
There's another cost that doesn't show up in CVE databases or dependency graphs: cognitive load.
The Rust community frames the borrow checker as "the compiler does the hard work for you." This is a half-truth that obscures a fundamental tradeoff.
In C, resource management is internalized. It's like driving a manual transmission - you don't consciously think about shifting gears, you just shift. Yes, mistakes happen. But this leaves cognitive room for domain logic, architecture, edge cases, and "what if this input is malformed?"
In Rust, resource management is externalized to the compiler. You must translate your mental model into the borrow checker's language. That translation is cognitive load. And it doesn't fully disappear - every new function, every new struct, every lifetime annotation demands it again.
Here's the key distinction that Rust evangelists consistently miss:
An experienced C programmer thinks: "This pointer comes from the caller, I shouldn't free it." Done. One mental note, move on.
A Rust programmer must express that same knowledge as lifetime annotations, reference types, ownership transfers - and if the borrow checker disagrees with your (correct) mental model, you refactor until it's satisfied. The knowledge was already there. The proof is the tax.
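To make the tax concrete, here's a minimal sketch - parse_header is a hypothetical function invented for illustration. It encodes exactly the C programmer's one-sentence mental note, but as a signature the borrow checker can verify:

```rust
// The mental note: "this buffer belongs to the caller; I only borrow it,
// and what I return must not outlive it." In Rust, the note is the signature:
fn parse_header<'a>(buf: &'a [u8]) -> Option<&'a [u8]> {
    // The returned slice borrows from `buf`; the lifetime 'a is the
    // machine-checked form of the comment a C programmer would write.
    buf.strip_prefix(b"HDR:")
}
```

(Lifetime elision would let you omit 'a in a case this simple; the obligation it expresses remains either way.)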
In November 2025, Cloudflare suffered a significant outage. The code was written in Rust. The borrow checker was satisfied. Memory safety was guaranteed.
The bug? A malformed input file that the code didn't handle correctly.
This illustrates what we might call the Cloudflare Paradox: a system can be provably memory-safe and still fall over on the first input its authors didn't anticipate.
But here's the insidious part: when a language markets itself as "safe," developers develop a false sense of security. "It compiles, therefore it's correct" becomes an unconscious heuristic. The cognitive attention that would have gone to "what if this file is malformed?" gets consumed by wrestling with lifetimes instead.
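A minimal sketch of the pattern - invented for illustration, not Cloudflare's actual code, and load_limits is a hypothetical name. Both versions are equally memory-safe; only one survives a malformed file:

```rust
use std::fs;

// Borrow-checker approved, memory-safe, and one bad line away from a panic:
fn load_limits_v1(path: &str) -> Vec<u32> {
    fs::read_to_string(path).unwrap()    // panics if the file is unreadable
        .lines()
        .map(|l| l.parse().unwrap())     // panics on the first malformed line
        .collect()
}

// The robust version. Nothing here is ownership or lifetimes - it's plain
// defensive programming, which no type system supplies for free:
fn load_limits_v2(path: &str) -> Result<Vec<u32>, String> {
    let text = fs::read_to_string(path).map_err(|e| e.to_string())?;
    text.lines()
        .map(|l| l.parse().map_err(|_| format!("malformed line: {l:?}")))
        .collect()
}
```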
The Rust community's response to such incidents is predictable: "That's not a memory safety bug, that's a logic bug. Rust never claimed to prevent logic bugs."
Technically true. But also missing the point entirely.
The question isn't whether Rust claims to prevent logic bugs. The question is: where does the cognitive attention freed by "memory safety" actually go?
We don't have studies on this. But we do have incident reports. And they suggest the answer isn't as flattering as the marketing implies.
Let's look at something every backend developer needs: a thread pool with callbacks. This is not an exotic data structure - it's bread-and-butter concurrent programming.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define QUEUE_CAPACITY 64

typedef struct {
    void (*callback)(void* data); // Function pointer
    void* data;                   // Data pointer
} Task;

typedef struct {
    Task tasks[QUEUE_CAPACITY];
    int count, head;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
    int shutdown;
} TaskQueue;

TaskQueue queue = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .not_empty = PTHREAD_COND_INITIALIZER,
};

void queue_push(void (*cb)(void*), void* data) {
    pthread_mutex_lock(&queue.lock);
    // Sketch simplification: assumes the queue never fills.
    int idx = (queue.head + queue.count) % QUEUE_CAPACITY;
    queue.tasks[idx].callback = cb;
    queue.tasks[idx].data = data;
    queue.count++;
    pthread_cond_signal(&queue.not_empty);
    pthread_mutex_unlock(&queue.lock);
}

void* worker(void* arg) {
    (void)arg;
    while (1) {
        pthread_mutex_lock(&queue.lock);
        while (queue.count == 0 && !queue.shutdown)
            pthread_cond_wait(&queue.not_empty, &queue.lock);
        if (queue.count == 0) {  // woke for shutdown, queue already drained
            pthread_mutex_unlock(&queue.lock);
            break;
        }
        Task t = queue.tasks[queue.head];
        queue.head = (queue.head + 1) % QUEUE_CAPACITY;
        queue.count--;
        pthread_mutex_unlock(&queue.lock);
        if (t.callback == NULL) break;  // NULL callback doubles as a poison pill
        t.callback(t.data);
    }
    return NULL;
}
The mental model: pointers go in, pointers come out, mutex protects shared state. That's it.
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// First, you need to understand this type signature:
type Job = Box<dyn FnOnce() + Send + 'static>;
//         ^^^     ^^^^^^^^   ^^^^   ^^^^^^^
//          |         |         |    "no short lifetimes"
//          |         |    "safe to send across threads"
//          |    "callable once, dynamic dispatch"
//      "heap-allocated, owned"

struct TaskQueue {
    jobs: Mutex<VecDeque<Option<Job>>>, // None doubles as the shutdown signal
    condvar: Condvar,
}

struct ThreadPool {
    workers: Vec<thread::JoinHandle<()>>,
    queue: Arc<TaskQueue>, // Arc: shared ownership across all workers
}

impl ThreadPool {
    fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static, // Every callback needs these bounds
    {
        let mut jobs = self.queue.jobs.lock().unwrap();
        jobs.push_back(Some(Box::new(f)));
        self.queue.condvar.notify_one();
    }
}
// Want shared state? More wrappers:
let results = Arc::new(Mutex::new(Vec::new()));
for i in 0..20 {
let results = Arc::clone(&results); // Clone Arc each iteration
pool.execute(move || { // move: closure takes ownership
let mut r = results.lock().unwrap();
r.push(i);
});
}
- Arc - atomic reference counting for shared ownership
- Mutex - mutual exclusion for interior mutability
- Condvar - condition variable for signaling
- Box<dyn T> - heap allocation with dynamic dispatch
- FnOnce - closure trait for one-time callables
- Send - marker trait for thread-safe transfer
- 'static - lifetime bound meaning "no borrowed references"
- move - keyword to transfer ownership into the closure
- Arc::clone() - explicit cloning before each closure
- .lock().unwrap() - fallible lock acquisition

Watch what happens if you forget Arc::clone() in Rust:
for i in 0..20 {
// OOPS: forgot Arc::clone()
pool.execute(move || {
results.lock().unwrap().push(i);
});
}
// ERROR: value moved in previous iteration
The compiler catches this. Good. But the fix isn't "add a mutex" - you already have a mutex. The fix requires understanding Rust's ownership model deeply enough to know that Arc::clone() creates a new owned handle that can be moved into the closure independently.
In C, you'd just pass the pointer. The pointer doesn't care how many functions reference it.
"But Rust prevents data races!"

Yes. Rust's type system prevents data races at compile time. This is genuinely valuable.
But notice what it doesn't prevent:

- Deadlocks: two locks acquired in opposite order compile cleanly (see the sketch below)
- Livelock and starvation
- Logic races: operations that are individually synchronized but wrong in combination
- Unhandled malformed input - the Cloudflare class of bug

These are the bugs that actually kill production systems. Data races are relatively rare in well-structured C code with proper mutex discipline. The hard concurrency bugs are logical, not mechanical - and Rust's type system provides zero help there.
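Here's the promised sketch: a classic lock-order inversion, using nothing beyond std. It compiles without a single warning, satisfies the borrow checker completely, and hangs forever:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t = thread::spawn(move || {
        let _ga = a2.lock().unwrap();             // worker: lock A first...
        thread::sleep(Duration::from_millis(50)); // widen the race window
        let _gb = b2.lock().unwrap();             // ...then wait on B forever
    });

    let _gb = b.lock().unwrap();                  // main: lock B first...
    thread::sleep(Duration::from_millis(50));
    let _ga = a.lock().unwrap();                  // ...then wait on A forever

    t.join().unwrap(); // never reached
}
```

Ownership is verified; lock ordering is not. The borrow checker proves who may touch the data, never the order in which the world gets locked.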
Meanwhile, the cognitive overhead of Arc<Mutex<RefCell<Box<dyn Trait + Send + Sync + 'static>>>> is very real, very constant, and very much consuming attention that could go elsewhere.
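To see why that overhead is constant rather than one-off, here's a compiling sketch of a slightly tamer version of the same wrapper soup - Handler, SharedHandlers, and dispatch are invented for illustration:

```rust
use std::sync::{Arc, Mutex};

// A shared, thread-safe, heterogeneous list of callbacks:
trait Handler: Send {
    fn handle(&mut self, event: &str);
}

// Arc (sharing) -> Mutex (locking) -> Vec (storage) -> Box<dyn> (dispatch):
// four layers of machinery before any domain logic appears.
type SharedHandlers = Arc<Mutex<Vec<Box<dyn Handler>>>>;

fn dispatch(handlers: &SharedHandlers, event: &str) {
    for h in handlers.lock().unwrap().iter_mut() {
        h.handle(event); // finally, the actual work
    }
}
```

Every one of those layers is real syntax the reader must parse - and re-justify to the compiler - before reaching the line that does the job.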
When you raise these concerns in Rust communities, the response is often: "Sounds like a skill issue. Once you internalize the borrow checker, it becomes natural."
This is partially true - and completely irrelevant to the argument.
Yes, experienced Rust developers get faster at satisfying the borrow checker. But "faster" isn't "free." The cognitive overhead decreases but never reaches zero, because:
- Every new function and struct demands the translation again
- Rc<RefCell<Box<dyn Trait>>> patterns add layers of indirection

Compare this to C, where an experienced developer's resource management is genuinely automatic - not "faster translation to compiler syntax," but no translation needed at all.
The "skill issue" defense also carries an uncomfortable implication: if Rust requires mass "re-skilling" of the entire industry, that's not a feature - that's a cost. A multi-billion-dollar cost in training, reduced productivity during learning curves, and permanent cognitive overhead for patterns that were previously internalized.
This isn't a hypothetical. Here's an actual response from a Head of Engineering to criticism of Rust's cognitive overhead:
The comment was later edited to sound more professional - but it's still a strawman. The polished version claims I'm "advocating for developers to ignore code quality." I never said that. The original sentiment just said the quiet part out loud.
"But gccrs and rust-gcc are coming! We'll have compiler diversity!"
These projects are attempting to implement a language that has no specification. They're reverse-engineering rustc's behavior. Every edge case, every undefined behavior, every quirk – they have to match what rustc does, because rustc IS the definition.
This isn't like having GCC and Clang both implement ISO C.
This is like trying to build a second JVM by watching the first one and guessing.
GCC implements ISO C
Clang implements ISO C
MSVC implements ISO C
→ They agree because they follow the same spec
rustc implements... rustc
gccrs tries to copy rustc
mrustc tries to copy rustc
→ They can only agree by mimicking one implementation
How's that alternative compiler coming along? gccrs has been in development for 10+ years and still cannot compile the Rust core library - let alone complex async code. Without a spec to implement against, they're reverse-engineering a moving target.
| Year | Event | Estimated global damage if unmitigated |
|---|---|---|
| 2023–2025 | axum 0.6→0.7→0.8, hyper 0.14→1.0, tower breaking changes | Already happened: mass ecosystem churn, 3–6 month migrations |
| 2027–2029? | Next major version wave (pattern: every 2–3 years) | If pattern holds: 3–12 month migrations per affected project |
| 2028–2032 | First wave of abandoned 2024–2028 crates | 30–50% of crates.io effectively unmaintained |
| 2030–2035 | Edition churn + allocator & async-trait stabilizations | Every large Rust codebase becomes archaeology |
Heartbleed (2014) cost ~$500 million.
Log4Shell plus every OpenSSL disaster combined cost less than $5 billion.
A single synchronized dependency apocalypse across CDN edge, 5G core, payment processors, and blockchain nodes in ~2032?
Realistic downside: $50–500 billion.
"Just vendor your dependencies and lock everything!"
Sure. And now you have:

- A multi-gigabyte vendor directory you must now audit and patch yourself
- No upstream security fixes unless you backport them by hand
- A snapshot frozen in time, drifting further from the living ecosystem every month
That's not a solution. That's palliative care.
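And "palliative care" has a concrete shape. This is roughly the .cargo/config.toml that cargo vendor tells you to add - every build redirected to a frozen local snapshot:

```toml
# Redirect all crates.io dependencies to the local ./vendor snapshot:
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

From that moment on, security fixes arrive only if you re-vendor - which reopens the exact migration problem you were trying to escape.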
"But Rust has Editions! Backward compatibility is guaranteed!"
The Edition system promises compatibility. But:

- Editions cover language syntax, not the library ecosystem: axum 0.6→0.7→0.8 and hyper 0.14→1.0 broke users on every edition equally
- The guarantee is backed by one team and one implementation - there is no standard, and no second compiler, to hold anyone to it
When Python promised long-term Python 2 support, they eventually broke that promise. What makes the rustc team different? Good intentions?
A standard is a contract. A promise is just... a promise.
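To make "a promise, not a contract" concrete, here's a small sketch of what an edition actually governs - surface-language semantics, nothing more:

```rust
// Edition 2018: `xs.into_iter()` on an array yielded references (&i32)
// via auto-ref. Edition 2021: it yields values (i32). Same compiler binary;
// only the `edition` field in Cargo.toml changes what this line means.
fn main() {
    let xs = [1, 2, 3];
    let doubled: Vec<i32> = xs.into_iter().map(|x| x * 2).collect();
    println!("{doubled:?}");
    // What no edition protects you from: tokio 0.x -> 1.0, hyper 0.14 -> 1.0,
    // axum 0.7 -> 0.8. Those break on every edition equally.
}
```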
| Cost Type | C's CVE Tax | Rust's Migration Tax |
|---|---|---|
| Predictability | Known, bounded, insurable | Unknown, unbounded, uninsurable |
| Frequency | Occasional incidents | Continuous churn |
| Per-incident cost | Millions | Potentially billions (synchronized) |
| Mitigation | Static analyzers, sanitizers, careful coding, audits | Vendoring and prayer |
Fifteen years from now, the cheapest, most maintainable, and most predictable code running in production will still be the boring C daemon someone wrote in 2023 that still builds with:
make && ./service
No 2 GB vendor directory.
No Pin<Box<dyn Future<Output = Result<_, anyhow::Error>>>>.
No six-month migration because tokio finally stabilized async traits in edition 2033.
No compiler-as-specification roulette.
| Language | Has formal standard | Still compiles in 2038 | Dependency-hell tax | Memory-safety CVE tax |
|---|---|---|---|---|
| C | Yes (ISO) | Yes | ~0 | Exists, but known & bounded |
| Rust 2025-era | No | Only if vendor-locked in 2027 | Hundreds of billions | Zero (until the rewrite bill arrives) |
Memory safety was never free.
We just haven't been sent the invoice yet.
And when a Rustacean tells you this analysis is a "skill issue," remember the causal chain above: no standard, no stable foundation, no escape from the churn. No amount of individual skill fixes an architectural decision.
After all the criticism above, let me be clear about one thing:
The borrow checker, for all its ergonomic costs, demonstrated that a type system can encode ownership and lifetimes in a way that eliminates entire classes of bugs at compile time. Before Rust, this was theoretical. After Rust, it's proven.
But here's the thing about proofs of concept: they're not the same as solutions.
The real problem isn't that Rust chose the borrow checker. It's that Rust was designed in 2010–2015, before we had the tools to explore the design space properly.
Today, we have:

- Mature proof assistants and mechanized-semantics tooling that simply didn't exist at this scale in 2010
- Formalization efforts like RustBelt and a-mir-formality, which show that core Rust can be specified - after the fact, with great effort
- A decade of field data on the borrow checker's real-world ergonomics
A language designed today could start with mechanized semantics from day one - not bolted on later by a third party for safety certification. It could explore alternatives to the borrow checker model: different approaches to linear types, region-based memory, capability systems, or something entirely new that optimizes for both safety and ergonomics.
Rust's a-mir-formality project is trying to formalize the MIR (Mid-level Intermediate Representation) after the fact. It's valuable work. But retrofitting formalization onto a mature language is fundamentally harder than designing with it from the start.
This is the uncomfortable truth that neither C diehards nor Rust evangelists want to hear:
C will remain because it has a 50-year track record, formal standards, multiple independent compilers, and an ecosystem that doesn't break every 18 months.
Rust will remain because it proved something important and built real infrastructure on that proof.
But somewhere out there - maybe being designed right now, maybe waiting for someone to start - is a language that learns from both. One that has:

- A formal specification from day one, with mechanized semantics behind it
- Memory safety without the borrow checker's ceremony
- Multiple independent implementations, so that no single compiler is the definition of the language
When that language arrives, it won't replace C overnight. Nothing does. But it will make both C's manual memory management and Rust's borrow checker ceremony look like the historical artifacts they are.
Until then? Pick your tradeoffs carefully. And don't let anyone tell you there's only one right answer.