Rust: Proof of Concept, Not Replacement

CVE is cheap compared to this.

[Illustration: Rust dependency hell – a burning bridge labeled 'memory safety']

You were told Rust would kill C.
You were told memory safety is the future.
You were told the borrow checker would save billions.

Here's the part nobody wants to say out loud in 2025:

In roughly 8–12 years the biggest cost in software will no longer be memory-safety CVE payouts.
It will be the multi-hundred-billion-dollar bill from trying to keep 2025-era Rust codebases alive when the next wave of breaking changes hits tokio, axum, hyper, tower - and eventually serde - all cascading through your dependency tree at once.

The Elephant in the Room: Rust Has No Standard

Before we talk about dependency hell, let's talk about why it's inevitable.

Rust has no formal specification.
The language is defined by whatever rustc (the compiler) does today. Not by a standard. Not by a spec. By a compiler that changes every 6 weeks.

Yes, "The Rust Reference" exists. It explicitly states: "this book is not normative" and "should not be taken as a specification for the Rust language." It also warns: "This book is incomplete." That's not a specification – that's documentation.
Language | Official standard | First standardized | Language defined by | Independent compilers | Safety-critical cert possible | 2015 code compiling in 2035
C | ISO/IEC 9899:2024 | 1989 | Formal standard | 6+ mature | Yes | ~100%
C++ | ISO/IEC 14882:2024 | 1998 | Formal standard | 5+ mature | Yes | ~99%
Ada | ISO/IEC 8652:2023 | 1983 | Formal standard | 3+ mature | Yes | ~100%
FORTRAN | ISO/IEC 1539:2023 | 1966 | Formal standard | 5+ mature | Yes | ~100%
COBOL | ISO/IEC 1989:2023 | 1968 | Formal standard | 4+ mature | Yes | ~100%
Go | Go Spec (formal)* | 2012 | Formal spec | 2 (gc, gccgo) | Limited | ~95%
JavaScript | ECMA-262 | 1997 | Formal standard | 10+ engines | N/A | ~99%
Rust (2025) | None | Never | rustc behaviour | ~0 | Pay Ferrocene | <30%?

* Go has a formal, normative specification maintained by the Go team, but it is not an ISO/IEC standard like C, C++, or Ada.

Yes, even JavaScript – the language systems programmers love to mock – has had a formal ECMA standard since 1997. Rust, the "serious systems language," has been around for 10+ years and still doesn't have one.

Why This Causes Dependency Hell

The causal chain nobody talks about:

No standard
    → No stable foundation to build against
        → Everything depends on specific rustc version
            → Crates depend on specific rustc behavior
                → When rustc changes, crates must change
                    → Synchronized ecosystem breakage
                        → DEPENDENCY HELL

In C, the causal chain is different:

ISO standard exists
    → You write against the standard
        → Compiler is a replaceable tool
            → Dependencies target the standard, not the compiler
                → 20 years later: still compiles
                    → STABILITY

This isn't a tooling problem. It's not a "skill issue." It's an architectural decision that Rust made, and now the entire ecosystem inherits its consequences.

The Ferrocene Absurdity

Here's a delicious irony that should make every Rust evangelist uncomfortable:

Ferrous Systems had to write their own specification for Rust – the Ferrocene Language Specification (FLS) – to get ISO 26262 (ASIL D, automotive) and IEC 61508 (SIL 4, covering railway via IEC 62278) safety certifications.

Why? Because safety-critical industries require a formal, verifiable specification. And Rust doesn't have one.

This isn't speculation. From Ferrous Systems' own documentation:

"The Ferrocene Language Specification (FLS) is an effort of Ferrocene, a collaboration between Ferrous Systems and AdaCore to bring a qualified Rust toolchain to safety-critical environments. [...] It was created as one of the prerequisites for qualifying Ferrocene." - Ferrous Systems Blog: The Ferrocene Language Specification is here! (2023)

Think about what this means:

Rust is marketed as the "safe" language for systems programming. Yet when actual safety-critical industries (medical devices, cars, trains, aircraft) want to use it, they cannot – unless a third party first writes the specification that the Rust project itself never produced.

This is like a car manufacturer marketing their vehicle as "the safest on the road" – but when regulators ask for crash test data, the manufacturer says "just trust us, we tested it internally." So a third-party company has to independently certify the car, at their own expense, and charge customers for access to that certification.

The FLS was donated to the Rust Project in March 2025:

"The Rust Project would like to thank Ferrous Systems for donating the FLS (formerly the Ferrocene Language Specification) to the Rust Project for its continued maintenance and development. [...] The FLS provides a structured and detailed reference for Rust's syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts." - Rust Project Blog: Adopting the FLS (March 2025)

But note: FLS was born from third-party necessity, not core team foresight. Ferrocene achieved qualification in October 2023 (TÜV SÜD certificate). The Rust team's own Reference? It explicitly states it is "not normative" and "should not be taken as a specification" – not something you can certify against.

No adults in the room.

A language ecosystem where technical criticism is met with "skill issue" and mass dismissal, but a formal specification remains "not a priority" after 10 years, has made its governance choices clear.


When your pacemaker needs to run "memory safe" code, your safety auditor asks for a specification. Rust's official answer? "Trust the compiler."

(To be fair: even C is often restricted or banned in Class III medical devices under IEC 62304. But at least C has a standard to be evaluated against. Rust doesn't even have that.)

The Cognitive Load Nobody Talks About

There's another cost that doesn't show up in CVE databases or dependency graphs: cognitive load.

The Rust community frames the borrow checker as "the compiler does the hard work for you." This is a half-truth that obscures a fundamental tradeoff.

C/C++ (Experienced Developer)

Resource management is internalized. It's like driving a manual transmission - you don't consciously think about shifting gears, you just do it. Yes, mistakes happen. But this leaves cognitive room for domain logic, architecture, edge cases, and "what if this input is malformed?"

Rust (Any Developer)

Resource management is externalized to the compiler. You must translate your mental model into the borrow checker's language. That translation is cognitive load. And it doesn't fully disappear - every new function, every new struct, every lifetime annotation demands it again.

Here's the key distinction that Rust evangelists consistently miss:

Rust doesn't ask: "Is this safe?"
Rust asks: "Can you PROVE to me it's safe, in MY specific syntax?"

Those are two fundamentally different cognitive tasks.

An experienced C programmer thinks: "This pointer comes from the caller, I shouldn't free it." Done. One mental note, move on.

A Rust programmer must express that same knowledge as lifetime annotations, reference types, ownership transfers - and if the borrow checker disagrees with your (correct) mental model, you refactor until it's satisfied. The knowledge was already there. The proof is the tax.
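A minimal sketch of that proof tax (illustrative only, not from any real codebase): the C programmer's mental note "the caller owns this buffer, I only read it" has to be restated, in the compiler's own vocabulary, before anything compiles.

// The note "I hold a read-only view into the caller's buffer" becomes a
// type-level statement, complete with a lifetime parameter.
struct Parser<'a> {
    input: &'a [u8],
}

impl<'a> Parser<'a> {
    // The borrow, its duration, and its read-only nature are all part of
    // the proof the compiler demands.
    fn new(input: &'a [u8]) -> Parser<'a> {
        Parser { input }
    }

    fn first_byte(&self) -> Option<u8> {
        self.input.first().copied()
    }
}

fn main() {
    let buf = vec![0x42, 0x00];
    let p = Parser::new(&buf);          // the caller still owns `buf`
    assert_eq!(p.first_byte(), Some(0x42));
}

None of this is wrong - it's the same knowledge, now payable in annotations.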

The Cloudflare Paradox

In November 2025, Cloudflare suffered a significant outage. The code at the center of it was written in Rust. The borrow checker was satisfied. Memory safety was guaranteed.

The bug? A malformed input file that the code didn't handle correctly.

This illustrates what we might call the Cloudflare Paradox:

Memory safety ≠ correctness.
The borrow checker guarantees your pointers are valid. It says nothing about whether your logic is correct, your inputs are validated, or your edge cases are handled.
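A minimal sketch of the paradox (hypothetical config-parsing code, not Cloudflare's actual source): every pointer is valid, every lifetime checks out, and the service still dies on the first malformed line.

fn parse_limit(line: &str) -> u32 {
    // Borrow checker: satisfied. Hidden assumption: "the file is always well-formed."
    line.split('=')
        .nth(1)
        .unwrap()            // panics if there is no '='
        .trim()
        .parse::<u32>()
        .unwrap()            // panics if the value isn't a number
}

fn main() {
    println!("{}", parse_limit("max_connections = 512"));  // prints 512
    println!("{}", parse_limit("max_connections"));        // panics: outage
}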

But here's the insidious part: when a language markets itself as "safe," developers develop a false sense of security. "It compiles, therefore it's correct" becomes an unconscious heuristic. The cognitive attention that would have gone to "what if this file is malformed?" gets consumed by wrestling with lifetimes instead.

The Rust community's response to such incidents is predictable: "That's not a memory safety bug, that's a logic bug. Rust never claimed to prevent logic bugs."

Technically true. But also missing the point entirely.

The question isn't whether Rust claims to prevent logic bugs. The question is: where does the cognitive attention freed by "memory safety" actually go?

We don't have studies on this. But we do have incident reports. And they suggest the answer isn't as flattering as the marketing implies.

The Threading Tax: A Concrete Example

Let's look at something every backend developer needs: a thread pool with callbacks. This is not an exotic data structure - it's bread-and-butter concurrent programming.

C Version: 60 lines, zero cognitive overhead

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    void (*callback)(void* data);  // Function pointer
    void* data;                     // Data pointer
} Task;

typedef struct {
    Task* tasks;
    int capacity, count, head;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
    int shutdown;
} TaskQueue;

TaskQueue queue = {0};

void queue_init(int capacity) {
    queue.tasks = malloc(sizeof(Task) * capacity);
    queue.capacity = capacity;
    pthread_mutex_init(&queue.lock, NULL);
    pthread_cond_init(&queue.not_empty, NULL);
}

void queue_push(void (*cb)(void*), void* data) {
    pthread_mutex_lock(&queue.lock);
    int idx = (queue.head + queue.count) % queue.capacity;  // simplified: no full-queue check
    queue.tasks[idx].callback = cb;
    queue.tasks[idx].data = data;
    queue.count++;
    pthread_cond_signal(&queue.not_empty);
    pthread_mutex_unlock(&queue.lock);
}

void* worker(void* arg) {
    while (1) {
        pthread_mutex_lock(&queue.lock);
        while (queue.count == 0 && !queue.shutdown)
            pthread_cond_wait(&queue.not_empty, &queue.lock);
        Task t = queue.tasks[queue.head];
        queue.head = (queue.head + 1) % queue.capacity;
        queue.count--;
        pthread_mutex_unlock(&queue.lock);
        
        if (t.callback == NULL) break;
        t.callback(t.data);
    }
    return NULL;
}

The mental model: pointers go in, pointers come out, mutex protects shared state. That's it.

Rust Version: Same functionality, triple the ceremony

use std::sync::{Arc, Mutex, Condvar};
use std::collections::VecDeque;
use std::thread;

// First, you need to understand this type signature:
type Job = Box<dyn FnOnce() + Send + 'static>;
//         ^^^  ^^^^^^^^^^^   ^^^^   ^^^^^^^
//         |    |             |      "no short lifetimes"
//         |    |             "safe to send across threads"
//         |    "callable once, dynamic dispatch"
//         "heap-allocated, owned"

// The queue itself needs its own wrapped state:
struct TaskQueue {
    jobs: Mutex<VecDeque<Option<Job>>>,  // Option: None is the shutdown signal
    condvar: Condvar,
}

struct ThreadPool {
    workers: Vec<thread::JoinHandle<()>>,
    queue: Arc<TaskQueue>,  // Arc: shared ownership
}

impl ThreadPool {
    fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,  // Every callback needs these bounds
    {
        let mut jobs = self.queue.jobs.lock().unwrap();
        jobs.push_back(Some(Box::new(f)));
        self.queue.condvar.notify_one();
    }
}

// Want shared state? More wrappers:
let results = Arc::new(Mutex::new(Vec::new()));

for i in 0..20 {
    let results = Arc::clone(&results);  // Clone Arc each iteration
    pool.execute(move || {               // move: closure takes ownership
        let mut r = results.lock().unwrap();
        r.push(i);
    });
}

Count the concepts a Rust developer must juggle:

Arc - atomic reference counting for shared ownership
Mutex - mutual exclusion for interior mutability
Condvar - condition variable for signaling
Box<dyn T> - heap allocation with dynamic dispatch
FnOnce - closure trait for one-time callable
Send - marker trait for thread-safe transfer
'static - lifetime bound meaning "no borrowed references"
move - keyword to transfer ownership into closure
Arc::clone() - explicit cloning before each closure
.lock().unwrap() - fallible lock acquisition

In C, you need: mutex, condvar, function pointer, void pointer.
Four concepts. Not eleven.

The Error That Teaches Nothing

Watch what happens if you forget Arc::clone() in Rust:

for i in 0..20 {
    // OOPS: forgot Arc::clone()
    pool.execute(move || {
        results.lock().unwrap().push(i);
    });
}
// error[E0382]: use of moved value: `results`
//               value moved into closure, in previous iteration of loop

The compiler catches this. Good. But the fix isn't "add a mutex" - you already have a mutex. The fix requires understanding Rust's ownership model deeply enough to know that Arc::clone() creates a new owned handle that can be moved into the closure independently.
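For completeness, the fix (the same pattern as the earlier snippet, shown here with the reasoning spelled out):

for i in 0..20 {
    // A new owned handle per iteration; the clone moves into the closure,
    // the original `results` stays available for the next loop round.
    let results = Arc::clone(&results);
    pool.execute(move || {
        results.lock().unwrap().push(i);
    });
}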

In C, you'd just pass the pointer. The pointer doesn't care how many functions reference it.

The C programmer's bug: "I forgot to lock the mutex."

The Rust programmer's bug: "I forgot to clone the Arc before moving it into the closure because the borrow checker's ownership model requires each closure to have independent ownership of the reference-counted handle."

One of these fits in working memory. The other requires a whiteboard.

But Rust Catches Data Races!

Yes. Rust's type system prevents data races at compile time. This is genuinely valuable.

But notice what it doesn't prevent:

Deadlocks and livelocks
Lock-ordering mistakes and starvation
Unvalidated inputs and unhandled edge cases
Plain wrong business logic

These are the bugs that actually kill production systems. Data races are relatively rare in well-structured C code with proper mutex discipline. The hard concurrency bugs are logical, not mechanical - and Rust's type system provides zero help there, as the sketch below shows.
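A minimal sketch (not taken from any real incident): this program satisfies the borrow checker completely, contains no data race, and still hangs forever because two threads take the same two locks in opposite order.

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0u32));
    let b = Arc::new(Mutex::new(0u32));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let _ga = a1.lock().unwrap();              // thread 1: takes `a` first...
        thread::sleep(Duration::from_millis(50));
        let _gb = b1.lock().unwrap();              // ...then waits forever on `b`
    });

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t2 = thread::spawn(move || {
        let _gb = b2.lock().unwrap();              // thread 2: takes `b` first...
        thread::sleep(Duration::from_millis(50));
        let _ga = a2.lock().unwrap();              // ...then waits forever on `a`
    });

    t1.join().unwrap();                            // never returns: lock-ordering deadlock
    t2.join().unwrap();
}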

Meanwhile, the cognitive overhead of Arc<Mutex<RefCell<Box<dyn Trait + Send + Sync + 'static>>>> is very real, very constant, and very much consuming attention that could go elsewhere.

The "Skill Issue" Defense

When you raise these concerns in Rust communities, the response is often: "Sounds like a skill issue. Once you internalize the borrow checker, it becomes natural."

This is partially true - and completely irrelevant to the argument.

Yes, experienced Rust developers get faster at satisfying the borrow checker. But "faster" isn't "free." The cognitive overhead decreases but never reaches zero, because:

  1. Every new codebase has new ownership patterns to decode
  2. Every new function signature requires lifetime reasoning
  3. Every refactor risks borrow checker regressions (see the sketch after this list)
  4. The ecosystem's Rc<RefCell<Box<dyn Trait>>> patterns add layers of indirection
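A minimal sketch of point 3 (hypothetical code, deliberately non-compiling): inlined, the update logic is accepted, because the borrow checker allows disjoint borrows of different fields inside one function body. Extract it into a helper - a perfectly reasonable refactor - and the same logic is rejected.

struct Stats {
    samples: Vec<f64>,
    max_seen: f64,
}

impl Stats {
    fn update_max(&mut self, v: f64) {
        if v > self.max_seen {
            self.max_seen = v;
        }
    }

    fn rescan(&mut self) {
        for &s in &self.samples {
            self.update_max(s);  // error[E0502]: cannot borrow `*self` as mutable
                                 // because it is also borrowed as immutable
        }
    }
}

The code was correct before the refactor and is still correct after it; only the proof broke.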

Compare this to C, where an experienced developer's resource management is genuinely automatic - not "faster translation to compiler syntax," but no translation needed at all.

The "skill issue" defense also carries an uncomfortable implication: if Rust requires mass "re-skilling" of the entire industry, that's not a feature - that's a cost. A multi-billion-dollar cost in training, reduced productivity during learning curves, and permanent cognitive overhead for patterns that were previously internalized.

This isn't a hypothetical. Here's an actual response from a Head of Engineering to criticism of Rust's cognitive overhead:

"Rust isn't for everyone. You need to be meticulous to write code in Rust. If you're not meticulous, then maybe go sell cookies instead of programming." - LinkedIn comment

The comment was later edited to sound more professional - but it's still a strawman. The polished version claims I'm "advocating for developers to ignore code quality." I never said that. The original sentiment just said the quiet part out loud.

"Not everyone can internalize manual memory management, just as not everyone drives a manual transmission or plays chess at a competitive level. That's fine - different tools for different people. But when someone suggests critics should 'go sell cookies instead of programming,' it's worth remembering: that mirror hangs in every bathroom."

The "Alternative Compilers" Myth

"But gccrs and rust-gcc are coming! We'll have compiler diversity!"

These projects are attempting to implement a language that has no specification. They're reverse-engineering rustc's behavior. Every edge case, every unspecified corner, every quirk – they have to match what rustc does, because rustc IS the definition.

This isn't like having GCC and Clang both implement ISO C.
This is like trying to build a second JVM by watching the first one and guessing.

C Compiler Ecosystem

GCC implements ISO C
Clang implements ISO C
MSVC implements ISO C
→ They agree because they follow the same spec

Rust Compiler "Ecosystem"

rustc implements... rustc
gccrs tries to copy rustc
mrustc tries to copy rustc
→ They can only agree by mimicking one implementation

How's that alternative compiler coming along? gccrs has been in development for 10+ years and still cannot compile the Rust core library - let alone complex async code. Without a spec to implement against, they're reverse-engineering a moving target.

The Timeline Nobody Is Pricing In (Yet)

Year | Event | Estimated global damage if unmitigated
2023–2025 | axum 0.6→0.7→0.8, hyper 0.14→1.0, tower breaking changes | Already happened: mass ecosystem churn, 3–6 month migrations
2027–2029? | Next major version wave (pattern: every 2–3 years) | If the pattern holds: 3–12 month migrations per affected project
2028–2032 | First wave of abandoned 2024–2028 crates | 30–50% of crates.io effectively unmaintained
2030–2035 | Edition churn + allocator & async-trait stabilizations | Every large Rust codebase becomes archaeology

Heartbleed (2014) cost ~$500 million.
Log4Shell + every OpenSSL disaster combined < $5 billion.

A single synchronized dependency apocalypse across CDN edge, 5G core, payment processors, and blockchain nodes in ~2032?
Realistic downside: $50–500 billion.

The "Cargo.lock" Cope

"Just vendor your dependencies and lock everything!"

Sure. And now you have:

A multi-gigabyte vendor directory checked into every repository
Transitive dependencies frozen at whatever versions you locked
Security fixes you can't take without unfreezing - and re-fighting - the whole tree
A codebase welded to one rustc version while the ecosystem moves on without you

That's not a solution. That's palliative care.

The "Edition" Cope

"But Rust has Editions! Backward compatibility is guaranteed!"

The Edition system promises compatibility. But:

  1. Editions don't cover standard library behavior changes
  2. Editions don't cover compiler optimization changes
  3. Editions don't cover crate ecosystem compatibility
  4. Editions are promises from the rustc team – not guarantees from an independent standard body

Python once promised a long, comfortable Python 2 transition; it still ended in a hard 2020 cutoff and a decade of painful, forced migration. What makes the rustc team different? Good intentions?

A standard is a contract. A promise is just... a promise.

CVE Tax vs. Migration Tax

Cost type | C's CVE Tax | Rust's Migration Tax
Predictability | Known, bounded, insurable | Unknown, unbounded, uninsurable
Frequency | Occasional incidents | Continuous churn
Per-incident cost | Millions | Potentially billions (synchronized)
Mitigation | Static analyzers, sanitizers, careful coding, audits | Vendoring and prayer

The Punchline

Fifteen years from now, the cheapest, most maintainable, and most predictable code running in production will still be the boring C daemon someone wrote in 2023 that still builds with:

make && ./service

No 2 GB vendor directory.
No Pin<Box<dyn Future<Output = Result<_, anyhow::Error>>>>.
No six-month migration because tokio finally stabilized async traits in edition 2033.
No compiler-as-specification roulette.

Final Score, Circa 2038

Language | Has formal standard | Still compiles in 2038 | Dependency-hell tax | Memory-safety CVE tax
C | Yes (ISO) | Yes | ~0 | Exists, but known & bounded
Rust (2025-era) | No | Only if vendor-locked in 2027 | Hundreds of billions | Zero (until the rewrite bill arrives)

The Bottom Line

Memory safety was never free.
We just haven't been sent the invoice yet.

And when a Rustacean tells you this analysis is a "skill issue," remember:

The existence of a specification isn't a skill.
It's a governance decision. And Rust made theirs.

Epilogue: The Waypoint, Not the Destination

After all the criticism above, let me be clear about one thing:

Rust proved something important.
Compile-time memory safety without garbage collection is achievable. That's not nothing. That's a genuine contribution to computer science.

The borrow checker, for all its ergonomic costs, demonstrated that a type system can encode ownership and lifetimes in a way that eliminates entire classes of bugs at compile time. Before Rust, this was theoretical. After Rust, it's proven.

But here's the thing about proofs of concept: they're not the same as solutions.

Mechanized Semantics From the Beginning

The real problem isn't that Rust chose the borrow checker. It's that Rust was designed in 2010–2015, before we had the tools to explore the design space properly.

Today, we have:

Mature proof assistants (Coq/Rocq, Isabelle/HOL, Lean) that language teams actually use
RustBelt, which formalized a realistic subset of Rust's type system - after the fact
WebAssembly, a production language shipped with a formal specification from day one
CompCert, a formally verified C compiler, and executable-semantics frameworks like K

A language designed today could start with mechanized semantics from day one - not bolted on later by a third party for safety certification. It could explore alternatives to the borrow checker model: different approaches to linear types, region-based memory, capability systems, or something entirely new that optimizes for both safety and ergonomics.
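To make "mechanized semantics" concrete, here is a deliberately tiny Lean 4 sketch of a toy expression language (purely illustrative, unrelated to Rust's actual formalization work): the semantics is an executable definition, and properties of it are theorems the machine checks.

-- The toy language and its semantics live in the same checked artifact.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

def eval : Expr → Nat
  | .lit n   => n
  | .add a b => eval a + eval b

-- Machine-checked property of the semantics: addition commutes.
theorem eval_add_comm (a b : Expr) :
    eval (.add a b) = eval (.add b a) := by
  simp only [eval]
  exact Nat.add_comm (eval a) (eval b)

The point isn't this particular toy; it's that the definition and the proofs about it live in one machine-checked artifact, which is exactly what "a spec from day one" buys a language.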

Rust's a-mir-formality project is trying to formalize the MIR (Mid-level Intermediate Representation) after the fact. It's valuable work. But retrofitting formalization onto a mature language is fundamentally harder than designing with it from the start.

The Language That Replaces C Doesn't Exist Yet

This is the uncomfortable truth that neither C diehards nor Rust evangelists want to hear:

Rust proved that compile-time memory safety is achievable.
That's its lasting contribution.

But it's a waypoint, not the destination.

The language that truly replaces C/C++ in systems programming - with memory safety, ergonomic resource management, ecosystem stability, formal specification, and 50-year longevity - doesn't exist yet.

C will remain because it has a 50-year track record, formal standards, multiple independent compilers, and an ecosystem that doesn't break every 18 months.

Rust will remain because it proved something important and built real infrastructure on that proof.

But somewhere out there - maybe being designed right now, maybe waiting for someone to start - is a language that learns from both. One that has:

Compile-time memory safety without a garbage collector
Resource management that doesn't demand a fresh proof for knowledge you already have
A formal, mechanized specification from day one
An ecosystem and toolchain built for decades of stability, not six-week release cycles

When that language arrives, it won't replace C overnight. Nothing does. But it will make both C's manual memory management and Rust's borrow checker ceremony look like the historical artifacts they are.

Until then? Pick your tradeoffs carefully. And don't let anyone tell you there's only one right answer.