Rust Won’t Replace C++ (and That’s Okay)
TL;DR: C++ isn’t going anywhere. Its ecosystem, legacy, and embedded presence are irreplaceable. Rust shines as a complement — a way to introduce strong safety guarantees, fearless concurrency, and modern tooling in places where it matters most. The win isn’t replacement; it’s interoperability and risk reduction.
The honest bit nobody likes to say out loud

Every few months my timeline erupts with “Rust will kill C++” hot takes. It’s entertaining — and a little exhausting. I’ve worked on C++ codebases where a single header file had a longer history than some startups, and I’ve written Rust that felt like putting guardrails on an alpine road. After years of shipping both, here’s where I landed: Rust won’t replace C++… and that’s not a failure.
Think about the surface area C++ already owns: game engines, browsers, high-frequency trading, real‑time systems, CUDA-heavy ML runtimes, the firmware in machines you don’t want to brick. Rewrites aren’t just expensive; they’re existentially risky. Yet whenever we carved out a gnarly, crash‑prone corner and rewrote that part in Rust, the pager got quieter. The team slept better. And management didn’t have to approve a multi‑year rewrite with unknowable ROI. That’s the play.
False binary: rewrite vs. reality

The interesting question isn’t “Rust or C++?” — it’s “Where does Rust reduce risk the most, for the least disruption?” In practice, the answer clusters around:
- Memory‑unsafe edges: parsers, decoders, binary protocol handlers, IPC bridges, and plugin boundaries.
- Concurrency‑heavy modules: task schedulers, work queues, bounded ring buffers, lock‑free data structures.
- Untrusted input: anything that processes files from users or the network.
- Greenfield utilities around a large C++ core: CLIs, test harnesses, fuzzers, deployment tools.
These are high‑impact targets where Rust’s ownership model and type system pay rent immediately.
Why C++ is irreplaceable (and you should be glad)
1) Ecosystem gravity. C++ has decades of libraries and vendor SDKs: graphics (Vulkan, OpenGL, DirectX), compute (CUDA, SYCL), robotics (ROS), UI (Qt), game engines (Unreal), and deeply tuned math kernels. Rewriting these is fantasy; wrapping them is practical.
2) Legacy and longevity. Critical systems have C++ tentacles in build pipelines, perf counters, testing infra, and team expertise. Teams know its footguns and mitigations (ASan/UBSan/TSan, fuzzers, hardening flags). That institutional memory is real IP.
3) Performance and control. C++ offers zero‑cost abstractions when you get it right. Rust can match this in many cases, but C++ already sits in the hot path of the world’s lowest‑latency systems. “Proven fast” is a selling point in itself.
4) Tooling that fits certain domains. If your world runs on vendor compilers, custom linkers, binary compatibility constraints, or build systems like Bazel/CMake with bespoke tooling, C++ remains the path of least resistance.

The right framing isn’t “C++ bad, Rust good.” It’s “C++ broad, Rust precise.”
Where Rust adds leverage in a C++ shop
- FFI without fear: Wrap sharp C APIs with safe Rust types and lifetimes; export a thin extern "C" layer so C++ calls it like any other C library.
- Data races by construction: The borrow checker prevents aliasing + mutation combos that cause heisenbugs; Send/Sync traits keep concurrency honest.
- Modern ergonomics and batteries included: Cargo, crates.io, cargo check, cargo bench, cargo fuzz, cargo auditable; consistent tooling lowers friction.
- Security posture: Use no_std where appropriate, minimize unsafe, fuzz by default. Rust narrows the blast radius of “we didn’t think that input would happen”.
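To make the “data races by construction” point concrete, here is a minimal sketch (the function name and counts are mine, purely illustrative): sharing a mutable counter across threads compiles only once the sharing is made explicit with `Arc<Mutex<_>>`; handing a bare `&mut` to multiple threads is rejected at compile time, so the torn-update bug never ships.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    // Shared mutable state must be synchronized explicitly; a plain
    // `&mut usize` shared across threads would not compile.
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // 4 threads x 10_000 increments: no lost updates, by construction.
    assert_eq!(parallel_count(4, 10_000), 40_000);
    println!("ok");
}
```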
A practical example: a line‑protocol parser behind a C ABI

Let’s build a tiny module you could realistically drop into a C++ service: a parser for a Prometheus‑style line protocol. It turns text lines into a compact internal struct, returning a parse error if anything looks off. We’ll expose a C ABI so C++ can call it with zero drama.
Rust (library crate)

```rust
// Cargo.toml:
// [lib]
// crate-type = ["staticlib", "cdylib"]

#[repr(C)]
pub struct Sample {
    // (timestamp, value) simplified for the demo
    pub ts: i64,
    pub val: f64,
}

#[no_mangle]
pub extern "C" fn parse_line(ptr: *const u8, len: usize, out: *mut Sample) -> i32 {
    // Safety: FFI boundary. The caller guarantees `ptr` points to `len`
    // valid bytes and `out` is a valid, writable `Sample`.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    match parse(bytes) {
        Ok(s) => {
            unsafe { *out = s; }
            0 // success
        }
        Err(_) => -1, // parse error
    }
}

fn parse(b: &[u8]) -> Result<Sample, ()> {
    // Extremely simplified: parse "<ts> <val>\n"
    let s = std::str::from_utf8(b).map_err(|_| ())?.trim();
    let mut it = s.split_whitespace();
    let ts: i64 = it.next().ok_or(())?.parse().map_err(|_| ())?;
    let val: f64 = it.next().ok_or(())?.parse().map_err(|_| ())?;
    Ok(Sample { ts, val })
}
```

Build as a static lib:

```shell
cargo build --release
# Produces target/release/lib<crate>.a (or a .so/.dll for the cdylib)
```
C++ (call site)

```cpp
#include <cstddef>
#include <string>
#include <vector>

extern "C" {
struct Sample { long long ts; double val; };
int parse_line(const unsigned char* ptr, std::size_t len, Sample* out);
}

bool parse_many(const std::vector<std::string>& lines, std::vector<Sample>& out) {
    out.resize(lines.size());
    for (std::size_t i = 0; i < lines.size(); ++i) {
        const auto& s = lines[i];
        if (parse_line(reinterpret_cast<const unsigned char*>(s.data()), s.size(), &out[i]) != 0)
            return false;
    }
    return true;
}
```
What we get:
- A safe parser internally (no UB, bounds‑checked, clear error paths).
- A thin C ABI that’s boring to integrate.
- No runtime tax in steady state: zero‑copy slices on the Rust side; C++ sees a plain C function.
This is the day‑to‑day pattern: keep your C++ where it shines; isolate risk in Rust.
Microbenchmark: how close is “close enough”?

Raw parsing of simple ASCII is heavily memory/branch‑predictor bound. In practice, idiomatic Rust and modern C++ land in the same zip code when you follow equivalent algorithms and build flags. The question is less “which language is faster?” and more “which makes the faster code easier to write and keep correct under pressure?” Below is a fully reproducible microbenchmark recipe you can run locally. It’s intentionally simple, IO‑bound, and fair to both sides.
Benchmark task
- Generate 10 million lines like: 1698781234567 12.345 (timestamp + value).
- Parse into (i64, f64) structs, sum the values, and check we saw the expected count.
- Single‑threaded and then N‑threaded using a work queue.
Build & run

```shell
# C++ (GCC/Clang)
g++ -O3 -march=native -std=c++20 -DNDEBUG parser.cpp -o cpp_parser

# Rust
RUSTFLAGS="-C target-cpu=native" cargo build --release

# Dataset (same for both)
python3 gen.py --lines 10000000 > data.txt

# Timings (GNU time; run each binary against the same file)
/usr/bin/time -f "%E real, %M KB" ./cpp_parser data.txt
/usr/bin/time -f "%E real, %M KB" ./target/release/rust_parser data.txt
```
What to expect (guidance, not gospel)
Your numbers will vary with CPU, storage, and compiler versions. The key observation is that algorithm and IO dominate. If you vectorize, batch parse, or pin memory, both languages go faster by roughly the same proportion. The practical win for Rust here isn’t raw speed; it’s the confidence that your new fast path can’t scribble past a buffer or data‑race itself into nonsense when you add threads at 2am.
Where this helps in real codebases
- Untrusted data at the edge. Wrap file/network parsers in Rust, feed typed, validated structs to the C++ core. Crash loops disappear; CVE count trends down.
- Concurrency hot spots. Use Rust to implement worker pools, bounded channels, or task executors; expose a C ABI to C++ producers/consumers.
- Plugin/extension systems. Ship plugins as Rust cdylibs with a minimal C shim. Sandboxing + safety without giving up perf.
- Testing and fuzzing. cargo fuzz + property tests around dangerous algorithms. Keep the harness in Rust while the code under test stays in C++.
- Gradual refactors. Replace just the leaky allocator, brittle cache, or flaky parser. No flag days, no multi‑year rewrites.
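As a sketch of the “concurrency hot spots” idea above, here is a minimal bounded work queue using the standard library’s `mpsc::sync_channel` (the function name and the bound of 16 are mine, for illustration): producers block when the queue is full, which gives you backpressure for free, and the channel closing cleanly terminates the worker.

```rust
use std::sync::mpsc;
use std::thread;

// A tiny bounded work queue: the producer blocks when 16 items are
// in flight, and the worker drains until all senders are dropped.
fn sum_squares(jobs: Vec<u64>) -> u64 {
    let (tx, rx) = mpsc::sync_channel::<u64>(16);
    let worker = thread::spawn(move || {
        let mut acc = 0u64;
        for job in rx {
            acc += job * job;
        }
        acc
    });
    for j in jobs {
        tx.send(j).unwrap();
    }
    drop(tx); // close the channel so the worker's loop ends
    worker.join().unwrap()
}

fn main() {
    assert_eq!(sum_squares(vec![1, 2, 3]), 14); // 1 + 4 + 9
    println!("ok");
}
```

In a real deployment the producer side would sit behind a C ABI so C++ code can enqueue work, but the backpressure and shutdown story is the same.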
Interop patterns that actually ship
- C++ ➜ Rust: Build Rust as staticlib/cdylib, export C functions, consume from C++ like any C library. Use cbindgen to generate headers.
- Rust ➜ C++: When Rust is the host, cxx, bindgen, or ffi_support help talk to existing C++ libraries.
- Data contracts: Keep FFI structs #[repr(C)] (Rust) / POD (C++). Avoid throwing exceptions across the boundary. Propagate errors as status codes + out parameters or explicit result structs.
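One way to make that data contract concrete is an explicit result struct rather than an out parameter. A hedged sketch (the struct, field names, and status codes are mine, not any standard): the status and payload travel together, and `#[repr(C)]` pins the layout so the matching C++ declaration stays trivial.

```rust
// An explicit result struct keeps the FFI contract self-describing:
// status code and payload travel together, with a fixed C layout.
#[repr(C)]
pub struct ParseResult {
    pub status: i32, // 0 = ok, nonzero = error kind
    pub value: f64,  // only meaningful when status == 0
}

#[no_mangle]
pub extern "C" fn parse_f64(ptr: *const u8, len: usize) -> ParseResult {
    // Safety: the caller guarantees `ptr` points to `len` valid bytes.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    match std::str::from_utf8(bytes).ok().and_then(|s| s.trim().parse().ok()) {
        Some(v) => ParseResult { status: 0, value: v },
        None => ParseResult { status: 1, value: 0.0 },
    }
}

fn main() {
    let s = b"3.5";
    let r = parse_f64(s.as_ptr(), s.len());
    assert_eq!(r.status, 0);
    assert_eq!(r.value, 3.5);
    println!("ok");
}
```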
Footguns to avoid (ask me how I know)
- Hidden allocations. Small‑string optimizations and copy‑on‑write semantics differ. Be explicit about ownership at the boundary.
- Unbounded unsafe. Confine unsafe to tiny, reviewed modules. Make the safe API impossible to misuse.
- Mismatch in build modes. O0 vs release will “benchmark” your build flags, not your code. Align -C target-cpu / -march=native.
- Panic/exception crossings. Don’t.
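On that last point, the usual mitigation on the Rust side is to catch panics before they reach the `extern "C"` boundary and turn them into a status code, since unwinding across the C ABI is undefined behavior. A minimal sketch (function name and status values are mine):

```rust
use std::panic;

// Convert any panic inside the Rust implementation into an error code
// instead of letting it unwind across the C ABI (which is UB).
#[no_mangle]
pub extern "C" fn checked_divide(a: i64, b: i64, out: *mut i64) -> i32 {
    let result = panic::catch_unwind(|| a.checked_div(b));
    match result {
        Ok(Some(q)) => {
            // Safety: caller guarantees `out` is valid and writable.
            unsafe { *out = q; }
            0 // success
        }
        Ok(None) => -1, // division by zero/overflow, reported as a status
        Err(_) => -2,   // a panic occurred; it never reaches C++
    }
}

fn main() {
    let mut out = 0i64;
    assert_eq!(checked_divide(10, 2, &mut out), 0);
    assert_eq!(out, 5);
    assert_eq!(checked_divide(1, 0, &mut out), -1);
    println!("ok");
}
```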
The mindset shift: less heroics, more systems thinking

If you’re a C++ shop, adopting Rust isn’t a repudiation of expertise — it’s a systems decision. You keep the parts that are battle‑tested and hard to replace; you insert Rust where bugs are costly and iteration is risky. Over time, you accumulate modules that are both fast and difficult to break. That’s a healthier codebase, not a holier language. When teams stop arguing about replacement and start strategizing about interfaces, a lot of false choices evaporate:
- We get the control and reach of C++ where it matters.
- We add safety and clarity with Rust where it hurts.
- We ship sooner, sleep better, and keep our options open.
Rust won’t replace C++. It’ll help you keep the promises your C++ system already makes — faster, safer, and with fewer 3am pages. And that’s more than okay.
Appendix: Full microbenchmark sketch (optional)

Rust (binary)

```rust
use std::{fs::File, io::{BufRead, BufReader}};

fn main() {
    let path = std::env::args().nth(1).expect("file");
    let f = File::open(path).unwrap();
    let mut rdr = BufReader::with_capacity(1 << 20, f);
    let mut buf = String::new();
    let mut sum = 0.0f64;
    let mut n = 0u64;
    while rdr.read_line(&mut buf).unwrap() > 0 {
        let mut it = buf.split_whitespace();
        let _ts: i64 = it.next().unwrap().parse().unwrap();
        let v: f64 = it.next().unwrap().parse().unwrap();
        sum += v;
        n += 1;
        buf.clear();
    }
    eprintln!("lines={} sum={}", n, sum);
}
```
C++ (binary)

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr);
    std::ifstream in(argv[1]);
    std::string s;
    s.reserve(128);
    double sum = 0.0;
    unsigned long long n = 0;
    while (std::getline(in, s)) {
        std::istringstream iss(s);
        long long ts;
        double v;
        iss >> ts >> v;
        sum += v;
        ++n;
    }
    std::cerr << "lines=" << n << " sum=" << sum << "\n";
}
```
Dataset generator (Python)

gen.py:

```python
import random
import sys
from argparse import ArgumentParser

p = ArgumentParser()
p.add_argument('--lines', type=int, default=10_000_000)
args = p.parse_args()
for i in range(args.lines):
    ts = 1_600_000_000_000 + i
    val = random.random() * 100
    sys.stdout.write(f"{ts} {val}\n")
```
Tweak buffer sizes, switch to scanf/fast_float, lexical-core, or SIMDified parsers to explore how algorithm choices swamp language differences. Then decide where Rust’s safety margin buys you the most peace of mind in your C++ system.