Rust Just Gave Some Crates A Free 38% Compile-Time Speedup
You know that feeling when you hit cargo build with a small change and your brain whispers:
“Why does this still feel slow?” You glance at chat. You check your phone. The progress lines move, but not fast enough to match your impatience.
Now imagine this: one day you update your toolchain, do the same build, and it just feels lighter. You did not rewrite a single function. You did not swap your logger. You did not touch your Cargo.toml. Yet your build finishes earlier.
That is what this new Rust formatting change delivers for some projects. For one extreme benchmark, it pushed compile times down by about 38% and shrank the binary as well. For many real projects, it gives a smaller but still real speedup. This article is about what actually changed, how it affects your daily builds, and how you can measure the effect on your own codebase.
That Build That Makes You Question Your Life Choices

Let me start with something you have probably lived through. You are working on a Rust service with a lot of crates. Nothing wild, just a normal workspace that grew with features and deadlines. Logging everywhere. Helpers everywhere. Small command line tools sprinkled around. You fix one bug, add one log line, and then you build.
The fan spins up. The screen fills with Compiling ... lines. Someone pings you. You tell yourself, “I will just wait for this build to finish before replying.” Five minutes later you realize you already answered three different chats and glanced at an email thread.
The build eventually completes, but your focus is already gone. That slow bleed of seconds and attention is why even a two percent speedup matters. You do not notice the number. You notice that you can stay in the flow just a bit longer.
Now, Rust’s compiler team has given you a way to reclaim some of that time without forcing you to change how you write your code.
The Surprise Behind A Faster Cargo Build
Open almost any Rust codebase and search for:
- println!
- eprintln!
- format!
- panic!
- write!
- log::info! and friends
They are everywhere. In tests. In logs. In debug prints you swore you would remove. In quick tools you built for one sprint and then kept forever. All of these share the same hidden core: they go through format_args! and the formatting machinery in the standard library. Very roughly, the flow looks like this:
Your Code: println!("Hello {name}, you have {count} messages");
↓ Macro Expansion
format_args!("Hello {name}, you have {count} messages", name, count)
↓ Compiler Work
Generate code that knows:
- the format string
- where each argument goes
- how to render them
↓ Final Binary
Executable that calls into the formatting engine at runtime
and writes to stdout, stderr, or a log sink.
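To make the flow above concrete: every one of these macros bottoms out in std::fmt::Arguments, the value that format_args! produces. Here is a minimal sketch — the render helper is hypothetical, written only to show the handoff, not something from the standard library:

```rust
use std::fmt::Write;

// Hypothetical helper, for illustration only: render a pre-built
// formatting plan (std::fmt::Arguments) into a String, much as
// println! ultimately hands one to the stdout writer.
fn render(args: std::fmt::Arguments<'_>) -> String {
    let mut out = String::new();
    // Writing into a String cannot fail, so unwrap is safe here.
    out.write_fmt(args).unwrap();
    out
}

fn main() {
    let name = "Alice";
    let count = 3;
    // The same inline-capture syntax the formatting macros accept.
    let line = render(format_args!("Hello {name}, you have {count} messages"));
    assert_eq!(line, "Hello Alice, you have 3 messages");
    println!("{line}");
}
```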
The recent change did not alter the macros you call. It did not change the format string syntax. It did not ask you to learn a fresh API. Instead, it changed how that middle section is built and represented, in a way that lets the compiler:
- Do less work during compilation.
- Use less memory while doing that work.
- Produce leaner code in the final binary.
You keep writing println! as before. The compiler just carries less baggage while turning it into machine code.
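To build an intuition for “less baggage”, here is a rough source-level analogy — my own sketch, not the compiler’s actual mechanism. The idea is that many similar call sites end up sharing one piece of formatting glue instead of each carrying its own copy:

```rust
// A source-level analogy only: the real change happens inside the
// compiler and standard library, not in user code. Routing similar
// messages through one outlined helper means the formatting glue
// exists once, rather than being duplicated at every call site.
#[inline(never)] // keep a single outlined copy instead of inlining everywhere
fn record_line(id: u64, user: &str) -> String {
    format!("Processing record {id} for user {user}")
}

fn main() {
    // Imagine dozens of call sites like these scattered across a workspace.
    let a = record_line(1, "alice");
    let b = record_line(2, "bob");
    assert_eq!(a, "Processing record 1 for user alice");
    println!("{a}\n{b}");
}
```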
What Actually Changed Inside Rust’s Formatting Engine

To understand the improvement, you only need one mental model. Before this change, every formatting call (println!, format!, panic!, and so on) was expanded into a fairly rich structure describing:
- The format string.
- The arguments.
- The mapping between them.
- The machinery for building the final output.
That structure was powerful, but it also meant more tokens for the compiler to process and more code to generate. The new implementation reorganizes how this structure is built and used. It keeps the same public behavior but focuses on:
- Representing formatting plans more compactly.
- Reducing repeated work when similar patterns appear many times.
- Emitting code that does the same thing with less noise.
Here is a simple sketch of the before and after flow:
Before Change
-------------
Source Code
↓
println! / format! / panic!
↓
format_args! builds a rich formatting plan
↓
Compiler generates larger, more repetitive glue code
↓
Binary has more formatting-related code and
takes more work to build
After Change
------------
Source Code
↓
Same macros, same format_args!
↓
Compiler builds a more compact plan
↓
Compiler generates leaner glue code
↓
Binary has less formatting-related code and
takes less work to build
From your perspective as a developer:
- Same inputs.
- Same observable outputs.
- Cleaner path through the compiler in between.
That is exactly the kind of invisible upgrade you want in a language ecosystem you rely on for years.
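If you want to convince yourself that observable behavior really is identical, a few assertions over formatting edge cases (width, alignment, alternate forms) are enough. These outputs are fixed by the format string syntax and do not change with the internal rework:

```rust
fn main() {
    let user = "alice";
    let n = 42;
    // Plain interpolation with inline captures.
    assert_eq!(format!("{user} has {n} items"), "alice has 42 items");
    // Width 5 with right alignment pads on the left.
    assert_eq!(format!("{n:>5}"), "   42");
    // Alternate hex with zero padding: the 0x prefix counts toward the width.
    assert_eq!(format!("{n:#06x}"), "0x002a");
    println!("formatting behavior unchanged");
}
```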
How Much Faster Are Real Projects
Numbers always matter here. “Faster” without context is just decoration. Across the benchmarks that were run for this change, the picture looked roughly like this:
- A minimal “hello world” program compiled about 3% faster.
- Larger real projects, like well known command line tools, saw compile times go down by around 1.5–2%, with binaries roughly 2% smaller.
- Many other programs showed small but measurable gains, often under 3%, and some showed almost no change.
- A synthetic benchmark consisting of a huge workspace filled with crates that mostly use println! saw about 38% faster compilation and around 22% smaller binaries.
That last benchmark is deliberately extreme. It is designed to stress the part of the system that changed. Your everyday service probably will not see that level of improvement.
Still, the shape is clear:
- If your project uses a lot of formatting macros.
- If you have many crates with similar patterns.
- If your test and logging code leans hard on println! and log macros.
Then this change is more likely to help you. And even in projects where the gain is closer to two percent, that is not something you should dismiss.
A Simple Experiment You Can Run On Your Codebase

The best way to understand this change is to see how it behaves on your own project.
Here is one straightforward experiment you can run on any Rust workspace. First, make sure you know which toolchain you are using. Then:
1. Clean your build to remove old artifacts: cargo clean
2. Build your project in release mode and measure the time with your usual timing tool: cargo build --release
3. Record the duration. Run it a couple of times to avoid one-off noise.
4. Update your toolchain to a version that includes the formatting improvement.
5. Repeat the same clean and build sequence and record the new numbers.
You are not looking for perfect benchmarking here. You just want a feel for whether your builds sit in the “no visible change” group or in the “this feels lighter” group.

If you want a reproducible test inside your project, you can drop a small helper binary alongside your main code:
```rust
// src/bin/format_hero.rs
fn main() {
    for i in 0..50_000 {
        println!("Processing record {i} for user {i}");
    }

    let user = "[email protected]";
    let action = "update-profile";
    let log_line = format!(
        "event=profile-update user={user} action={action} success=true"
    );
    println!("{log_line}");
}
```
Build this helper with release settings before and after the change. It leans heavily on formatting, so it is a decent way to stress this particular path without touching your core application logic.
The goal is not to brag about a specific number. The goal is to build an intuition for where your code sits on that benchmark spectrum.
Why A Few Percent Matters More Than You Think

At first glance, a two percent shift does not sound like something worth talking about. It is not the sort of change that makes it into flashy product announcements. But think about how many times you build per week:
- While iterating on a feature.
- While running integration tests.
- While cutting a release candidate.
- While your continuous integration system builds every branch and pull request.
Every build is a small interruption. Every interruption is a chance to lose focus. A couple of percent off a single build is not dramatic. A couple of percent off hundreds of builds in a month is real.
On top of that, smaller binaries are easier to ship, easier to cache, and slightly kinder to your storage and network.
None of this transforms your workflow overnight. Yet all of it bends the curve in a direction that favors your concentration and your team’s productivity. The most important part is that you did not have to trade correctness or readability to get it. The language got out of your way a little more.
What This Tells Us About Rust’s Culture

This is not just a nice benchmark story. It reveals a lot about how the Rust world tends to operate.

Backward Compatibility As A First-Class Constraint

The visible behavior of println!, format!, and the rest stayed the same. Formatting rules did not change. There is no new macro for you to adopt, no breaking shift in how errors are reported. You benefit without chasing a new pattern.

Investing In The Boring Hot Paths

Improving the internals of formatting macros is not glamorous work. It will never become a keynote slogan. But it hits code paths that almost every Rust project touches. That is the kind of work that makes a language pleasant to live in for a decade.

Letting Developers Write The Obvious Code

The best optimizations are the ones that make obvious code the right choice, not the wrong one. You should not feel guilty every time you add a clean, readable format! call just because you are afraid of compiler overhead. These values matter more than any single speedup. They tell you what to expect from the language and the community in the future.
Where Different Types Of Developers Benefit

Different roles feel this change in different ways.

Backend Engineers

You log everything. Requests, errors, metrics, strange edge cases you hope never happen again. Your test suite and your debug sessions are full of formatted strings. Shaving work off that path means your feedback loop gets smoother every day.

Full-Stack And Frontend Developers Using Rust

If you are building with WebAssembly or shared Rust libraries, you already fight build times while hopping between front-end and back-end concerns. Even small improvements here help you keep both halves of your brain aligned when switching contexts.

Tooling And Infrastructure Engineers

If you maintain internal CLIs, data migration tools, or performance probes, your code often runs everywhere. These tools tend to print a lot of information. Faster builds and slightly smaller binaries make them easier to maintain and distribute.

Technical Leads And Founders

You care about developer time and infrastructure cost. Faster builds mean less time wasted staring at progress bars and less pressure to throw more hardware at build servers. It is a slow, steady return on trust in the ecosystem.
You might not be able to attach a direct revenue number to this change. You will still feel its impact in how often developers complain about waiting.
Share Your Numbers, Share Your Pain

No compiler change matters in the abstract. It matters when it hits real projects with real deadlines.
If you run the experiment on your workspace and notice an improvement, talk about it. Share how big your codebase is, what kind of work it does, and what shift you saw in build times.
If you see almost no difference, that is useful information as well. It helps everyone understand which workloads benefit most, and it lets the people working on the compiler aim even more precisely.
Most of us will never rewrite a core part of Rust’s formatting engine. But all of us live with the effects.
So the next time your build feels just a little less heavy, remember that someone spent days and weeks shaving cost off a path you rarely think about. And if you do measure a big change, I would genuinely love to hear your numbers and your stories. That kind of discussion is where better tools, better defaults, and better languages are born.
Read the full article here: https://medium.com/@the_atomic_architect/rust-compile-time-speedup-format-args-98f8645b3c76