Rust in 2025: The Ecosystem Finally Feels Complete
Created page with "The Cargo.toml file is open. You're adding dependencies for a new service web framework, database layer, async runtime and for the first time in years, you don't stop to research which crate is winning the ecosystem wars. Axum for HTTP. SeaORM for the database. Tokio underneath it all. You type them out, run cargo build, and twenty seconds later you have a working server with connection pooling and migrations. There’s this moment that happens when a language ecosyst..." |
No edit summary |
||
| Line 1: | Line 1: | ||
The Cargo.toml file is open. You're adding dependencies for a new service: web framework, database layer, async runtime. And for the first time in years, you don't stop to research which crate is winning the ecosystem wars. Axum for HTTP. SeaORM for the database. Tokio underneath it all. You type them out, run cargo build, and twenty seconds later you have a working server with connection pooling and migrations.
There’s this moment that happens when a language ecosystem matures where you stop fighting it and start just building with it.
You’re not alone if Rust felt promising but unfinished for years. A 2025 Rust survey found that 67% of developers who tried Rust before 2023 cited “immature ecosystem” as a reason they returned to Go or TypeScript for production services. The language was solid. The borrow checker made sense eventually. But the crates kept churning, the patterns kept shifting, and nothing felt settled.
Here’s what changed: the ecosystem stopped competing with itself and started converging on answers that actually work.
Rust crossed some invisible threshold in the last two years where the core building blocks, the ones you need for basically any backend service, stabilized. Not just “this crate exists,” but “this crate is maintained, documented, and the default choice.” The fragmentation that made every project feel like a research expedition just kind of dissolved.
The big shift is around async. Tokio won not because the alternatives disappeared, but because the ecosystem just decided to standardize on it. Axum, the web framework built by the Tokio team, became the obvious choice for HTTP services. SeaORM emerged as the pragmatic ORM that doesn’t fight the type system. Tower middleware, which felt like an academic exercise for years, is now just how you compose HTTP layers.
What this actually means: you can build a production Rust service in 2025 without evaluating fifteen competing crates for every layer. The decisions are made. The patterns are documented. The integration points actually work. And honestly? The ergonomics caught up to the performance story.
The promise isn’t that Rust is suddenly easy (it’s not; the borrow checker still has opinions). It’s that the ecosystem stopped making you reinvent the wheel every time you need to handle an HTTP request or talk to a database.
The Tokio Thing Stopped Being Controversial
Okay so for years there was this tension in the Rust async world. Tokio was the biggest runtime but it wasn’t the only one. async-std existed. smol existed. People kept writing blog posts about why you should use this other thing, and every time you started a project you had to make this decision that felt consequential but had no clear answer.
And then somewhere around 2023–2024, the ecosystem just… picked Tokio. Not through some official decree. Just through gravitational pull. The major frameworks built on it. The database drivers assumed it. The tooling integrated with it. And now in 2025, if you’re starting a new async Rust project, you just use Tokio and nobody questions it.
Wait — that sounds like I’m saying competition is bad, which isn’t what I mean. The other runtimes are still there and still have legitimate use cases. But for the 95% case — you’re building a web service or a CLI tool that does I/O — Tokio is the default now, and that defaultness is liberating. You don’t spend cognitive energy on the decision. You just write tokio::main and move on.
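Here’s roughly what that looks like: a minimal sketch, assuming tokio = { version = "1", features = ["full"] } in your Cargo.toml (the sleep is just a stand-in for whatever async I/O your program actually does).

use std::time::Duration;

#[tokio::main]
async fn main() {
    // The attribute macro spins up the multi-threaded runtime; your code just awaits.
    tokio::time::sleep(Duration::from_millis(100)).await;
    println!("ran on the Tokio runtime");
}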
What changed with Tokio specifically is that it got better at the edges. Error messages improved. The tracing integration became first-class. The documentation stopped assuming you already understood futures deeply. And critically, the performance characteristics became predictable. You can reason about what a Tokio service will do under load without needing to be a runtime expert.
The earned lesson here is that ecosystems need winners. Not monopolies, but clear defaults. Because the cognitive overhead of evaluating every foundational choice is what kills momentum.
Axum Made HTTP Feel Native
So if you tried Rust web frameworks before 2024, you probably used Actix or Rocket or maybe Warp. And they all worked, sort of, but they all had these moments where you were fighting the framework instead of building features. Actix had its actor model. Rocket was sync by default. Warp’s filter system was brilliant until it wasn’t, and then it was just confusing.
Axum showed up and made a different set of tradeoffs. It’s built directly on Tower, which means middleware is just functions that transform requests and responses. It embraces extractors — types that pull data out of requests using Rust’s type system. And it assumes async everywhere, with Tokio underneath.
The practical impact is you write handlers that look like normal Rust functions. You want JSON? Add a Json<T> parameter. Need database access? Add a State<Pool> parameter. Want to return different status codes? Return a Result<Json<T>, MyError>. The framework uses type inference to figure out what you're doing. No macros. No magic strings. Just types.
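A minimal sketch of what that looks like, assuming axum 0.7 and serde with the derive feature; AppState, CreateUser, and the /users route are made up for illustration, not taken from any real project:

use axum::{extract::State, http::StatusCode, routing::post, Json, Router};
use serde::{Deserialize, Serialize};

// Hypothetical shared state and payload types, just to show the shape of a handler.
#[derive(Clone)]
struct AppState {
    greeting: String,
}

#[derive(Deserialize)]
struct CreateUser {
    name: String,
}

#[derive(Serialize)]
struct User {
    name: String,
    greeting: String,
}

// An ordinary async function; the extractors are just typed parameters.
async fn create_user(
    State(state): State<AppState>,
    Json(payload): Json<CreateUser>,
) -> Result<Json<User>, StatusCode> {
    if payload.name.is_empty() {
        return Err(StatusCode::BAD_REQUEST);
    }
    Ok(Json(User {
        name: payload.name,
        greeting: state.greeting.clone(),
    }))
}

fn app() -> Router {
    Router::new()
        .route("/users", post(create_user))
        .with_state(AppState {
            greeting: "hello".into(),
        })
}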
And honestly, this is where Rust’s type system stops being a tax and becomes an asset. The extractors compose cleanly. The error handling integrates with the standard Result type. The compiler tells you when you've forgotten to handle an error case. You write less code and catch more bugs at compile time.
One thing that surprised me — Axum’s performance is genuinely competitive with Go and even beats it in some benchmarks. Not because it’s doing anything exotic, just because async Rust on Tokio has very little overhead when you’re mostly waiting on I/O. And it does this while giving you memory safety and type safety that Go can’t match.
Try this today: cargo new my-service, add axum = "0.7" and tokio = { version = "1", features = ["full"] } to your Cargo.toml, and copy the hello-world example from the Axum docs. You'll have a working HTTP server in five minutes. Then add a route. Add JSON handling. Add middleware. It just composes.
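If you want the whole thing in one file, this is roughly the shape of that hello-world (same axum 0.7 / tokio setup as above; the port is arbitrary):

use axum::{routing::get, Router};

#[tokio::main]
async fn main() {
    // One route returning plain text.
    let app = Router::new().route("/", get(|| async { "Hello, World!" }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}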
SeaORM Solved the Database Problem (Mostly)
Database access in Rust used to be this choose-your-pain situation. Use Diesel and fight the compile times and macro errors. Use SQLx and write raw SQL everywhere but with compile-time checking. Use something lightweight and lose type safety. Every option had a serious downside.
SeaORM feels like the first Rust ORM that doesn’t require accepting a major compromise. It’s built on SQLx for the query layer, so you get the compile-time SQL validation when you want it. But it also generates entities from your database schema, so you’re working with typed structs instead of raw queries most of the time. And the async story is clean — everything returns futures, everything integrates with Tokio.
The migration story works. The relationship handling (has-many, belongs-to, etc.) is intuitive if you’ve used any ORM before. And critically, when you need to drop down to raw SQL for complex queries, it doesn’t fight you. You can mix ORM queries and raw SQLx in the same codebase without weird impedance mismatches.
Edge case worth mentioning: if you’re doing really complex relational queries with lots of joins, you might still want to write raw SQL. ORMs — any ORM, in any language — struggle with the complexity ceiling eventually. But for the 80% case where you’re doing CRUD with some filtering and pagination, SeaORM just works and the generated code is readable.
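For a feel of the API, here’s a minimal sketch assuming a hypothetical users table and a recent sea-orm 1.x release; in practice the entity definition is the kind of thing the CLI generates for you:

use sea_orm::entity::prelude::*;

// Entity for a hypothetical `users` table (normally generated from your schema).
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub email: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

// The 80% case: typed reads with a bit of filtering.
pub async fn find_user(db: &DatabaseConnection, id: i32) -> Result<Option<Model>, DbErr> {
    Entity::find_by_id(id).one(db).await
}

pub async fn find_by_domain(db: &DatabaseConnection, domain: &str) -> Result<Vec<Model>, DbErr> {
    Entity::find()
        .filter(Column::Email.contains(domain))
        .all(db)
        .await
}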
The tooling around it matters too. There’s a CLI that generates entities from your database. Migrations are first-class. The documentation has examples for all the major databases. It’s not perfect but it’s the first time I’ve used a Rust ORM and thought “yeah, I’d use this in production” without qualification.
Back to That Cargo.toml
Remember just typing out those dependencies and having it work? Here’s what the current Rust stack looks like for a standard web service:
Tokio for async. Axum for HTTP. SeaORM for database access. Tower for middleware composition. Serde for JSON (that one’s been stable forever). Maybe add tracing for structured logging. That's it. Six crates and you have a production-ready foundation.
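In Cargo.toml terms that’s roughly the following; the version numbers and the Postgres feature flags are illustrative, so check crates.io for current releases and pick the features for your own database:

[dependencies]
tokio = { version = "1", features = ["full"] }
axum = "0.7"
sea-orm = { version = "1", features = ["sqlx-postgres", "runtime-tokio-rustls", "macros"] }
tower = "0.5"
serde = { version = "1", features = ["derive"] }
tracing = "0.1"
tracing-subscriber = "0.3"

(tracing-subscriber is the extra crate you add if you actually want the tracing output formatted and printed somewhere.)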
You write your handlers as functions. You define your database models as structs. You add middleware by wrapping routes with Tower layers. The error handling flows through Result types. The compiler catches most bugs before they run. And the runtime characteristics are predictable — low memory, fast startup, handles load well.
The pattern that works: start with the Axum examples, add SeaORM following their migration guide, structure your code with a simple layer architecture (handlers, service logic, data access), and use tracing instead of println debugging. The conventions are there now. Follow them and you'll ship faster than you expect.
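A minimal sketch of that last piece, wiring tracing in and layering middleware onto the router, assuming you’ve also added tower-http (with its trace feature) and tracing-subscriber, neither of which is in the core six:

use axum::{routing::get, Router};
use tower_http::trace::TraceLayer;

#[tokio::main]
async fn main() {
    // Structured logging instead of println: one line of setup.
    tracing_subscriber::fmt().init();

    let app = Router::new()
        .route("/health", get(|| async { "ok" }))
        // Tower layers wrap every route registered above them.
        .layer(TraceLayer::new_for_http());

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    tracing::info!("listening on 0.0.0.0:3000");
    axum::serve(listener, app).await.unwrap();
}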
What This Actually Means
Rust in 2025 isn’t the same proposition it was in 2022. The language itself barely changed: some syntax improvements, stabilized features, the usual refinements. But the ecosystem caught up. The libraries matured. The patterns solidified. And now you can build production services without constantly evaluating whether you’ve chosen the right crates or worrying that everything will churn in six months.
The performance story was always there. The safety story was always there. What’s new is the productivity story. You can move fast now. The ecosystem supports you instead of requiring you to be an expert in every layer. And the integration points between crates actually work because they’ve all converged on the same foundations.
The tooling helps too. Rust-analyzer got dramatically better. Cargo’s compile times improved. The error messages continued to evolve. But honestly the bigger shift is just that the community stopped arguing about fundamentals and started building on shared abstractions.
What’s one service in your stack that would benefit from Rust’s performance and safety characteristics, now that the ecosystem makes it realistic to actually build? Maybe something that handles high concurrency, or needs strong correctness guarantees, or would benefit from low memory overhead?