Rust sqlx vs Diesel: The One Check That Killed Our DB Bugs
You trust your ORM until a tiny schema drift wipes out a morning. We thought our Rust DB layer was safe until a silent column rename passed every test and triggered a live incident. We did not catch the mismatch with types, reviews, or integration tests. Only when we wired one simple check into the build did the pain stop for good. Now the compiler fails loudly the second a query drifts from the schema, not hours after deploy. How that check works is worth seeing.

When a Database Change Goes Undetected

We had migrations and reviews, but when tables changed, queries still compiled. sqlx would let raw SQL sneak by. Diesel’s generated schema could be stale. This meant we could merge code that was already broken, just waiting for the wrong row to throw an error.

+-----------------------------+
|         git commit          |
|              |              |
|        ┌─────▼─────┐        |
|        │   build   │        |
|        └─────┬─────┘        |
|              │ (query OK?)  |
|     Schema drift missed     |
|              |              |
|     Deploy broken code      |
+-----------------------------+

Every missed break turned into late nights patching live services. But with the right proof wired into the pipeline, these bugs stopped before they hit main.

sqlx Query Macros: Compile-Time SQL Validation

The game-changer was moving to query! and query_as! macros. Instead of running SQL unchecked, these macros force Rust to verify every column, type, and nullability at compile time, using a local schema snapshot.

use sqlx::{PgPool, FromRow};

#[derive(FromRow)]
pub struct UserRow {
    pub id: i32,
    pub name: String,
    pub email: String,
}

pub async fn get_user(pool: &PgPool, id: i32) -> sqlx::Result<UserRow> {
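    // The macro below is checked at compile time against the schema snapshot
    // (or DATABASE_URL): column names, types, and nullability must all match,
    // and the result columns are mapped to UserRow fields by name.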
    sqlx::query_as!(
        UserRow,
        "SELECT id, name, email FROM users WHERE id = $1",
        id
    )
    .fetch_one(pool)
    .await
}
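
The nullability part of that check is easy to underestimate. Below is a minimal sketch, assuming a hypothetical nullable nickname column on users: query! exposes the column as an Option, so code that expects a plain String stops compiling instead of failing at runtime on the first NULL.

use sqlx::PgPool;

// Hypothetical example: `nickname` is assumed to be a nullable column on `users`.
// query! builds an anonymous record, and nullable columns surface as Option<_>.
pub async fn get_nickname(pool: &PgPool, id: i32) -> sqlx::Result<Option<String>> {
    let rec = sqlx::query!("SELECT nickname FROM users WHERE id = $1", id)
        .fetch_one(pool)
        .await?;
    Ok(rec.nickname)
}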

After this shift, new columns and type mismatches were caught instantly, on the developer’s machine rather than on a server. The difference: failed builds replaced broken deploys, and we saw zero “column not found” errors in production. Anyone applying sqlx in practice should always run sqlx prepare after each migration and keep sqlx-data.json current. Teach teams to reject PRs that use dynamic SQL unless absolutely required.

Diesel: Living by Schema.rs, Not By Guesswork

Diesel’s type system is powerful, but only if the schema is fresh. The fix was to auto-generate schema.rs in every pipeline run and refuse the build if the file drifted.

use diesel::prelude::*;
use crate::schema::orders;

#[derive(Queryable)]
pub struct Order {
    pub id: i64,
    pub total_cents: i32,
    pub status: String,
}

#[derive(Insertable)]
#[table_name = "orders"]
pub struct NewOrder<'a> {
    pub total_cents: i32,
    pub status: &'a str,
}

pub fn create_order(conn: &PgConnection, n: NewOrder) -> QueryResult<Order> {
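    // orders::table comes from the generated schema.rs, so a stale or renamed
    // column shows up here as a compile error rather than a runtime failure.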
    diesel::insert_into(orders::table)
        .values(&n)
        .get_result(conn)
}

When a migration changed status to order_status, compilation broke, not production. Diesel’s safety only worked when enforced as a daily discipline: diesel print-schema in CI, not just after big changes. Make schema generation part of your build. Fail the pipeline if there’s any difference, so broken shapes never reach prod.

The Schema Check Script That Closed the Gap

To catch every drift, we built a single CI job: apply all migrations, re-run schema generation for both sqlx and Diesel, and compare with the main branch.

+-------------------------------+
|     CI Job: Schema Check      |
|               |               |
|       Apply migrations        |
|               |               |
|      Run sqlx prepare +       |
|      diesel print-schema      |
|               |               |
|  Compare to committed files   |
|               |               |
|   If mismatch: fail build     |
+-------------------------------+

This check broke the build at the exact moment a PR drifted from the live database. Our DB error rate dropped, and so did time spent in rollback calls. Teams using multiple ORMs should wrap their schema check scripts and run them before tests, not after.
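
The sketch below shows one way to express that job as a small Rust helper (an xtask-style binary invoked from CI). The subcommands come from sqlx-cli and diesel_cli; the exact paths, and the choice of Rust over a shell script, are assumptions for illustration.

// Minimal sketch of a CI schema check; run it before the test suite.
use std::fs;
use std::process::Command;

fn run(program: &str, args: &[&str]) {
    let status = Command::new(program)
        .args(args)
        .status()
        .unwrap_or_else(|err| panic!("failed to start {program}: {err}"));
    assert!(status.success(), "{program} {args:?} exited with {status}");
}

fn main() {
    // 1. Apply every migration to the throwaway CI database.
    run("sqlx", &["migrate", "run"]);

    // 2. Check the committed sqlx offline data against the migrated schema.
    run("cargo", &["sqlx", "prepare", "--check"]);

    // 3. Regenerate schema.rs from the same database.
    let schema = Command::new("diesel")
        .arg("print-schema")
        .output()
        .expect("failed to run diesel print-schema");
    assert!(schema.status.success(), "diesel print-schema failed");
    fs::write("src/schema.rs", &schema.stdout).expect("could not write src/schema.rs");

    // 4. Any drift between the regenerated file and the committed one fails the job.
    run("git", &["diff", "--exit-code", "--", "src/schema.rs"]);
}

Keeping the whole check in one binary means the same command can gate a local push and the CI pipeline, which lines up with the advice later in this article to enforce it in both places.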

Handling Dynamic SQL Without Losing Safety

Not every query is static. For analytics or report filters, we used builder APIs to stitch queries, but kept value binds strongly typed and structures predictable.

use sqlx::{PgPool, Postgres, QueryBuilder};

pub async fn user_count(pool: &PgPool, active: bool) -> sqlx::Result<i64> {
    let mut qb: QueryBuilder<Postgres> = QueryBuilder::new("SELECT count(*) FROM users");
    if active {
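        // push_bind sends the value as a typed bind parameter; it is never
        // spliced into the SQL string itself.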
        qb.push(" WHERE active = ").push_bind(true);
    }
    let row = qb.build_query_as::<(i64,)>().fetch_one(pool).await?;
    Ok(row.0)
}

Throughput was solid, memory flat: no panics from missing columns, no surprises under load. The trick: always bind values, never mix dynamic SQL strings unless strictly necessary, and prefer builder APIs where you can.

Code Review to Build Gate: How Our Habits Changed

The biggest shift was cultural. Before, we relied on code review to catch bad queries and missing columns. But reviewers are human. We wired the schema check into CI and let the build, not the brain, catch every drift.

+-----------------------------+
|        Code Review          |
|              |              |
|  Build runs schema check    |
|              |              |
|   Query/type mismatch?      |
|     Yes: fail the build     |
|     No: merge proceeds      |
+-----------------------------+

Incidents dropped, and team confidence grew. Reviews focused on logic, not field names, and the team shipped with less fear. Enforce the build check locally before pushing and as a blocking CI job for every main-branch merge.

Final Wrap-Up

The one build-time schema check changed how we trusted our database code. Now, every query is locked to the schema, every deploy is safer, and our team spends less time firefighting. No amount of testing caught as many bugs as this tiny automation step.
Read the full article here: https://medium.com/@maahisoft20/rust-sqlx-vs-diesel-the-one-check-that-killed-our-db-bugs-5b017de03873