Rust 1.80 vs Go 1.23 on Postgres: Same Box, Different Curve
We ran Rust 1.80 and Go 1.23 on the same Postgres box and expected a draw. The first graphs looked close, then the curves drifted as load rose. Our wins and losses came from tiny defaults, not language slogans. We fixed the dials, reran, and one stack held shape longer.

Same hardware, clean runs, honest baselines

We kept the playground small so differences were visible. One VM, pinned CPU, fixed Postgres config, warm caches, and no cross-talk. We reset the database between phases and replayed the same query mix. The goal was to watch where the line bends, not chase peak hero numbers.

test       | median |  p99
-----------+--------+------
ping       |   0.3  |  0.7
SELECT id  |   1.1  |  2.6
read-heavy |  12.4  | 31.0
mixed r/w  |  18.7  | 48.9

We saw similar medians, then diverging p99 under mixed load. The stack with steadier backpressure held lower tail latency as concurrency grew. We learned that a simple, repeatable scaffold is worth more than broad benchmarks. Keep the run list tiny, isolate noise, and compare shapes, not headlines.

How we opened connections to Postgres

Our first mistakes lived in the pool. Defaults looked safe until concurrency rose and the pool thrashed. We narrowed both stacks to the same max size and kept health checks cheap. Closing idle connections mattered once GC and runtime scheduling joined the party.

// Go 1.23, pgxpool
cfg, _ := pgxpool.ParseConfig(dsn)
cfg.MaxConns = 32
cfg.MinConns = 8
cfg.HealthCheckPeriod = 30 * time.Second
cfg.MaxConnIdleTime = 2 * time.Minute
pool, _ := pgxpool.NewWithConfig(ctx, cfg)

// Rust 1.80, sqlx + Postgres
let pool = PgPoolOptions::new()
    .max_connections(32)
    .min_connections(8)
    .idle_timeout(Duration::from_secs(120))
    .test_before_acquire(true)
    .connect(&dsn).await?;
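Since both stacks share one database, it also helps to confirm at startup that the caps fit under the server's own limit. A minimal Go sketch, assuming the pgxpool config above; the checkPoolHeadroom name is ours, and it leans on the standard log and strconv packages.

// Sketch: warn if the pool cap crowds the server's connection limit.
// SHOW max_connections returns text, so parse it before comparing.
func checkPoolHeadroom(ctx context.Context, pool *pgxpool.Pool, maxConns int32) {
    var raw string
    if err := pool.QueryRow(ctx, "SHOW max_connections").Scan(&raw); err != nil {
        return
    }
    if serverMax, err := strconv.Atoi(raw); err == nil && int(maxConns) >= serverMax {
        log.Printf("pool cap %d leaves no headroom under max_connections=%d", maxConns, serverMax)
    }
}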
Tail spikes fell once both pools matched capacity and idleness. Throughput stopped oscillating under bursts. Set pool sizes to match server limits, trim idle time, and make health checks predictable. Fix the pool before touching query code.

Concurrency shape that changed the curve

We built worker layers that accepted requests, queued work, and asked the pool when ready. The shape of that queue decided who bent first. Goroutines were cheap but hid backpressure; tokio tasks were light but honest about waiting.

+--------+    +---------+    +--------+
| client | -> | channel | -> | worker |
+--------+    +---------+    +--------+
         \         ^            ^  |
          \        |            |  v
           \----> metrics      +------+
                               | pool |
                               +------+
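A minimal Go sketch of that shape, assuming the pgxpool from the previous section; the job, submit, and worker names are ours, not the article's. The queue gets an explicit cap, and a full queue means we count a drop at the edge instead of creating a hidden wait.

// Sketch only: a capped, visible queue in front of the workers.
// Assumes "context", "sync/atomic", and the pool configured earlier.
type job struct{ sql string }

var (
    queue   = make(chan job, 256) // the visible cap
    dropped atomic.Int64          // count drops early, at the edge
)

func submit(j job) bool {
    select {
    case queue <- j:
        return true
    default:
        dropped.Add(1) // shed load rather than grow a hidden wait
        return false
    }
}

func worker(ctx context.Context, pool *pgxpool.Pool) {
    for j := range queue {
        _, _ = pool.Exec(ctx, j.sql) // the worker asks the pool when it is ready
    }
}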
As offered load rose, the stack with clearer queue limits degraded smoothly instead of flooding the pool. Errors stayed low and p99 rose gently. Expose a visible queue and cap it. Count drops early rather than creating hidden waits that explode the tail.

Prepared statements and batching in practice

We left ORM toys out and went direct. Preparing statements once per connection trimmed parse time and helped the planner stick. Batching similar writes cut round trips and reduced lock jitter in hot tables.

// Go: prepare once per connection and batch
conn, _ := pool.Acquire(ctx)
defer conn.Release()
stmt, _ := conn.Conn().Prepare(ctx, "up1",
    "UPDATE accounts SET bal = bal + $1 WHERE id = $2")
batch := &pgx.Batch{}
for _, op := range ops {
    batch.Queue(stmt.SQL, op.Delta, op.ID)
}
br := conn.Conn().SendBatch(ctx, batch)
_ = br.Close()

// Rust: prepare and reuse; batched exec
let mut tx = pool.begin().await?;
let stmt = sqlx::query("UPDATE accounts SET bal = bal + $1 WHERE id = $2")
    .persistent(true);
for op in &ops {
    stmt.clone().bind(op.delta).bind(op.id).execute(&mut *tx).await?;
}
tx.commit().await?;

Under write-heavy phases, parse time and lock waits dropped, and throughput held steadier. Prepare what repeats, and batch when correctness allows. Measure locks and round trips, not just rows per second.

Mapping rows without hidden allocations

Serialization surprised us more than SQL. Extra copies and reflection crept into hot loops. We rewired row mapping to stay zero-ish copy, avoid heap churn, and keep field order explicit.

+---------+  bytes   +-----------+
| socket  | -------> |  row buf  |
+---------+          +-----------+
                           |
                           v
                      +---------+
                      | struct  |
                      +---------+
// Go: map with pgx.Row, avoid interface{}
type User struct{ ID int64; Name string; Active bool }

func readUser(r pgx.Row) (User, error) {
    var u User
    err := r.Scan(&u.ID, &u.Name, &u.Active)
    return u, err
}

// Rust: derive FromRow, avoid String clones
#[derive(sqlx::FromRow)]
struct User<'a> {
    id: i64,
    #[sqlx(borrow)]
    name: &'a str,
    active: bool,
}

CPU time per row fell and GC pauses shrank on both stacks. p99 improved on list endpoints without touching the database. Make mapping explicit. Borrow where safe, skip reflection, and keep hot paths free of unnecessary allocations.

Timeouts, retries, and backpressure that helped

Transient failures were rare until they were not. We made timeouts short, retried once where idempotent, and allowed the queue to say no. The winning curve did fewer useless retries and surfaced pressure earlier.

// Go: context timeouts and one retry
ctx, cancel := context.WithTimeout(context.Background(), 80*time.Millisecond)
defer cancel()
err := withRetry(1, func() error {
    _, e := pool.Exec(ctx, "SELECT 1")
    return e
})
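The withRetry helper isn't shown in this excerpt; here is a minimal sketch of what it could look like, with the name and shape assumed from the call above rather than taken from the article.

// Assumed helper: run fn once, then retry up to n more times while it fails.
func withRetry(n int, fn func() error) error {
    err := fn()
    for i := 0; i < n && err != nil; i++ {
        err = fn()
    }
    return err
}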
// Rust: bounded timeout and retry
let fut = sqlx::query!("SELECT 1").execute(&pool);
let res = timeout(Duration::from_millis(80), fut).await;
if res.is_err() { /* optionally one retry here */ }

We saw fewer cascades during spikes and a faster return to steady state. The tail stayed narrower when we admitted pressure at the edge. Keep timeouts tight, retries bounded, and let the front door shed load. Stability beats chasing every transient win.

Final Thoughts

We expected language to decide the winner; configuration and shape did the heavy lifting. Pools, queues, mapping, and small timeout choices changed the story more than compilers. Rust held form longer once we tuned borrowing and pool honesty; Go stayed smooth when we exposed backpressure and cut reflection. If you run a similar stack, match capacity to the database first, then shave copies, then shape the queue.
Read the full article here: https://medium.com/@maahisoft20/rust-1-80-vs-go-1-23-on-postgres-same-box-different-curve-10b40f4e2d53