Rust vs Go: Garbage Collector vs Ownership — The Memory Showdown

From JOHNWICK

P95 creeps. CPU warms. Dashboards start drawing little waves where you wanted a flat line. The question hits: is this coming from Go’s collector doing its housekeeping, or from the way your Rust lifetimes are set up? Under pressure, both languages are safe in different ways. One cleans in the background. The other refuses to take off until your seatbelt clicks.

This is the head-to-head I wish someone had handed me: what actually happens to memory when real traffic arrives, how that shapes latency, and the specific habits in each language that move p95 and p99 in the right direction. We’ll look at mental models, short code paths, and a few hand-drawn diagrams you can keep in your head while you’re shipping.

No fluff. No links. Just what helps you keep the graph boring.

The map in your head

Process Memory
+-------------------------------------------------------------+
| Stack (per thread)                                           |
|   +-----+ +-----+ +-----+                                    |
|   | f() | | g() | | h() |   frames appear/disappear          |
|   +-----+ +-----+ +-----+   as scopes enter/exit             |
+-------------------------------------------------------------+
| Heap (shared across threads)                                 |
|   [object][object][object] ... grows/shrinks with allocation |
+-------------------------------------------------------------+

Go: the heap is managed by a garbage collector that runs alongside your code. When objects become unreachable, the runtime notices and frees them. Your job is to avoid making the collector’s job harder than it needs to be.

Rust: the heap is managed by ownership and lifetimes. Values are freed precisely when they go out of scope. Your job is to make ownership obvious and to borrow instead of copying whenever possible.
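
A minimal sketch of that discipline, with illustrative names:

fn greet(name: &str) -> String {           // borrow the caller's data
    let mut s = String::with_capacity(7 + name.len());
    s.push_str("hello, ");
    s.push_str(name);
    s                                      // ownership moves to the caller
}

fn main() {
    let name = String::from("ada");        // one owned allocation
    let msg = greet(&name);                // lend it; no copy of name
    println!("{msg}");
}                                          // msg and name freed here, deterministically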

Both produce safety. They just ask for different discipline.

Two heartbeats under load

In stable conditions you might not notice a difference. When the allocation rate spikes, Go's collector becomes visible as a low hum of work. Rust has no collector, so the line remains as flat as your own allocation behavior.

A hot path you actually ship

Imagine a service that validates compact JSON requests and returns a small response. The job is simple; the volume is not. What you do with buffers decides your latency story.

Go: reduce pressure on the collector by reusing memory

package ioapi

import (
    "bytes"
    "encoding/json"
    "errors"
    "io"
    "sync"
)

type Payload struct {
    UserID string `json:"user_id"`
    Plan   string `json:"plan"`
    Count  int    `json:"count"`
}

var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

func validate(p *Payload) error {
    if p.UserID == "" || p.Count < 0 {
        return errors.New("invalid payload")
    }
    return nil
}

// Process decodes, validates, and writes a compact response.
// It avoids short-lived garbage by reusing pooled buffers.
func Process(r io.Reader, w io.Writer) error {
    dec := bufPool.Get().(*bytes.Buffer)
    dec.Reset()
    defer bufPool.Put(dec)
    // Copy the request body into a reusable buffer.
    if _, err := io.Copy(dec, r); err != nil {
        return err
    }
    var p Payload
    if err := json.Unmarshal(dec.Bytes(), &p); err != nil {
        return err
    }
    if err := validate(&p); err != nil {
        _, _ = w.Write([]byte(`{"ok":false}`))
        return nil
    }
    enc := bufPool.Get().(*bytes.Buffer)
    enc.Reset()
    defer bufPool.Put(enc)
    enc.WriteString(`{"ok":true,"user":"`)
    enc.WriteString(p.UserID)
    enc.WriteString(`"}`)
    _, err := io.Copy(w, enc)
    return err
}

What matters:

  • Reuse, don’t spray: pooled bytes.Buffer objects keep the collector scanning fewer fresh allocations.
  • Pre-size when you can: resetting a buffer and writing into it is cheaper than growing backing arrays repeatedly (see the sketch after this list).
  • Ergonomics remain high; you keep Go’s speed of iteration while smoothing allocations.
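
That pre-sizing bullet deserves a concrete shape. A minimal sketch, assuming you can estimate the output size up front (fill and estimatedSize are illustrative names):

func fill(estimatedSize int, chunks [][]byte) *bytes.Buffer {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset()
    buf.Grow(estimatedSize) // one allocation up front instead of several regrows
    for _, c := range chunks {
        buf.Write(c)
    }
    return buf // caller hands it back with bufPool.Put when done
}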

Rust: make lifetimes obvious and borrowing the default

use std::io::{Read, Write};

#[derive(Debug)]
struct Payload<'a> {
    user_id: &'a str,
    plan:    &'a str,
    count:   i32,
}

// Very lightweight parsing for the demo; in production you'd use a proper parser.
fn parse<'a>(raw: &'a str) -> Option<Payload<'a>> {
    let uid = raw.split("\"user_id\":\"").nth(1)?.split('"').next()?;
    let plan = raw.split("\"plan\":\"").nth(1)?.split('"').next()?;
    let cnts = raw.split("\"count\":").nth(1)?;
    let count: i32 = cnts
        .chars()
        .skip_while(|c| !c.is_ascii_digit() && *c != '-')
        .take_while(|c| c.is_ascii_digit() || *c == '-')
        .collect::<String>()
        .parse()
        .ok()?;
    Some(Payload { user_id: uid, plan, count })
}

fn validate(p: &Payload) -> bool {
    !p.user_id.is_empty() && p.count >= 0
}

pub fn process<R: Read, W: Write>(mut r: R, mut w: W) -> std::io::Result<()> {
    // Reuse one owned String for the entire request lifecycle.
    let mut body = String::with_capacity(1024);
    r.read_to_string(&mut body)?;
    if let Some(p) = parse(&body) {
        if !validate(&p) {
            w.write_all(br#"{"ok":false}"#)?;
            return Ok(());
        }
        // Assemble the response with one preallocated buffer.
        let mut out = String::with_capacity(32 + p.user_id.len());
        out.push_str(r#"{"ok":true,"user":""#);
        out.push_str(p.user_id);
        out.push('"');
        out.push('}');
        w.write_all(out.as_bytes())?;
    } else {
        w.write_all(br#"{"ok":false}"#)?;
    }
    Ok(())
}

What matters:

  • Own once, borrow often: the Payload holds &str slices into the request buffer. No extra String allocations.
  • Deterministic drops: when out and body go out of scope, they free immediately. There is no background tax.

The tactile difference in one glance

// Go: hold ergonomics, avoid heap churn, own the result explicitly.
var pool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

func join(a, b string) []byte {
    buf := pool.Get().(*bytes.Buffer)
    buf.Reset()
    defer pool.Put(buf)
    buf.WriteString(a)
    buf.WriteByte(':')
    buf.WriteString(b)
    // Return an owned slice so callers aren't tied to the pool.
    out := append([]byte(nil), buf.Bytes()...)
    return out
}

// Rust: preallocate once, borrow inputs, return an owned String.
fn join(a: &str, b: &str) -> String {
    let mut s = String::with_capacity(a.len() + b.len() + 1);
    s.push_str(a);
    s.push(':');
    s.push_str(b);
    s
}

Both are fast. The mental model is what changes your defaults.

When the pressure rises: how the lines move

GC pressure zones (Go)

alloc rate → low    | smooth
             medium | smooth | micro-pause blips appear under load
             high   | smooth | blips  | more frequent assist work
             ------------------------------------------------▶ time

In Go, short-lived garbage creates more work for the collector. Reduce creation of throwaway objects and the line smooths out. In Rust, accidental cloning inflates memory and cache churn. Borrowing makes the line boring again.

Concurrency without surprises

Go: copy when retaining beyond scope, reuse for speed

type Job struct {
    ID   int
    Body []byte
}

func worker(in <-chan Job, out chan<- int) {
    for j := range in {
        // If you need to keep j.Body, take ownership with an explicit copy.
        // Otherwise, process and discard to avoid retention.
        id := fastHash(j.Body) // fastHash stands in for your hashing function
        out <- id
    }
}

The habit: if a goroutine must hold data after the sender is gone, make the copy deliberate and obvious. Otherwise, keep it local and ephemeral. Background GC will do the rest.
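
A minimal sketch of that deliberate copy, with a hypothetical retained map standing in for whatever outlives the channel:

func retain(j Job, retained map[int][]byte) {
    owned := append([]byte(nil), j.Body...) // the copy is explicit and visible
    retained[j.ID] = owned                  // safe even after the sender reuses j.Body
}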

Rust: share only when you truly must

use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Debug)]
struct Job { id: u64, body: Vec<u8> }

fn main() {
    let queue = Arc::new(Mutex::new(Vec::<Job>::new()));
    let q1 = Arc::clone(&queue);
    let t = thread::spawn(move || {
        let mut v = q1.lock().unwrap();
        v.push(Job { id: 1, body: vec![1, 2, 3, 4] });
    });
    t.join().unwrap();
    // Ownership is explicit; nothing is freed until the last Arc drops.
    let v = queue.lock().unwrap();
    println!("jobs: {}", v.len());
}

The habit: reach for Arc only at the boundary where multiple threads must share ownership. Inside a thread, keep & borrows and pass slices. Fewer owners means fewer locks.
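
Inside the thread boundary, plain borrows do the work. A minimal sketch (total_len is a hypothetical helper):

fn total_len(jobs: &[Job]) -> usize {
    // A borrowed slice: no Arc, no lock, no extra owner.
    jobs.iter().map(|j| j.body.len()).sum()
}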

Diagnostics that actually help

If your Go service shows a sawtooth pattern in RSS while allocations spike, you are likely creating lots of short-lived objects. Cut conversions between []byte and string where possible, pre-size slices, and pull small buffers from a pool you control. The collector calms down when you stop feeding it mountains of fresh work.
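
A minimal before-and-after sketch of that advice (concat is a hypothetical hot-path helper; the "before" would build strings chunk by chunk):

func concat(chunks [][]byte) []byte {
    n := 0
    for _, c := range chunks {
        n += len(c)
    }
    out := make([]byte, 0, n) // pre-sized: one allocation instead of many regrows
    for _, c := range chunks {
        out = append(out, c...) // stay in []byte; no string conversions on the way
    }
    return out
}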

If your Rust service grows memory steadily during normal traffic, audit hot functions for .to_string() and .clone(). The borrow checker cannot save you from deliberate copies; it only guarantees correctness. Own large data once, pass &str and &[u8] everywhere else, and let scope boundaries do the cleanup.

If you see lock contention in either language, the problem is almost always shared structures that grew beyond what they were meant to hold. Keep ownership local and pass only what is needed.
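
As for the clone audit, it usually takes this shape. A hedged sketch with hypothetical names:

// Before: an owned String allocated on every lookup.
fn plan_of_owned(line: &str) -> String {
    line.split(':').nth(1).unwrap_or("").to_string()
}

// After: a borrowed view; the caller's buffer does the storage.
fn plan_of(line: &str) -> Option<&str> {
    line.split(':').nth(1)
}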

Two short habits that move p95

In Go, reuse instead of spray. A little discipline with buffers (plus pre-sizing) turns the collector into a background detail rather than a foreground actor. In Rust, borrow instead of clone. Make ownership boring. When you do need shared ownership, keep it at the edges with Arc, not in the center of the codebase.

These are not rules for a poster. They are choices you can make in the next commit.

The real cost curve

At small scale, developer time dominates. Go frequently wins because you move faster and the collector’s tax is small. As scale grows, CPU time, cache behavior, and tail latency dominate. Rust frequently wins because there is no background collector and your memory story is explicit. In the middle, team habits matter more than the logo on your laptop.

A quick before-and-after you can try today

Go: pre-size and cut conversions

// Avoid repeated growth by estimating capacity upfront.
// Also avoid creating strings from []byte unless absolutely necessary.
func joinIDs(ids []int) []byte {
    b := make([]byte, 0, len(ids)*12) // assumes 32-bit-range IDs: "-2147483648," is 12 chars
    for i, id := range ids {
        b = strconv.AppendInt(b, int64(id), 10)
        if i+1 < len(ids) {
            b = append(b, ',')
        }
    }
    return b
}

Rust: keep data owned once, expose borrowed views

struct Catalog {
    names: Vec<String>,
}

impl Catalog {
    fn find<'a>(&'a self, needle: &str) -> Option<&'a str> {
        self.names.iter()
            .find(|n| n.contains(needle))
            .map(|s| s.as_str())
    }
}

Both examples are simple on purpose. They reinforce the right default: reuse in Go, borrow in Rust.

What your pager actually feels like

Load spike ─────────────────────────▶

Go   : ▁▁▁▁▂▁▁▂▁▁▁▂  (collector hum; reuse calms the pattern)
Rust : ▁▁▁▁▁▁▁▁▁▁▁   (flat unless you introduce unnecessary copies)

If you see blips in Go, look for short-lived allocations and string/byte churn. If you see growth in Rust, look for cloning and oversized shared structures. Most teams don’t need heroics; they need fewer accidental allocations.

Watch-outs worth remembering

Pooling enormous buffers in Go can pin memory and increase overall footprint. Pool only the small, frequent objects on your hot path. Let the big ones be created deliberately when needed.
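
One way to honor that watch-out is to cap what goes back into the pool. A minimal sketch, assuming a 64 KiB ceiling and the bufPool from earlier (maxPooled and release are illustrative names):

const maxPooled = 64 << 10 // pool only small, frequent buffers

func release(buf *bytes.Buffer) {
    if buf.Cap() > maxPooled {
        return // drop the big one; let the GC reclaim it instead of pinning it
    }
    buf.Reset()
    bufPool.Put(buf)
}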

In Rust, by-value parameters can quietly force clones. A signature that takes T by value makes callers clone when they still need the original; taking &T lets them lend it for free. Adjusting one signature can remove surprising allocations down the chain.
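
A hedged sketch of that signature fix (Config is a hypothetical type):

#[derive(Clone)]
struct Config { name: String }

fn render_owned(cfg: Config) -> String {  // by value: keep-and-call sites must clone
    format!("cfg:{}", cfg.name)
}

fn render(cfg: &Config) -> String {       // by reference: callers lend it for free
    format!("cfg:{}", cfg.name)
}

fn caller(cfg: &Config) {
    let _a = render_owned(cfg.clone());   // the forced clone
    let _b = render(cfg);                 // no clone
}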

Cross-language rule: if something must live longer than a single step in your pipeline, make that decision explicit. Hidden retention is the enemy of predictable memory.

The quotable pick rule

If your bottleneck is the calendar, pick Go. If your bottleneck is the cache line, pick Rust.

That one sentence captures the real trade: delivery speed versus deterministic memory behavior. You can succeed with either; you can also make either miserable.

The difference is whether your habits match the language's strengths.

Keep the line boring

Your users don't care whether the win came from pooled buffers or borrowed slices. They care that the graph is flat when they need it to be. Go saves your calendar. Rust saves your CPU. Pick the habit that saves your scarcest resource right now, and switch when your graph changes.

You do not need grand rewrites to earn the result. In Go, reuse and pre-size. In Rust, borrow and keep ownership local. Do that, and the next time traffic surges, your page stays quiet and your graph stays calm.

One last sketch for your mental drawer

Throughput stability under rising load

flat ─┐ Rust with clear lifetimes
      ├───────────────╮
      │               │
      │               └────────
      │
      ├────╮ Go with GC-aware code (buffer reuse)
      │    ╰───────────────
      │
      └╮  noisy  Go with accidental heap churn
       ╰──────────────────────────────────────────▶ load

Keep the line boring. People will call it magic. You will call it a good night’s sleep.