
Beyond delete(): Solving Go Map Memory Leaks with a Rust Perspective

From JOHNWICK

Ever been there? It’s late, you know, the kind of late where your brain’s a bit foggy, and the only thing really awake is the glow from your screen. Then, BAM! Your dashboard just explodes with alerts — memory spikes everywhere, and on a service that’s supposed to be chill. Ugh. If you’ve been coding for a while, especially on the backend, you’ve probably had this nightmare dance: the memory leak that just won’t quit.

For ages, I kinda thought I had memory management figured out. Go, with its neat little garbage collector, always promised this sweet escape from all the manual memory fiddling. And Rust? Oh man, Rust, with its whole compile-time safety thing, felt like an impenetrable fortress against memory woes. But here’s the kicker: even with these awesome languages, subtle little quirks can totally lead to memory bloating out of control. Today, I wanna spill the beans on a particularly nasty incident — a memory leak chilling right inside a seemingly innocent map in Go. And trust me, looking at how Rust handles things really helped shine a light on what was actually happening.

We’re gonna walk through the whole mess, figure out why Go’s maps can be a bit tricky, and then look at some solid ways to fix it. After that, we’ll peek at Rust’s approach, which, honestly, just skips this whole problem category entirely. Ready to untangle this puzzle and make your apps way more robust? Let’s jump in! 👇

The Case of the Ever-Growing Cache 📈

So, our story starts with a pretty standard setup: a super-fast session cache. The idea was simple, right? Store millions of temporary user sessions, mapping a userID (just an int) to a Session struct full of user bits and bobs. The plan was, when a session kicked the bucket, we'd just delete it from the map, and poof, the memory would just go back to where it belonged. Easy peasy. Our initial Go code? Looked perfectly fine, you know, totally clean and straightforward:

package main

import (
	"fmt"
	"runtime"
	"time"
)

type Session struct {
	UserID   int
	LastSeen time.Time
	Metadata map[string]string
	// Let's make this struct chunky to exaggerate memory use
	BigData [1024]byte
}

func printAlloc() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Allocated Heap: %v MiB\n", m.Alloc/1024/1024)
}

func main() {
	n := 1_000_000 // A million sessions! Can you imagine?
	sessionCache := make(map[int]Session)

	fmt.Println("--- Initial State ---")
	printAlloc() // Allocated Heap: ~0 MiB - All good here!

	// Add a million sessions
	for i := 0; i < n; i++ {
		sessionCache[i] = Session{
			UserID:   i,
			LastSeen: time.Now(),
			Metadata: map[string]string{"key": "value"},
		}
	}
	fmt.Println("\n--- After adding sessions ---")
	printAlloc() // Allocated Heap: ~1000 MiB (example output for a large struct) - Whoa, that's a lot!

	// Now, delete all sessions
	for i := 0; i < n; i++ {
		delete(sessionCache, i)
	}
	fmt.Println("\n--- After deleting sessions ---")
	runtime.GC() // Explicitly ask Go to clean up. Please?
	printAlloc() // Allocated Heap: Still high! Say, ~950 MiB - My face when I saw this 😮

	runtime.KeepAlive(sessionCache) // Prevent compiler from GC-ing the whole map
}

You should have seen our faces when, after deleting all million sessions and even hitting the garbage collector with a stick, the memory just… didn’t budge much. The heap stayed stubbornly huge, hogging hundreds of megabytes. And get this, our sessionCache was practically empty. Like, len(sessionCache) would scream 0 at us. What gives?! This wasn’t just a one-off thing. It was a pattern, a really consistent one. It actually led to our service getting OOMKilled — yeah, Out Of Memory Killed — during a busy traffic surge way back in October 2025. Honestly, it felt like our Go maps were acting like squirrels, just stashing away nuts — “just in case” they’d need that memory later, you know?

Diagnosing the Go Map’s Secret Stash 🧐

My first thought, of course, was “memory leak!” But then, wait. Go has a garbage collector! So how can memory leak? This isn’t C where you’re manually mallocing and freeing stuff yourself! Here’s the big ‘aha!’ moment: a “memory leak” in a language like Go, one with a garbage collector, isn’t usually about forgotten pointers. Nah, it’s more about memory that the program can still technically reach, but that you, the developer, don’t actually need anymore. The garbage collector can’t just toss it because, well, there’s still a reference to it somewhere.

The heart of the problem, it turns out, is how Go’s map works under the hood:

  • Buckets and Growth: Go maps use an array of hash “buckets.” When your map grows and gets packed, it just allocates more buckets. So, if your map once held a gazillion items, it’ll size its underlying bucket array to fit that peak number.
  • delete() Doesn't Shrink: And this is the real kicker. When you delete(map, key), Go doesn't actually go and free up that specific spot in the bucket's memory. Instead, it kinda just marks that slot as "empty" or a "tombstone." The overall memory footprint of the map - meaning those underlying buckets - does not shrink. They stay at their biggest size. Period.
  • GC Limitations: Now, Go’s garbage collector will eventually pick up the memory for the values themselves (especially if they’re big structs or pointers to stuff on the heap) once the map doesn’t point to them anymore. But the map’s internal structure — those bucket arrays? They stick around. So while the contents might get cleaned up, the container itself stays bloated.

Let’s try a little experiment with a slightly different Session struct to really see the difference between storing pointers versus the actual values:

package main

import (
	"fmt"
	"runtime"
	"time"
)

type Session struct {
	UserID   int
	LastSeen time.Time
	Metadata map[string]string
	BigData  [1024]byte
}

func printAlloc() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Allocated Heap: %v MiB\n", m.Alloc/1024/1024)
}

func main() {
	n := 1_000_000

	// ⭐️ Scenario 1: Map of values (our initial head-scratcher)
	fmt.Println("--- Map of Values ---")
	valueMap := make(map[int]Session)
	printAlloc()
	for i := 0; i < n; i++ {
		valueMap[i] = Session{UserID: i, LastSeen: time.Now(), BigData: [1024]byte{}}
	}
	printAlloc() // Still pretty high, right?
	for i := 0; i < n; i++ {
		delete(valueMap, i)
	}
	runtime.GC()
	printAlloc() // Memory still annoyingly high!

	// ⭐️ Scenario 2: Map of pointers - Let's see!
	fmt.Println("\n--- Map of Pointers ---")
	ptrMap := make(map[int]*Session) // Big change here: notice the '*'
	printAlloc()
	for i := 0; i < n; i++ {
		ptrMap[i] = &Session{UserID: i, LastSeen: time.Now(), BigData: [1024]byte{}}
	}
	printAlloc() // Yeah, still high, but less for the map's internal buckets this time
	for i := 0; i < n; i++ {
		delete(ptrMap, i)
	}
	runtime.GC()
	printAlloc() // 🎉 Memory drops way more! (Pointers are zeroed, so the actual Session structs can finally be garbage collected)

	runtime.KeepAlive(valueMap) // Just to make sure the compiler doesn't get too clever
	runtime.KeepAlive(ptrMap)   // Same here
}

Even when using pointers, the map’s structure itself, the buckets, don’t just magically shrink. But, by stuffing pointers into the map, those buckets only hold tiny pointer values (like 8 bytes on a 64-bit system), not the entire chunky Session struct. So when you delete something, those pointers effectively become nil, making the actual Session struct data eligible for the GC. This really helps cut down the memory that the values are holding, even if the map's internal layout stays the same size. It’s a common trick, but honestly, it doesn't solve the core problem of the map itself not shrinking. Oh, and fun fact: if a map key or value goes over 128 bytes, Go's runtime actually starts storing a pointer to it anyway, instead of embedding it directly in the bucket. For stuff that's exactly 128 bytes, Go might still inline it, so being explicit with a pointer type can still be a smart move.

Solutions for Go and Rust’s Contrasting Approach 🎯

Alright, so how do we deal with this “feature” (not a bug, remember?) in Go that makes our maps gobble up memory?

Fixing the Go Map Memory Bloat

  • Reinitialize the Map: Honestly, the simplest way, and often the best, is just to recreate the map. When you really want to snatch that underlying memory back, you just make a brand new, empty map. The old, oversized one? It becomes totally unreachable and the garbage collector can finally sweep it all away — buckets and everything. It’s like moving into a new, smaller house instead of trying to empty out a giant mansion.

// ... (rest of your setup)

// To truly "clear" and shrink the map, like, for real:
sessionCache = make(map[int]Session) // Assign a new, empty map. Bye-bye old map!
runtime.GC()                         // Give the old map a nudge towards oblivion
fmt.Println("\n--- After reinitializing map ---")
printAlloc() // Memory back to almost nothing! Victory!

  • This technique is a lifesaver, especially for caches or other data structures that you periodically want to completely flush.
  • Use clear() (Go 1.21+): Since Go 1.21 dropped in August 2023, we've got a shiny new clear() function built right in. Now, clear() is super handy for zapping all the entries and setting their values to zero (which lets the GC grab any pointed-to data). BUT, and this is a big BUT - it doesn't shrink the map's underlying capacity. It's faster than looping through and deleting everything, for sure, but don't expect it to give back that bucket memory.

// ... (rest of your setup)

// Using the shiny new clear() builtin (Go 1.21+)
clear(sessionCache)
runtime.GC()
fmt.Println("\n--- After using clear() ---")
printAlloc() // Still high, but faster than deleting one-by-one!

So, yeah, clear() is awesome for just emptying out the contents quickly. But if you're really sweating about peak memory usage, a full reinitialization is still your best bet, since it's the only way to actually get that bucket capacity back.

  • Pointer Values (as we saw above): If you’re stuffing big old structs into your map, using map[KeyType]*ValueType is smart. It makes sure that only small pointers live in the map's buckets. When a key gets deleted, that pointer turns nil, and boom, the actual data it pointed to becomes fair game for the GC. This dramatically cuts down on the memory held by the values, even if the map's internal structure stays stubbornly big.
  • Shard Maps: For those absolutely massive, constantly changing caches, you might wanna break your single, giant map into a bunch of smaller ones - a technique called "sharding." This way, you can periodically recreate or clear out individual shards without messing with your entire cache. It's a bit more work, but for huge, volatile datasets, it's often worth it.

The Rust Way: Memory Safety by Design 🛡️

Now, let’s swing over to Rust. Man, Rust just does things differently when it comes to memory. Instead of a garbage collector running around, it uses this super cool ownership and borrowing system that’s actually checked before your code even runs. At compile time! This system gives you crazy strong guarantees about memory safety and basically stops entire categories of bugs, including loads of those pesky memory leaks. When you look at std::collections::HashMap in Rust, its whole philosophy just shines:

  • Ownership and Drop: In Rust, every single value has an owner. And when that owner goes out of scope? BAM, the value is dropped, and its memory is automatically, deterministically freed up. It’s like clockwork, right when that scope ends. No surprises!
  • clear() and shrink_to_fit(): When you hit clear() on a HashMap, all those key-value pairs get dropped, and their memory is reclaimed. But here’s the really sweet part: HashMap also has a shrink_to_fit() method. This baby reduces the capacity of the map down to just what’s needed for its current elements (which, after a clear(), would be exactly zero). This is where Rust explicitly gives back that extra memory.

Let’s check out the Rust equivalent code:

use std::collections::HashMap;

#[derive(Debug)]
struct Session {
    user_id: i32,
    // Add other fields, similar to the Go struct
    big_data: [u8; 1024],
}

fn main() {
    let n = 1_000_000;
    let mut session_cache: HashMap<i32, Session> = HashMap::new();

    println!("--- Initial State ---");
    // Memory profiling in Rust often needs external tools,
    // so for now we'll just trust `shrink_to_fit` to do its job.

    // Add a million sessions
    for i in 0..n {
        session_cache.insert(
            i,
            Session {
                user_id: i,
                big_data: [0; 1024], // Just filling with zeros for the example
            },
        );
    }
    println!("\n--- After adding sessions (HashMap size: {}) ---", session_cache.len());

    // Now, clear all sessions
    session_cache.clear();
    println!("\n--- After clearing sessions (HashMap size: {}) ---", session_cache.len());

    // Explicitly shrink the capacity - this is the magic sauce ✨
    session_cache.shrink_to_fit();
    println!("\n--- After shrinking capacity ---");
    // If you were watching with a tool like `perf` or `Valgrind`,
    // you'd see a big, happy memory drop right here.
}

When this Rust code runs, especially after session_cache.clear() and then that awesome session_cache.shrink_to_fit(), you’d see the memory footprint pretty much go right back down to where it started. Those Session structs get dropped the moment they’re out of the map, and shrink_to_fit() is your explicit command to reclaim all that extra bucket memory. Now, shrink_to_fit() makes the memory available to Rust's allocator, which might or might not immediately hand it back to the operating system; it kinda depends on the allocator and what the OS is doing. But for practical purposes, it’s no longer your HashMap hogging it. If you're a real stickler and want all memory released ASAP, just reassigning the map (session_cache = HashMap::new();) would completely nuke the old map and its allocations. Rust’s tight control means these “phantom memory” issues are just way less common by default. And if you need to be super precise, you’ve got the tools for it. While Rust’s power does mean a steeper learning curve up front — you really gotta wrap your head around memory stuff early — it truly wipes out whole classes of runtime errors and weird memory behaviors. There are even community crates, like rehashinghashmap, that offer gradual shrinking if shrink_to_fit() is too much of a hammer for frequent use. Pretty neat, huh?

Key Takeaways from a Memory Odyssey 💡

This whole memory leak adventure with Go maps, and then seeing how cleanly Rust handles it, taught me some really important lessons:

  • No Language is Perfect: Yeah, both Go and Rust have their moments, you know? Go’s all about keeping things simple and giving you pretty predictable runtime performance, even if that means a bigger memory footprint for maps that have logically emptied out but never give back their buckets. Rust, on the other hand, is a control freak — in a good way! It’s all about super fine-grained memory control and rock-solid safety guarantees, even if it feels a bit tougher to get started with.
  • Understand the Internals: Seriously, just assuming how a data structure works inside can totally bite you later. Take a peek at the language docs, or even the source code if you’re brave! It can save you so many hours of head-scratching debugging. I mean, my god, the hours I’ve lost!
  • Proactive Memory Management: In Go, especially for services that run forever and have maps that get really busy, you gotta be on top of things. Periodically recreating maps, or at least understanding what clear() doesn't do for capacity, is super important. And for chunky values, consider using pointers. Your pprof tool? That's your best friend for keeping tabs on Go's memory use, trust me.
  • Rust’s Determinism: Rust’s ownership system just guarantees that memory is freed up exactly when it’s supposed to be — when a value leaves its scope or gets explicitly dropped. This just prevents that whole “reachable but not needed” memory problem we saw with Go’s maps. It’s a game changer.

Ultimately, picking the right tool is all about your specific needs. If you’re all about getting stuff done fast and a little extra memory usage here and there is fine, Go is still a brilliant choice. I mean, the current stable Go version, Go 1.25.2, just came out on October 7, 2025, and it keeps getting better. But if memory efficiency, top-tier performance, and those awesome compile-time checks against memory errors are your absolute must-haves, then Rust, well, it’s pretty much in a league of its own. So, what about you? Got any gnarly memory leak stories? Or maybe you’ve found Rust’s ownership system to be a total lifesaver for similar woes? Drop your thoughts and experiences in the comments below! Let’s swap war stories. 👇