
Inside the Stack Frame: What Rust Functions Really Compile To


The Illusion of Simplicity

Every Rust developer remembers their first fn. It feels clean, mathematical, safe — like this:

fn add(a: i32, b: i32) -> i32 {

   a + b

}

But what actually happens when you call add(5, 10)? If you think “it just adds two numbers,” you’re only seeing the surface. Beneath that line of code, the compiler orchestrates a micro-architecture:
stack frames, registers, prologues, epilogues, ABI calls, and safety bookkeeping. Rust may look modern, but under the hood — it’s dancing on the same ancient stage as C, the CPU stack, and the calling conventions of old.

This article peels back that layer and shows you what really happens when a Rust function runs.

Let’s Start Simple

Take this small Rust program:

fn add(a: i32, b: i32) -> i32 {

   a + b

}

fn main() {

   let x = add(10, 20);
   println!("{}", x);

}

On the surface:

  • add takes two integers.
  • Adds them.
  • Returns the result.

But let’s compile it:

rustc --emit=asm main.rs

Open the .s file (assembly), and you’ll see something like this (simplified x86-64 output):

add:

   push    rbp
   mov     rbp, rsp
   mov     eax, edi
   add     eax, esi
   pop     rbp
   ret

That’s not Rust.
That’s your CPU.

And what you’re seeing is a stack frame — the skeleton of how every Rust function actually exists at runtime.

What’s a Stack Frame, Really?

A stack frame is the scratchpad every function gets when it’s called.
Think of it as a small “workspace” in memory where:

  • Parameters are passed
  • Local variables are stored
  • Return addresses are saved
  • Temporary data lives

You can visualize it like this:

┌──────────────────────────┐  ← Higher memory addresses
│ Return Address (from fn) │
├──────────────────────────┤
│ Previous Frame Pointer   │
├──────────────────────────┤
│ Local Variables          │
├──────────────────────────┤
│ Function Arguments       │
└──────────────────────────┘  ← Lower memory addresses (grows down)
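You can watch frames pile up at runtime. Here is a minimal sketch — the exact addresses and offsets are implementation details, and whether deeper frames sit at lower addresses depends on the platform and build settings:

```rust
// Each recursive call gets its own frame, so each call's `local`
// lives at a different (typically lower) stack address.
fn depth(n: u32, addrs: &mut Vec<usize>) {
    let local: u32 = n; // a slot in this call's frame
    addrs.push(&local as *const u32 as usize);
    if n < 3 {
        depth(n + 1, addrs); // push another frame below this one
    }
}

fn main() {
    let mut addrs = Vec::new();
    depth(0, &mut addrs);
    for (i, a) in addrs.iter().enumerate() {
        println!("call depth {i}: local at {a:#x}");
    }
}
```

All four frames are alive at once when the addresses are recorded, which is why every `local` gets a distinct address.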

When Rust compiles your code, it uses the System V ABI (on Linux) or the Microsoft x64 calling convention (on Windows) to manage these frames.

Step by Step: What Happens When You Call a Function

Let’s walk through the execution flow for add(10, 20).

1. Arguments Passed via Registers

The first few arguments are placed into CPU registers:

  • On x86-64 Linux: rdi, rsi, rdx, rcx, r8, r9
  • On Windows: rcx, rdx, r8, r9

So here:

  • 10 goes into edi (the low 32 bits of rdi, since i32 is a 32-bit value)
  • 20 goes into esi (the low 32 bits of rsi)

2. Function Prologue

When add starts executing:

  • It pushes the base pointer (rbp) to the stack (saving the previous frame).
  • Sets the new base pointer: mov rbp, rsp
  • Allocates stack space for locals (if any).

3. Computation

The function executes its logic:

mov eax, edi  ; move first arg into eax
add eax, esi  ; add second arg

eax now holds the result — 30.

4. Epilogue

Before returning:

  • It restores the old stack frame (pop rbp)
  • Returns to the caller (ret), jumping to the saved return address

That’s it. 
Every Rust function does this — no magic, just machine choreography.

Example: With Locals and Scopes

Let’s make it more interesting:

fn calc() -> i32 {

   let a = 3;
   let b = 7;
   let c = a * b;
   c + 10

}

Compile and inspect (simplified pseudo-assembly):

calc:

   push    rbp
   mov     rbp, rsp
   mov     DWORD PTR [rbp-4], 3     ; a = 3
   mov     DWORD PTR [rbp-8], 7     ; b = 7
   mov     eax, [rbp-4]
   imul    eax, [rbp-8]             ; c = a * b
   add     eax, 10
   pop     rbp
   ret

Every let you write becomes a reserved slot in the stack frame ([rbp-4], [rbp-8], etc.).
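You can poke at those slots from safe Rust by printing where the locals live. A quick sketch — the exact offsets and their ordering are compiler-chosen, so treat the printed addresses as illustrative:

```rust
fn calc() -> i32 {
    let a: i32 = 3;
    let b: i32 = 7;
    // In an unoptimized build, `a` and `b` occupy nearby slots in
    // calc's stack frame; the {:p} formatter prints their addresses.
    println!("a at {:p}, b at {:p}", &a, &b);
    let c = a * b;
    c + 10
}

fn main() {
    assert_eq!(calc(), 31); // same arithmetic as the assembly above
}
```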

Stack Frames and Rust Safety

Now here’s where Rust gets emotional. 
The stack isn’t just a place for data — it’s where ownership rules come to life.

Rust’s borrow checker ensures that:

  • Stack data doesn’t outlive its scope.
  • References never dangle beyond their frame.
  • Every function call respects ownership boundaries.

Example:

fn bad_ref<'a>() -> &'a i32 {

   let x = 42;
   &x  // ❌ Error: x does not live long enough

}

The compiler knows that x lives in the current stack frame, which vanishes when the function returns. 
That’s why Rust screams here. If you visualize it:

┌──────────────────────────┐
│ Stack frame (bad_ref)    │
│   x = 42                 │  ← Lives only until fn returns
│   &x returned! ❌         │
└──────────────────────────┘

The pointer would point into garbage after return.
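Two sketches of how this is usually fixed — move the value out instead of borrowing it, or borrow data that lives outside any stack frame:

```rust
// Fix 1: return the value itself; i32 is Copy, so nothing dangles.
fn good_copy() -> i32 {
    let x = 42;
    x
}

// Fix 2: borrow a `static`, which lives in static memory, not in a frame.
fn good_static() -> &'static i32 {
    static X: i32 = 42;
    &X
}

fn main() {
    assert_eq!(good_copy(), 42);
    assert_eq!(*good_static(), 42);
}
```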
The borrow checker catches that before you ever compile to assembly.

Stack vs. Heap: The Real Split

Rust functions default to stack allocation because it’s faster, cleaner, and deterministic.

But when you use Box, Vec, or String, those types allocate on the heap and store only small pointers on the stack:

fn heap_demo() {

   let v = vec![1, 2, 3];

}

Stack frame view:

┌──────────────────────────┐
│ Return address           │
│ Base pointer             │
│ v (Vec { ptr, len, cap })│ → Heap: [1, 2, 3]
└──────────────────────────┘
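You can confirm the split with std::mem: the Vec header on the stack is just three machine words, no matter how much data sits on the heap behind it:

```rust
use std::mem::{size_of, size_of_val};

fn main() {
    let v = vec![1i32, 2, 3];
    // The header (ptr, len, cap) in the stack frame: three usizes.
    assert_eq!(size_of_val(&v), 3 * size_of::<usize>());
    // The elements behind the pointer, on the heap: 3 * 4 bytes.
    println!("stack: {} bytes, heap: {} bytes",
             size_of_val(&v),
             v.len() * size_of::<i32>());
}
```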

So the stack only knows where the data is, not what’s inside it. That separation is what allows Rust to mix performance with safety — no hidden allocations unless you explicitly opt in.

Architecture Design: Function as a Micro-Unit

Let’s map out a Rust function’s architecture as seen by the compiler:

┌──────────────────────────────┐
│ High-level Rust fn           │
│ fn calc(a: i32, b: i32) -> i32 │
└──────────┬───────────────────┘
           │
    ┌──────▼──────┐
    │ HIR         │  ← Desugared syntax tree: names, scopes
    └──────┬──────┘
           │
    ┌──────▼──────┐
    │ MIR         │  ← Simplified for borrow checking
    └──────┬──────┘
           │
    ┌──────▼──────┐
    │ LLVM IR     │  ← Platform-independent assembly
    └──────┬──────┘
           │
    ┌──────▼───────┐
    │ Machine Code │  ← Stack frames, registers, syscalls
    └──────────────┘

Rust’s MIR (Mid-level IR) is where lifetimes, scopes, and ownership are resolved.
By the time it hits LLVM IR, every function has already been broken into machine-ready units with explicit stack management.

Code Flow Example

Let’s see what happens when Rust compiles a nested call chain:

fn multiply(x: i32, y: i32) -> i32 {

   x * y

}

fn compute(n: i32) -> i32 {

   let temp = multiply(n, 10);
   temp + 1

}

fn main() {

   let result = compute(5);
   println!("{}", result);

}

Stack frame flow:

main()
│
└── compute(5)
     │
     └── multiply(5, 10)
          ├── creates local frame
          ├── multiplies
          └── returns → compute()
     │
     └── adds +1
     └── returns → main()
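That frame-by-frame bookkeeping is also what panic unwinding relies on. A small sketch — each Drop fires as its frame is popped, innermost first:

```rust
struct Frame(&'static str);

impl Drop for Frame {
    fn drop(&mut self) {
        println!("unwinding: dropping frame '{}'", self.0);
    }
}

fn inner() {
    let _f = Frame("inner");
    panic!("boom"); // unwinding starts here
}

fn outer() {
    let _f = Frame("outer");
    inner();
}

fn main() {
    // catch_unwind stops the unwind at this frame;
    // "inner" is dropped before "outer" on the way up.
    let result = std::panic::catch_unwind(outer);
    assert!(result.is_err());
}
```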

At runtime, this stack expands and contracts like an accordion:
each call adds a frame, each return pops one off. This precise orchestration is why Rust panics unwind so gracefully — it knows exactly where each frame lives and dies.

Performance and Inlining

Now here’s the twist: Rust doesn’t always keep stack frames separate.
Thanks to LLVM’s optimizer, small functions get inlined — their body is directly pasted into the caller’s code.

So:

fn add(a: i32, b: i32) -> i32 { a + b }

fn main() { println!("{}", add(2, 3)); }

… might compile into assembly without any stack frame at all for add.

Inlining kills function overhead but increases binary size — a classic tradeoff. Rust gives you control through attributes:

#[inline(always)]
fn fast_add(a: i32, b: i32) -> i32 { a + b }

#[inline(never)]
fn safe_add(a: i32, b: i32) -> i32 { a + b }

Use #[inline(always)] sparingly — it’s the strongest hint you can give, and the optimizer follows it almost unconditionally, even when inlining bloats your binary.

The Future: Zero-Cost Function Calls?

With Rust’s push into embedded and OS-level development, the compiler team is experimenting with function merging and frame reuse to minimize call overhead in tight loops.

The dream:
Functions so optimized that stack creation is virtually zero-cost — no frame setup, just register shuffles.

But that balance — between readability, safety, and control — is what makes Rust special.

Final Thoughts

Every fn you write in Rust is a tiny cathedral of CPU design. Behind the simplicity of:

fn do_something() { ... }

…lies a story of:

  • Stack allocation
  • Lifetime tracking
  • ABI conformance
  • Register juggling
  • Safety validation

Rust doesn’t just compile to fast code.
It compiles to understandable architecture — where every function has a predictable cost, footprint, and lifetime.

That’s what makes Rust’s model so beautiful: 
Every abstraction hides power, but never mystery.

  • Every Rust function gets a stack frame: arguments, locals, return address.
  • The compiler follows strict ABI rules and ownership semantics.
  • Borrow checker ensures references never outlive their frames.
  • Small functions can be inlined to avoid stack setup.
  • Rust’s transparency gives developers the mental model of real machines, not magical runtimes.

Read the full article here: https://medium.com/@theopinionatedev/inside-the-stack-frame-what-rust-functions-really-compile-to-5b676e74f277