Futures and Wakers Explained — The Real Async Engine Inside Rust
When I first learned async programming in Rust, I made a rookie mistake. I assumed async and await worked like they do in JavaScript: just yield, resume, done. Oh, how wrong I was. Rust's async system is nothing like JS, Python, or Go. It has no built-in runtime and no threads magically waiting around; it's pure state-machine wizardry. Underneath every .await, there's an engine of Futures and Wakers quietly scheduling, polling, and waking your code, all without a single hidden loop. This is the real async engine inside Rust. Let's unwrap it layer by layer.

What Is a Future in Rust, Really?

At first glance, a Future looks like something that "represents a value that will be ready later." But in Rust, that definition isn't quite right. In Rust, a Future is a state machine that you manually poll until it completes. Here's its actual definition:

pub trait Future {
    type Output;

    fn poll(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Self::Output>;
}

See that poll() function? That's the entire async engine. There's no "magic runtime": just something that keeps calling poll() until it returns Poll::Ready(value). If it's not ready yet, it returns Poll::Pending, and that's when Wakers come in.
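To make that concrete, here's a tiny hand-written future (Immediate is just an illustrative name, not a std type) that is ready on the very first poll, so it never needs a Waker at all:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A toy future that never has to wait: it is ready on the first poll.
struct Immediate(u32);

impl Future for Immediate {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // Nothing to wait for, so hand the value back immediately.
        Poll::Ready(self.0)
    }
}

A future that actually has to wait for something, a timer or a socket read, returns Poll::Pending instead and relies on the Waker machinery described next.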
The Flow: Future + Waker = Async Engine

Let's visualize the high-level architecture:

┌───────────────────┐
│      Executor     │
│ (runs event loop) │
└─────────┬─────────┘
          │ polls futures
          ▼
┌───────────────────┐
│       Future      │
│  (state machine)  │
└─────────┬─────────┘
          │ returns Pending
          ▼
┌───────────────────┐
│       Waker       │
│  (signals wakeup) │
└─────────┬─────────┘
          │ schedules again
          ▼
┌───────────────────┐
│ Executor polls... │
└───────────────────┘
Every async fn in Rust gets compiled into a state machine that the executor repeatedly polls until it's done. When a future can't make progress (say it's waiting for I/O), it says: "I'm pending right now; here's a Waker you can use to nudge me later." When the I/O completes, the Waker says: "Hey executor, this task is ready again, poll me!"

Let's Write a Minimal Future (By Hand)

We'll write our own Future that waits for one second before completing.

use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
    thread,
    time::Duration,
    sync::{Arc, Mutex},
};

struct Delay {
    done: Arc<Mutex<bool>>,
}

impl Future for Delay {
    type Output = &'static str;

    fn poll(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Self::Output> {
        let done = self.done.lock().unwrap();
        if *done {
            Poll::Ready("done!")
        } else {
            let waker = cx.waker().clone();
            let done = self.done.clone();
            // Spawn a thread to simulate I/O completion.
            // (Simplification: a real timer future would spawn this once,
            // not on every poll that returns Pending.)
            thread::spawn(move || {
                thread::sleep(Duration::from_secs(1));
                *done.lock().unwrap() = true;
                waker.wake();
            });
            Poll::Pending
        }
    }
}

Let's test it:

fn main() {
    let future = Delay { done: Arc::new(Mutex::new(false)) };
    poll_executor(future);
}

// A minimal executor (the Unpin bound lets us use Pin::new below)
fn poll_executor<F: Future + Unpin>(mut fut: F) {
    use std::task::{RawWaker, RawWakerVTable, Waker};
    fn dummy_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { dummy_raw_waker() }
        fn wake(_: *const ()) {}
        fn wake_by_ref(_: *const ()) {}
        fn drop(_: *const ()) {}
        // A no-op vtable: this executor never sleeps on the waker,
        // it just polls again on a fixed interval.
        RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, wake, wake_by_ref, drop))
    }

    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);

    loop {
        match Pin::new(&mut fut).poll(&mut cx) {
            Poll::Ready(msg) => {
                println!("{}", msg);
                break;
            }
            Poll::Pending => {
                // Busy-wait politely; a real executor would park until woken.
                thread::sleep(Duration::from_millis(100));
            }
        }
    }
}

Output:

done!

Boom. You just implemented your own async engine from scratch. It polled the future, waited, and got the result: no runtime, no magic, just traits and wakers.

Inside the Compiler: async fn Turns Into a State Machine

When you write this:

async fn fetch_data() -> u32 {
    let x = read_from_socket().await;
    x + 1
}

The compiler desugars it into something roughly like this (pinning and borrow details omitted for clarity):

enum FetchDataState {
    Start,
    Waiting(SocketFuture),
    Done,
}

struct FetchData {
    state: FetchDataState,
}

impl Future for FetchData {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        loop {
            match self.state {
                FetchDataState::Start => {
                    // Create the inner future, then loop so it gets polled
                    // right away (the real desugaring does the same).
                    let fut = read_from_socket();
                    self.state = FetchDataState::Waiting(fut);
                }
                FetchDataState::Waiting(ref mut fut) => {
                    return match Pin::new(fut).poll(cx) {
                        Poll::Ready(v) => {
                            self.state = FetchDataState::Done;
                            Poll::Ready(v + 1)
                        }
                        // The inner future stored cx's Waker; we'll be polled again.
                        Poll::Pending => Poll::Pending,
                    };
                }
                FetchDataState::Done => panic!("future polled after completion"),
            }
        }
    }
}

That's your async function, turned into a hand-written state machine. Every .await saves progress, returns Pending, and resumes when the Waker signals.

Wakers: The Heartbeat of Async Rust

A Waker is just an object that tells the executor: "The Future you're polling is ready to make progress again." The Waker is created by the executor when the Future is first polled. The Future stores that Waker (via cx.waker()) somewhere, and when an event occurs (like the socket becoming ready), it calls:

waker.wake();

That signals the executor to poll the Future again. Wakers are thread-safe, cheap to clone, and completely user-implementable.
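Because Wakers are user-implementable, here's a minimal sketch of a hand-rolled one built on the standard library's std::task::Wake trait (FlagWaker is a hypothetical name used only for illustration). A tiny single-task executor could park on this flag instead of busy-polling:

use std::sync::{Arc, Condvar, Mutex};
use std::task::{Wake, Waker};

// Flips a flag and notifies a condvar so an executor thread could
// sleep until the task it owns is ready to be polled again.
struct FlagWaker {
    ready: Mutex<bool>,
    cv: Condvar,
}

impl Wake for FlagWaker {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

fn main() {
    let flag = Arc::new(FlagWaker { ready: Mutex::new(false), cv: Condvar::new() });
    let waker: Waker = Arc::clone(&flag).into();

    // A future would receive this via Context::from_waker(&waker) and call
    // cx.waker().wake_by_ref() when its I/O completes.
    waker.wake_by_ref();
    assert!(*flag.ready.lock().unwrap());
}

A production executor does the same thing in spirit, except that wake() typically pushes the task back onto a ready queue instead of flipping a flag.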
Architecture Diagram: The Real Async Flow

┌───────────────────────────────┐
│            Executor           │
│  (Event loop or task runner)  │
└───────────────┬───────────────┘
                │
                ▼
      ┌────────────────────┐
      │   Future (state)   │
      │  Polls + returns   │
      │    Poll::Pending   │
      └─────────┬──────────┘
                │
  ┌─────────────┴────────────┐
  │           Waker          │
  │  wakes task when ready   │
  └─────────────┬────────────┘
                │
                ▼
      ┌────────────────────┐
      │   Executor polls   │
      │        again       │
      └────────────────────┘
This loop continues until every Future returns Poll::Ready.

Real Example: Using Tokio's Executor

Here's what happens when you use tokio::spawn(), under the hood.
#[tokio::main]
async fn main() {
    let task1 = tokio::spawn(async {
        println!("Hello from task 1");
    });
    let task2 = tokio::spawn(async {
        println!("Hello from task 2");
    });
    task1.await.unwrap();
    task2.await.unwrap();
}

Behind the scenes (sketched in code right after this list):
- Tokio wraps your async blocks into Futures.
- It creates a Task struct containing the Future and a Waker handle.
- The executor keeps a queue of ready tasks.
- When a task blocks (returns Pending), it gives the Waker to the reactor (e.g., epoll, kqueue).
- When I/O completes, the reactor calls waker.wake().
- The executor polls it again.
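Here's a deliberately small sketch of that ready-queue idea using only the standard library. Task, run, and the channel-based queue are illustrative stand-ins, not Tokio's actual internals, and there's no reactor here: waking a task simply puts it back on the queue.

use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::{Arc, Mutex};
use std::task::{Context, Wake, Waker};

// One unit of work: a boxed future plus a handle back to the ready queue.
struct Task {
    future: Mutex<Pin<Box<dyn Future<Output = ()> + Send>>>,
    queue: SyncSender<Arc<Task>>,
}

impl Wake for Task {
    // Waking a task just means re-enqueueing it.
    fn wake(self: Arc<Self>) {
        let queue = self.queue.clone();
        let _ = queue.send(self);
    }
}

// The "executor": pop ready tasks and poll them until the queue drains.
fn run(ready: Receiver<Arc<Task>>) {
    while let Ok(task) = ready.recv() {
        let waker: Waker = Arc::clone(&task).into();
        let mut cx = Context::from_waker(&waker);
        // Pending futures keep a clone of the Waker and re-enqueue themselves
        // later; completed tasks are simply dropped.
        let _ = task.future.lock().unwrap().as_mut().poll(&mut cx);
    }
}

fn main() {
    let (tx, rx) = sync_channel(64);
    let hello: Pin<Box<dyn Future<Output = ()> + Send>> =
        Box::pin(async { println!("hello from a queued task") });
    let task = Arc::new(Task { future: Mutex::new(hello), queue: tx.clone() });
    tx.send(task).unwrap();
    drop(tx); // close the queue so `run` returns once it drains
    run(rx);
}

The key trick is that the Waker is literally an Arc of the task itself, so wake() can push the task straight back onto the queue the executor is draining.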
That's it: a polling orchestra of tiny, self-contained state machines.

Benchmark: Futures vs Threads

| Model          | Latency (per task) | Memory (per task) | Scalability              |
| -------------- | ------------------ | ----------------- | ------------------------ |
| Thread         | ~1.1 ms            | ~2 MB             | Limited by OS threads    |
| Future (async) | ~0.05 ms           | ~2 KB             | Tens of thousands easily |

Rust's async model wins massively in memory use and scheduling efficiency. That's why high-performance systems like hyper, tokio, and reqwest all build on it.

Why It Matters

Rust's async model gives you runtime-level performance without a built-in runtime. Everything, from polling to scheduling to waking, happens in your code, through traits and data structures, not hidden threads. You can write your own executor, design custom Wakers, or integrate async logic into embedded systems with no OS. This is the power of async without abstraction leaks.
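If you want a feel for the spawn-cost gap yourself, here's a rough micro-benchmark sketch. It assumes the tokio crate with its macros and multi-threaded runtime enabled; the task count is arbitrary, and the timings you see will differ from the table above depending on hardware and configuration.

use std::time::Instant;

#[tokio::main]
async fn main() {
    let n = 1_000;

    // OS threads: spawn them all, then join them all.
    let t0 = Instant::now();
    let threads: Vec<_> = (0..n).map(|_| std::thread::spawn(|| 1 + 1)).collect();
    for t in threads {
        t.join().unwrap();
    }
    println!("{n} threads: {:?}", t0.elapsed());

    // Async tasks: spawn them all, then await them all.
    let t1 = Instant::now();
    let tasks: Vec<_> = (0..n).map(|_| tokio::spawn(async { 1 + 1 })).collect();
    for t in tasks {
        t.await.unwrap();
    }
    println!("{n} tasks:   {:?}", t1.elapsed());
}

Roughly speaking, an async task is a small heap allocation plus a queue push, while each thread is a kernel object with its own stack, and the timings tend to reflect that.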
Key Takeaways

- A Future is a pollable state machine, not a magic promise.
- A Waker tells the executor when the Future is ready again.
- The executor runs a loop that polls Futures until completion.
- .await just desugars into “poll until ready.”
- Rust async is zero-runtime, zero-garbage-collector, zero-magic.
Final Thought

Every time you write:

let data = socket.read().await;

Remember what's actually happening:
- You’re not blocking a thread.
- You’re yielding control.
- You’re wiring a polling microstate machine that wakes itself when ready.
That's not async sugar. That's systems-level elegance, Rust style. Because under the hood, Futures and Wakers aren't abstractions; they're architecture. And Rust lets you see the gears turning.