What Happens When Rust Meets DMA (Direct Memory Access)
When you write Rust, you feel safe. The compiler guards your memory like a loyal knight — no use-after-free, no data races, no null dereferences. But then… you meet DMA — Direct Memory Access — a hardware-level beast that says:
“I’ll just write into memory directly, thanks. No need to bother your borrow checker.” And suddenly, Rust’s guarantees start trembling.
This is the story of what happens when Rust’s ownership model collides with bare-metal hardware reality — when memory can change underneath your code, and the compiler has no clue.
A Quick Refresher: What Is DMA?
Before we dive deep, let’s recap DMA in simple terms. Normally, when a program wants to move data — say, from disk to RAM, or from RAM to a GPU — it asks the CPU to do it.
But the CPU is slow at I/O. So modern systems delegate this to a DMA controller, a hardware module that can copy data directly between memory and peripherals without CPU involvement. That’s the “direct” in Direct Memory Access.
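In firmware terms, "delegating to the DMA controller" usually means writing a source address, a destination address, and a length into a few memory-mapped registers, then setting a start bit. Here is a rough sketch with entirely made-up register addresses (real ones come from the chip's reference manual or a PAC crate):
use core::ptr::write_volatile;

// Hypothetical register addresses, for illustration only.
const DMA_SRC: *mut u32 = 0x4000_0000 as *mut u32;
const DMA_DST: *mut u32 = 0x4000_0004 as *mut u32;
const DMA_LEN: *mut u32 = 0x4000_0008 as *mut u32;
const DMA_CTRL: *mut u32 = 0x4000_000C as *mut u32;

unsafe fn kick_off_dma(src: u32, dst: u32, len: u32) {
    write_volatile(DMA_SRC, src);
    write_volatile(DMA_DST, dst);
    write_volatile(DMA_LEN, len);
    write_volatile(DMA_CTRL, 1); // hypothetical START bit
}
From that point on, the copy happens without the CPU touching a single byte.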
Here’s a mental diagram:
┌───────────┐        ┌───────────┐
│    CPU    │        │  Device   │
│  (Rust)   │        │ (e.g. NIC │
│           │        │  or Disk) │
└─────┬─────┘        └─────┬─────┘
      │                    │
      │   DMA Controller   │
      └─────────┬──────────┘
                │
          ┌─────┴─────┐
          │  Memory   │
          └───────────┘
DMA can read/write to memory while your program is running.
That’s where things get interesting.
The Ownership Collision: Rust vs. DMA
Rust assumes that if you have a &mut T, you’re the only one who can mutate that data. But DMA doesn’t care. It just writes to the same memory, bypassing CPU caches and Rust’s borrow checker entirely.
Let’s look at a small embedded example:
// A shared DMA buffer; DMA_CONTROLLER stands in for some HAL peripheral handle.
static mut BUFFER: [u8; 512] = [0; 512];

fn start_dma_transfer() {
    unsafe {
        // Hand the buffer's address to the DMA controller and start the transfer.
        DMA_CONTROLLER.start_transfer(&BUFFER as *const _ as u32, 512);
    }
}

fn process_data() {
    for byte in unsafe { &BUFFER } {
        // read from buffer after DMA
        println!("{byte}");
    }
}
Looks innocent? It’s not. Between start_dma_transfer() and process_data(), the DMA controller could be writing into that buffer. So when Rust iterates over it, it might be reading half-written data.
Worse, the compiler is free to reorder, coalesce, or even eliminate those reads, because it assumes memory isn’t being changed externally.
You just broke Rust’s core assumption — that memory doesn’t mutate behind its back.
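To make that concrete, here is a minimal sketch of the classic failure mode: polling a "transfer complete" flag that only the hardware writes. The DMA_DONE flag is hypothetical; the point is what the optimizer is allowed to do with a non-volatile read.
use core::ptr;

// Hypothetical "transfer complete" flag that only the DMA hardware writes.
static mut DMA_DONE: u8 = 0;

fn wait_naive() {
    // No Rust code ever writes DMA_DONE, so the optimizer may load it once,
    // hoist the read out of the loop, and turn this into an infinite spin
    // in release builds.
    while unsafe { DMA_DONE } == 0 {}
}

fn wait_volatile() {
    // read_volatile forces a real load from memory on every iteration.
    while unsafe { ptr::read_volatile(ptr::addr_of!(DMA_DONE)) } == 0 {}
}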
What Actually Happens Under the Hood
Let’s visualize the architecture:
[ CPU Core ]
     │
     │  Executes Rust code
     ▼
┌──────────────────────┐
│ Rust ownership model │
│ &mut T => exclusive  │
└──────────────────────┘
     │
     │  (thinks memory is stable)
     ▼
┌──────────────────────┐
│   Physical Memory    │
└──────────────────────┘
     ▲
     │  [ DMA Controller ]
     │  independently writes
     │  to the same memory
     ▼
┌──────────────────────┐
│  Peripheral Device   │
└──────────────────────┘
The CPU cache might have stale data. The DMA controller might overwrite memory while the CPU is reading it. And Rust’s compiler — relying on LLVM optimizations — might hoist or remove reads it thinks are redundant.
Result: Undefined behavior at the hardware level, even if your Rust code looks safe.
How the Pros Handle It
In real embedded or kernel-level Rust code, developers tame this with a mix of:
1. volatile Access
Rust provides core::ptr::read_volatile() and write_volatile() to tell the compiler: “Don’t optimize these reads or writes — hardware might be doing things behind your back.”
// Volatile read of the buffer's first byte; the compiler cannot elide or reorder it.
let data = unsafe { core::ptr::read_volatile(BUFFER.as_ptr()) };
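For a whole buffer, one blunt but explicit pattern is to snapshot it with element-by-element volatile reads before handing it to safe code. A rough sketch (a real driver would also check completion and handle caches):
use core::ptr;

// Copy a DMA buffer into a local slice using volatile reads, so the compiler
// cannot merge, reorder, or skip any of the loads.
unsafe fn snapshot_dma_buffer(src: *const u8, dst: &mut [u8]) {
    for (i, slot) in dst.iter_mut().enumerate() {
        *slot = ptr::read_volatile(src.add(i));
    }
}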
2. Memory Barriers
At the CPU level, barriers (Rust’s fence, which lowers to instructions like dmb/dsb on ARM) stop the compiler and the processor from reordering memory accesses around the DMA handoff. Note that barriers only order accesses; they don’t flush caches. If the buffer lives in cacheable memory, you also need explicit cache clean/invalidate operations.
use core::sync::atomic::{fence, Ordering};

start_dma_transfer();
// ... wait until the hardware reports the transfer is complete (omitted) ...
fence(Ordering::SeqCst); // our reads of the buffer must not be reordered before this point
process_data();
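On Cortex-M parts the same idea is often written with the cortex-m crate’s barrier intrinsics. A rough sketch, assuming the buffer sits in non-cacheable memory (on cached cores you would also clean/invalidate the data cache):
use cortex_m::asm;

fn receive_packet() {
    start_dma_transfer();

    // ... block on the DMA-complete interrupt or status flag (omitted) ...

    // Data Synchronization Barrier: every outstanding memory access completes
    // before execution continues, so we don't read the buffer too early.
    asm::dsb();

    process_data();
}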
3. UnsafeCell and Interior Mutability
To work within Rust’s ownership rules, DMA buffers are often wrapped in UnsafeCell<T> — Rust’s only legal way to express “this memory can change even if I have a shared reference”.
use core::cell::UnsafeCell;

struct DmaBuffer {
    data: UnsafeCell<[u8; 512]>,
}

// SAFETY: we promise that CPU access is synchronized with the DMA hardware
// (the buffer is only touched before the transfer starts or after it completes).
unsafe impl Sync for DmaBuffer {}

static DMA_BUFFER: DmaBuffer = DmaBuffer {
    data: UnsafeCell::new([0; 512]),
};
This pattern tells Rust: “Trust me — I’ll handle synchronization.” You’ve opted out of some safety, but done so explicitly.
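Reads then go through the cell’s raw pointer, typically combined with volatile access. A minimal sketch of what that might look like:
fn dma_buffer_byte(index: usize) -> u8 {
    assert!(index < 512);
    // UnsafeCell::get() hands out a raw *mut [u8; 512] without creating a &mut,
    // so we never claim the exclusive access that the DMA engine would violate.
    let base = DMA_BUFFER.data.get() as *const u8;
    unsafe { core::ptr::read_volatile(base.add(index)) }
}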
Architecture: DMA in a Rust Embedded Runtime
Let’s look at a real-world DMA design used in Rust embedded HALs (like stm32-hal):
┌───────────────────────────┐
│     Application Code      │
│  - async tasks            │
│  - state machines         │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│   DMA Abstraction Layer   │
│  - safe wrappers          │
│  - buffer ownership split │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│ Hardware Registers (HAL)  │
│  - start/stop DMA         │
│  - set buffer pointers    │
│  - configure interrupts   │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│      DMA Controller       │
│  - writes to RAM          │
│  - triggers IRQ on done   │
└───────────────────────────┘
The idea is to move ownership of the buffer from CPU to DMA safely. Some HALs even use typestates like this:
struct DmaTransfer<'a, B: Buffer> {
    buffer: &'a mut B,
}

impl<'a, B: Buffer> DmaTransfer<'a, B> {
    fn new(buffer: &'a mut B) -> Self { ... }
    fn start(self) -> TransferInProgress<'a, B> { ... }
}
So when DMA starts, you lose access to the buffer until it finishes. When it’s done, ownership is returned — a pure Rust ownership transfer that models hardware semantics.
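A minimal, self-contained sketch of that round trip (the names and the fixed 512-byte buffer are illustrative, not taken from any particular HAL):
/// While the transfer is in flight, the only handle to the buffer lives inside
/// `TransferInProgress`, so safe code simply cannot touch it.
pub struct TransferInProgress<'a> {
    buffer: &'a mut [u8; 512],
}

pub fn start_transfer<'a>(buffer: &'a mut [u8; 512]) -> TransferInProgress<'a> {
    // ... program the DMA controller with the buffer's address (omitted) ...
    TransferInProgress { buffer }
}

impl<'a> TransferInProgress<'a> {
    /// Block until the hardware signals completion, then hand the buffer back.
    pub fn wait(self) -> &'a mut [u8; 512] {
        // ... poll the done flag or wait for the interrupt, then fence (omitted) ...
        self.buffer
    }
}
Calling code reads naturally: let done = start_transfer(&mut buf).wait(); and only once wait() returns does it regain access to the buffer.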
That’s the beauty of Rust’s type system used against the chaos of hardware.
Real Pain in Practice
If you’ve worked with DMA in Rust (say, on STM32 or ESP32), you’ve probably seen weird bugs like:
- “Why is my buffer full of zeros sometimes?”
- “Why does my interrupt fire but data is garbage?”
- “Why does release mode break but debug works?”
Those are almost always due to missing barriers or stale caches.
The compiler’s optimizer assumes ownership equals exclusivity — which DMA violates. So your safe Rust code silently gets optimized into nonsensical machine code.
The Real Lesson
DMA reminds us that Rust’s safety is not magic. It’s a contract between the compiler and the programmer.
When hardware doesn’t play by those rules, you have to step in and enforce them manually — with fences, volatiles, and type-level ownership patterns.
But what’s truly beautiful? Rust lets you do this without abandoning structure. You can model DMA transfers in a way that still feels ergonomic, still type-safe — just at a lower level of abstraction.
It’s the meeting point between compiler trust and bare-metal chaos.
Final Thoughts
Rust’s borrow checker was built for CPUs, not for DMA engines. But Rust’s philosophy — making unsafe explicit and safe composable — fits surprisingly well with hardware realities.
When Rust meets DMA, it’s not a betrayal of safety. It’s a collaboration — a handshake between software determinism and hardware autonomy. And that’s the story of how Rust keeps its soul, even when memory starts moving on its own.
