How Rust Rewrites Device Drivers: The Real Kernel Abstractions That Work

The Backstory: Why Kernel Devs Finally Gave In

For years, Linus Torvalds pushed back against Rust in the Linux kernel. His reasoning? “Show me where C failed first.” But the reality was that C did fail — not in performance, but in safety. Every modern CVE that haunted Linux’s network, USB, or filesystem drivers shared a common theme: memory corruption from unguarded pointers.

When the Rust-for-Linux project quietly landed in the kernel tree, something remarkable happened. The old argument — “Rust can’t handle low-level code” — didn’t hold up anymore. Rust didn’t just compile device drivers. It redefined how they’re written.

The Core Idea: Safety at the Hardware Boundary

In kernel land, memory safety isn’t just nice to have — it’s life or death. A single dangling pointer in a network driver can take down your entire OS. Rust’s model gave kernel devs something they never had before:

  • Borrow-checked references, ensuring no double frees or invalid pointers.
  • Type-safe device access, using abstractions over unsafe regions.
  • No runtime overhead, since safety is verified at compile-time.
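To make the first bullet concrete, here is a plain-Rust sketch (ordinary userspace code, nothing kernel-specific) of what “borrow-checked references” buys you: the pattern behind a double free is rejected at compile time, not discovered at runtime.

fn consume(buf: Vec<u8>) {
    // `buf` is freed exactly once, right here, when it goes out of scope.
    drop(buf);
}

fn main() {
    let buf = vec![0u8; 64];
    consume(buf); // ownership moves into `consume`

    // A second call would be the moral equivalent of a double free,
    // and it does not compile: error[E0382]: use of moved value: `buf`
    // consume(buf);
}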

But what’s most interesting is how they pulled it off.
Rust didn’t “port” C drivers. It re-architected the way we think about them.

The Architecture: The Device-Driver Layer Cake

Here’s a simplified version of how Rust’s driver architecture looks inside the kernel:

+----------------------------------------------------+
|        User Space (syscalls, ioctls, etc.)         |
+----------------------------------------------------+
|        Kernel Interface (C ↔ Rust boundary)        |
+----------------------------------------------------+
|               Rust Abstraction Layer               |
|  - Safe wrappers for C APIs                        |
|  - Typed access to MMIO / IRQ                      |
|  - Ownership and lifetime tracking                 |
+----------------------------------------------------+
|         Driver Implementations (Pure Rust)         |
|  - PCI, USB, GPIO, Net, etc.                       |
|  - Uses async-safe concurrency                     |
+----------------------------------------------------+
|              Hardware Registers / DMA              |
+----------------------------------------------------+

At the heart of this is the Rust Abstraction Layer — a thin layer of glue code that bridges C’s untyped kernel world and Rust’s strict type system. This layer is where all the magic happens.

Example: Writing a Safe PCI Driver in Rust

Let’s look at a simplified snippet based on an actual PCI driver example using Rust in the kernel:

use kernel::prelude::*;
use kernel::pci::{self, PciDevice, PciDriver};

struct MyPciDriver;

// Per-device state handed back to the PCI core on a successful probe.
struct MyPciDriverData {
    regs: &'static mut [u32],
}

impl PciDriver for MyPciDriver {
    type DeviceData = MyPciDriverData;

    fn probe(device: &mut PciDevice) -> Result<Self::DeviceData> {
        // Map BAR 0 into the kernel’s address space.
        let bar = device.map_bar(0)?;
        // The one unsafe step: turning the raw BAR pointer into a slice.
        let regs = unsafe { core::slice::from_raw_parts_mut(bar.ptr(), 256) };
        // Safe MMIO access through the Rust wrapper.
        let control_reg = Mmio::<u32>::new(&mut regs[0]);
        control_reg.write(0x1);
        Ok(MyPciDriverData { regs })
    }
}

Notice the subtle difference:
Instead of writing directly to a pointer (*bar_ptr = 0x1), Rust wraps the memory region in a type-safe abstraction.
This ensures:

  • You can’t accidentally write to invalid memory.
  • You can’t access a register once it’s unmapped.
  • You can’t race two threads on the same MMIO region.
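The second guarantee falls out of lifetimes. Here is a sketch with hypothetical types (BarMapping and Reg are invented for illustration): because the register handle borrows from its mapping, the compiler refuses to let it outlive the unmap.

use core::marker::PhantomData;

// Owns the mapped BAR; unmapping happens in Drop.
struct BarMapping {
    base: *mut u32,
}

// A register handle that *borrows* from its mapping.
struct Reg<'a> {
    ptr: *mut u32,
    _mapping: PhantomData<&'a BarMapping>,
}

impl BarMapping {
    fn reg(&self, index: usize) -> Reg<'_> {
        Reg {
            // SAFETY: `index` is assumed to lie within the mapped region.
            ptr: unsafe { self.base.add(index) },
            _mapping: PhantomData,
        }
    }
}

impl Drop for BarMapping {
    fn drop(&mut self) {
        // Unmap the BAR here. Holding a Reg past this point is a compile
        // error: the borrow checker reports the mapping does not live
        // long enough.
    }
}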

The result? The compiler enforces what used to be “discipline”.

The Real Power: Safe Abstractions Over Unsafe Code

The brilliance of Rust’s kernel integration isn’t in removing unsafe. It’s in isolating it. Every driver eventually has to talk to hardware — and that’s always unsafe. But by pushing unsafe blocks to the edges and wrapping them with safe abstractions, the core driver logic becomes memory-safe and verifiable.

Here’s how that looks conceptually:

pub struct Mmio<T> {
    base: *mut T,
}

impl<T> Mmio<T> {
    // Capture a register slot as a raw pointer. This is the only place
    // callers ever hand over direct access to the underlying memory.
    pub fn new(slot: &mut T) -> Self {
        Mmio { base: slot as *mut T }
    }

    pub fn read(&self) -> T {
        // SAFETY: `base` points at a live, mapped register slot.
        unsafe { core::ptr::read_volatile(self.base) }
    }

    pub fn write(&self, value: T) {
        // SAFETY: same invariant as `read`.
        unsafe { core::ptr::write_volatile(self.base, value) }
    }
}

Now any higher-level driver can use Mmio<T> safely — no raw pointers, no undefined behavior. This layering approach mirrors how modern OS design evolved:

  • Unsafe at the very bottom.
  • Safe and composable everywhere else.
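For example, a higher-level driver can live entirely in the safe layer, modeling its register file as typed fields over Mmio<T>. The register names and bit masks below are invented for illustration:

// Hypothetical NIC register file built from the safe wrapper above.
struct NicRegs {
    control: Mmio<u32>,
    status: Mmio<u32>,
}

impl NicRegs {
    fn enable(&self) {
        // Read-modify-write with no raw pointers in sight.
        let cur = self.control.read();
        self.control.write(cur | 0x1); // bit 0: device enable (made up)
    }

    fn link_up(&self) -> bool {
        self.status.read() & 0x4 != 0 // bit 2: link state (made up)
    }
}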

The Build System & Linking: How Rust Fits In

Integrating Rust into the Linux kernel wasn’t trivial. The kernel build system (Kbuild) had to be extended to support:

  • rustc compilation units per driver
  • Link-time optimization (-Clto)
  • Zero std environment (#![no_std])
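For a feel of what one of those bare-metal compilation units looks like, here is a rough sketch of a minimal in-tree Rust module. Treat it as illustrative: the exact module! fields and Module trait signature have shifted between kernel versions.

// my_driver.rs: no libstd; `core` and the kernel crate are all you get.
use kernel::prelude::*;

module! {
    type: MyDriver,
    name: "my_driver",
    license: "GPL",
}

struct MyDriver;

impl kernel::Module for MyDriver {
    fn init(_module: &'static ThisModule) -> Result<Self> {
        pr_info!("my_driver loaded\n");
        Ok(MyDriver)
    }
}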

Each driver compiles in a bare-metal mode, using core and alloc crates only.
Any call into libc is forbidden — the kernel already is the OS. The Rust code links against the C kernel at predefined boundaries using extern "C" bindings. These are generated with bindgen or written manually as safe wrappers in rust/kernel/.

Architecture Diagram: Rust Driver Integration Flow

        ┌──────────────────────────────┐
        │     Rust Source Driver       │
        │   (Safe abstractions, MMIO)  │
        └──────────────┬───────────────┘
                       │
                       ▼
        ┌──────────────────────────────┐
        │ Rust Kernel Crates (no_std)  │
        │ - core, alloc                │
        │ - kernel::prelude            │
        └──────────────┬───────────────┘
                       │
                       ▼
        ┌──────────────────────────────┐
        │   Rust-to-C ABI Bridge       │
        │ (extern "C", FFI bindings)   │
        └──────────────┬───────────────┘
                       │
                       ▼
        ┌──────────────────────────────┐
        │      Kernel Core (C)         │
        │  - scheduler, mm, vfs, etc.  │
        └──────────────────────────────┘

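To make the bridge layer concrete, here is a hedged sketch of a single crossing. Note that my_hw_reset is a hypothetical C-side symbol, not a real kernel export:

// C side (declared, not defined here): returns a negative errno on failure.
extern "C" {
    fn my_hw_reset(dev_id: u32) -> i32;
}

// Rust side: the unsafe FFI call is wrapped exactly once, at the boundary,
// so driver code above this line never touches the raw C function.
pub fn hw_reset(dev_id: u32) -> Result<(), i32> {
    // SAFETY: assumes the C contract that any dev_id is accepted and the
    // return value is 0 on success or a negative errno on failure.
    let ret = unsafe { my_hw_reset(dev_id) };
    if ret < 0 {
        Err(ret)
    } else {
        Ok(())
    }
}
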
This bridge ensures Rust stays sandboxed inside safe boundaries but can still access all kernel facilities.

What Actually Worked (And What Didn’t)

✅ What worked beautifully:

  • Memory safety: Zero double frees or null derefs reported in early Rust drivers.
  • Type-safe MMIO: Hardware register access is now strongly typed.
  • Fewer crashes: The network driver prototypes ran for weeks without a fault.
  • Dev experience: Rust compiler errors guide you to the exact unsafe edge.

❌ What didn’t:

  • Toolchain pain: Cross-compiling with no_std + bindgen is fragile.
  • Long compile times: rustc is slower than gcc for small builds.
  • Async still awkward: The kernel lacks native async support; futures need custom executors (see the sketch after this list).
  • C interop: FFI wrappers are still evolving and can get verbose.
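On the async point: since the kernel provides no reactor or executor, driving even a trivial future means hand-rolling one. Below is a minimal core-only busy-poll executor; a real kernel executor would park or yield instead of spinning.

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: we simply re-poll in a loop.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(core::ptr::null(), &VTABLE)
}

// Drive a future to completion by polling it in a busy loop.
pub fn block_on<F: Future>(mut fut: F) -> F::Output {
    // SAFETY: `fut` is shadowed and never moved again after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // A real executor would sleep or yield here.
    }
}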

Why It Matters

Rust’s entry into kernel space isn’t about rewriting Linux in a new language. It’s about rebuilding trust at the hardware boundary. Device drivers account for over 60% of all kernel CVEs. If Rust can cut that number even by half, it’s one of the biggest security wins in OS history.

And for once, safety doesn’t come at a cost. Benchmarks from the Rust USB and net drivers showed negligible differences compared to their C counterparts — in some cases, they even outperformed them, thanks to better inlining and LTO.

Final Thoughts: Rust Is Teaching C New Tricks

The kernel has always been about pushing hardware to its limits. Rust doesn’t slow that down — it simply refuses to crash along the way. When you realize you can reboot a system without worrying about a random pointer clobbering your registers, you start to see the real value: Rust isn’t just safer. It’s reliable by design — even in the dirtiest, lowest-level code imaginable. And that’s something C, for all its glory, could never promise.

Read the full article here: https://medium.com/@theopinionatedev/how-rust-rewrites-device-drivers-the-real-kernel-abstractions-that-work-b1f56bda488a