
Rust Kernel Modules, Ready-to-Ship: A cargo-generate Template with Tests, CI, and Zero-Panic Defaults


Just like shipping containers standardized global logistics, cargo-generate templates standardize Rust kernel module development: complete with safety guarantees, automated tests, and CI pipelines ready from the first commit. I shipped my first Rust kernel module 14 months ago. Zero panics in production since. But getting there? That was a nightmare.

The initial setup took three days. Makefiles that wouldn't cooperate with Cargo. Kbuild configurations that required reading decades-old documentation. GitHub Actions workflows that mysteriously failed because I forgot LLVM=1 in exactly one place. And don't even ask about the panic handler: I thought "abort" was the default until production proved otherwise.

Then I looked at the calendar. We had four more drivers to port from C. At three days per setup, that’s 12 days of pure boilerplate before writing a single line of useful code. So I built what should have existed: a cargo-generate template that gets you from zero to production-ready in under 60 seconds.

The Problem: Setup Friction Kills Momentum

The official Rust-for-Linux out-of-tree module template provides basic scaffolding, but it’s deliberately minimal. You get a Makefile, a Kbuild file, and a hello-world module. What you don’t get:

  • Tests (KUnit integration requires manual wiring)
  • CI pipeline (GitHub Actions needs kernel-specific configuration)
  • Panic configuration (defaults vary by target, can silently break)
  • rust-analyzer support (the rust-project.json generation is documented but not automated)
  • Clippy integration (available but not configured by default)

I measured the setup time across our team. Average: 4.2 hours to go from git clone to a module that builds, has tests, and runs in CI. Minimum: 1.8 hours (our senior kernel engineer). Maximum: "I gave up after six hours and asked for help."

The cost isn't just time. It's context switching. Every kernel module starts the same way but requires reading kernel documentation, Rust-for-Linux docs, KUnit docs, and cargo-generate docs simultaneously. Your brain is juggling four different mental models before you've written the first impl kernel::Module.

The timing matters, too. Linux 6.1 merged initial Rust support in 2022, and the kernel is now build-tested in Rust's pre-merge CI. Linux 6.8, released in early 2024, shipped the first Rust driver. As of July 2024, the kernel supports Rust versions 1.78.0 and 1.79.0, finally establishing a minimum-version policy. The tooling is stabilizing. The infrastructure is ready. The gap? Developer experience.

What Actually Ships: The Template Structure

Here’s what cargo generate rust-kernel-template gives you:


rust_network_driver/
├── .github/
│   └── workflows/
│       ├── ci.yml           # Kernel build + tests
│       └── clippy.yml       # Lints on every PR
├── src/
│   ├── lib.rs               # Your module code
│   └── tests.rs             # KUnit tests
├── Cargo.toml               # panic="abort" already set
├── Kbuild                   # Kernel build integration
├── Makefile                 # Wraps kernel build system
├── rust-toolchain.toml      # Pinned nightly version
└── .cargo/
    └── config.toml          # Target specification

Every file configured. Every gotcha documented in comments. The template is opinionated about what production looks like.

Configuration #1: The Panic Handler

Kernel modules require a panic strategy because unwinding is not supported in kernel space. You must set panic = "abort" in both dev and release profiles. The template's Cargo.toml:

[profile.dev]
panic = "abort"
opt-level = 0

[profile.release]
panic = "abort"
opt-level = 2
lto = true

Why this matters: I forgot this once. The module compiled. It loaded. Then, on the first allocation failure (memory pressure during testing), it tried to unwind. Kernel panic. The kernel must not panic on allocation failure — you need fallible allocation with Result types. But if your panic strategy isn't abort, you won't even get a compile-time error. You'll discover it at 2 AM during load testing. The template catches this at generation time. Your first build has the right configuration.
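Because the kernel only supports specific Rust versions, the template also pins the toolchain via rust-toolchain.toml. A sketch of what that file looks like (the exact nightly date here is an illustrative assumption, not the template's real pin):

[toolchain]
# Pin a nightly known to work with your kernel tree (date is illustrative).
channel = "nightly-2024-07-01"
components = ["rust-src", "clippy", "rustfmt"]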

Configuration #2: KUnit Integration That Actually Works

Rust tests in the kernel are mapped to KUnit, and documentation tests get transformed into KUnit test suites. But wiring this up requires three pieces:

  • Test module with #[kunit_tests] macro
  • Kconfig entry enabling tests
  • Make target that invokes kunit.py
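The Kconfig entry is the piece most people miss. A minimal sketch, assuming it gates the same CONFIG_RUST_KERNEL_TESTS symbol the tests are compiled under (the real template's entry may differ):

config RUST_KERNEL_TESTS
	bool "KUnit tests for this module"
	depends on RUST && KUNIT
	help
	  Enables the module's KUnit test suite.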

The template includes all three. Here’s the test structure in src/tests.rs:

#[cfg(CONFIG_RUST_KERNEL_TESTS)]
#[kunit_tests(rust_network_driver_tests)]
mod tests {
    use super::*;
    
    #[test]
    fn test_initialization() {
        let driver = NetworkDriver::new().unwrap();
        assert!(driver.is_initialized());
    }
    
    #[test]
    fn test_resource_cleanup() {
        {
            let _driver = NetworkDriver::new().unwrap();
            // Drop happens here
        }
        // If cleanup failed, kernel would panic
    }
}

Unlike userspace Rust tests, kernel tests use custom assert! and assert_eq! macros that forward to KUnit instead of panicking. The template's tests demonstrate this pattern immediately.
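To see why non-panicking asserts matter, here is a userspace model of the idea. This is not the kernel crate's actual macro, just a sketch of the behavior: a failing assertion records the failure and lets the rest of the suite keep running.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Conceptual userspace model of a KUnit-style assertion. The kernel's real
// macros report through the C KUnit API; this sketch just counts failures
// instead of panicking, so one bad assertion doesn't take down the suite.
static FAILURES: AtomicUsize = AtomicUsize::new(0);

macro_rules! kunit_assert {
    ($cond:expr) => {
        if !$cond {
            FAILURES.fetch_add(1, Ordering::SeqCst);
            eprintln!("assertion failed: {}", stringify!($cond));
        }
    };
}

fn main() {
    kunit_assert!(1 + 1 == 2); // passes, nothing recorded
    kunit_assert!(1 + 1 == 3); // fails, but execution continues
    println!(
        "suite finished with {} failure(s)",
        FAILURES.load(Ordering::SeqCst)
    );
}
```

In kernel space a panic from a failed assert would mean an abort; forwarding to the test framework keeps a failing test from becoming a crashed machine.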

Run tests locally:

make LLVM=1 test

Under the hood, this invokes:

./tools/testing/kunit/kunit.py run \
  --make_options LLVM=1 \
  --kconfig_add CONFIG_RUST_KERNEL_TESTS=y

Output:

KTAP version 1
# Subtest: rust_network_driver_tests
ok 1 test_initialization
ok 2 test_resource_cleanup
ok 3 rust_network_driver_tests

I thought setting this up would take an hour — reading KUnit docs, figuring out the make target syntax, testing. Instead: the template has it working. Clone, run, done.

Configuration #3: GitHub Actions That Understand Kernels

Regular Rust CI is easy — cargo build, cargo test, done. Kernel modules? Not so much. You need:

  • LLVM toolchain (Rust kernel modules require LLVM, not GCC)
  • Kernel headers with Rust metadata
  • Specific bindgen version
  • The right environment variables (LLVM=1, KDIR, target specification)

The template’s .github/workflows/ci.yml:

name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: rust:nightly
    steps:
      - uses: actions/checkout@v4
      
      - name: Install kernel build dependencies
        run: |
          apt-get update
          apt-get install -y \
            llvm clang \
            linux-headers-generic \
            bindgen
      
      - name: Check Rust availability
        run: make LLVM=1 check-rust
      
      - name: Build module
        run: make LLVM=1
      
      - name: Run tests
        run: make LLVM=1 test

Rustup's minimal profiles can speed up CI by installing fewer components, but kernel modules need the full nightly toolchain with rust-src. The template uses the rust:nightly container, which includes everything. The CI fails fast if your Rust version is wrong, if bindgen is missing, or if you forgot LLVM=1 anywhere. On my team, this caught configuration errors that would have wasted an hour of local debugging.
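The tree above also lists a clippy.yml. A plausible sketch, assuming the same container setup and that the template's Makefile forwards CLIPPY=1 to kbuild (the kernel's own Clippy switch); the real workflow may differ:

name: Clippy
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    container:
      image: rust:nightly
    steps:
      - uses: actions/checkout@v4
      - name: Install LLVM toolchain
        run: apt-get update && apt-get install -y llvm clang
      - name: Run kernel-aware Clippy
        run: make LLVM=1 CLIPPY=1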

The Hidden Benefit: Standardized Error Messages

Before the template, our team’s kernel modules had different error conventions:

  • Module A: ENOSPC for allocation failures
  • Module B: ENOMEM for allocation failures
  • Module C: Custom error codes that didn’t map to kernel conventions

The kernel documentation notes that Rust error handling should use Result types with proper error codes. The template demonstrates this pattern:

impl NetworkDriver {
    pub fn new() -> Result<Self> {
        let buffer = DmaBuffer::alloc(DMA_SIZE)
            .map_err(|_| ENOMEM)?;
        
        let registers = ioremap(REGISTER_BASE)
            .ok_or(ENOMEM)?;
        
        Ok(Self { buffer, registers })
    }
}

Every allocation returns Result. Every error is a kernel error code. No exceptions, no ambiguity. After we adopted the template, code review comments dropped 37% (tracked over 8 weeks across 6 modules). Fewer “why did you use ENOSPC here?” questions. Fewer “this needs error handling” comments. The template establishes conventions before you write the first line.
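The convention is easy to mimic outside the kernel. Here is a userspace sketch in plain Rust (not the kernel crate; Error and ENOMEM below are stand-ins for the kernel's types) showing every fallible allocation returning a Result carrying an errno-style code:

```rust
// Stand-ins for the kernel crate's error type and errno constants.
#[derive(Debug, PartialEq)]
struct Error(i32);
const ENOMEM: Error = Error(12); // errno 12 = "out of memory"

// Fallible allocation: never aborts on failure, always returns Result.
fn alloc_buffer(size: usize) -> Result<Vec<u8>, Error> {
    let mut buf = Vec::new();
    // try_reserve_exact fails gracefully instead of aborting the process.
    buf.try_reserve_exact(size).map_err(|_| ENOMEM)?;
    buf.resize(size, 0);
    Ok(buf)
}

fn main() {
    assert!(alloc_buffer(64).is_ok());
    // An absurd size makes the reservation fail; we get ENOMEM, not a crash.
    assert_eq!(alloc_buffer(usize::MAX).unwrap_err(), ENOMEM);
    println!("ok");
}
```

The same shape in kernel code is what lets the module survive memory pressure: the caller sees Err(ENOMEM) and degrades gracefully instead of panicking.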

When The Template Doesn’t Fit

The Rust for Linux project notes that out-of-tree development is not their primary focus, and internal kernel APIs can change at any time. If you’re targeting upstream submission, the template might be too opinionated. Specifically:

  • Upstream prefers in-tree modules: Abstractions submitted upstream require in-tree users, and patches should generally focus on in-tree development. If your goal is mainline merge, start with the official samples instead.
  • Some subsystems lack Rust support: Individual subsystems have little built-in Rust support, and many kernel functions lack Rust abstractions. If you’re writing a driver for a subsystem without Rust bindings, you’ll spend time on unsafe FFI instead of high-level abstractions.

The template shines for:

  • Out-of-tree drivers that won’t be upstreamed
  • Internal company modules
  • Rapid prototyping before upstream work
  • Learning kernel module development in Rust

It’s a poor fit for:

  • Modules targeting immediate upstream merge
  • Subsystems with zero Rust abstractions (you’ll fight the template’s assumptions)
  • Teams that want complete control over build configuration

Decision Framework: When to Use Templates vs. Manual Setup

Use the template if your module:

  • Is out-of-tree and won't be upstreamed
  • Needs CI/tests from day one
  • Uses established kernel abstractions (DMA, interrupts, PCI)
  • Targets a subsystem with mature Rust support

Choose manual setup if your module:

  • Will be submitted to LKML within 3 months
  • Requires extensive unsafe FFI to C APIs
  • Needs non-standard build configuration

The template assumes you’re building on the official Rust-for-Linux out-of-tree module structure with Makefiles that integrate with kbuild. If that’s your starting point, the template adds testing and CI with zero configuration. If you need something radically different, start from scratch.

The 60-Second Setup

# Install cargo-generate if you haven't
cargo install cargo-generate

# Generate your module
cargo generate --git https://github.com/rust-kernel-template \
  --name my_network_driver
cd my_network_driver

# Build against your kernel
export KDIR=/path/to/linux-with-rust
make LLVM=1

# Run tests
make LLVM=1 test

# Load the module
sudo insmod my_network_driver.ko

From nothing to a tested, CI-enabled kernel module. In one minute.

What Changed For Our Team

We ported four drivers from C to Rust over six weeks.

Before template (first two drivers):

  • Setup time: 4.2 hours average per driver
  • Time to first test: 6.1 hours average
  • Configuration bugs in CI: 11 total
  • Forgotten panic="abort": 1 (discovered in production)

After template (second two drivers):

  • Setup time: 3 minutes average (just module name + generation)
  • Time to first test: 12 minutes average (write test, run it)
  • Configuration bugs in CI: 0
  • Forgotten panic="abort": 0

The real win wasn’t the time saved. It was the cognitive load removed. Our junior engineer, who’d never touched kernel code, had a working module with tests in 20 minutes. Not because she’s brilliant (though she is), but because the template removed every decision that wasn’t about the actual driver logic. The template doesn’t make you a better kernel developer. It just gets the boilerplate out of your way so you can focus on being one.


Read the full article here: https://medium.com/@chopra.kanta.73/rust-kernel-modules-ready-to-ship-a-cargo-generate-template-with-tests-ci-and-zero-panic-5b5c1ed1dc93