
We Built a Microkernel in Rust: Here’s What Actually Worked

From JOHNWICK
Revision as of 05:09, 18 November 2025 by PC (talk | contribs)

There’s this moment every systems developer has when they stare at their bootloader, watch a blank screen flash, and whisper:

“Did I just write an OS… or a very expensive infinite loop?”

That was us — three developers, one foolish dream: building a microkernel in Rust from scratch. No libc, no POSIX, no kernel to lean on. Just cargo, bare metal, and a questionable amount of caffeine.

And the truth? Rust didn’t make it easy. But it did make it possible.

The Why: Because We Wanted to Suffer (and Learn)

We didn’t start this project to reinvent Linux. We started it because we were obsessed with the question:
Can Rust really deliver on its promise of memory safety in the lowest layers of an OS?

The microkernel architecture made sense for testing that hypothesis. Unlike monolithic kernels (like Linux), a microkernel keeps only the bare minimum in kernel space:

  • Scheduling
  • IPC (inter-process communication)
  • Memory management
  • Basic hardware drivers

Everything else — file systems, device drivers, even networking — runs in user space as separate processes.

Here’s the architecture we aimed for:

+-------------------------------------+
|          User Processes             |
|-------------------------------------|
|   File System   |   Network Stack   |
|-------------------------------------|
|         IPC (Message Passing)       |
|-------------------------------------|
|   Scheduler | Memory Manager | HAL  |
|-------------------------------------|
|                Hardware             |
+-------------------------------------+

The First Steps: Booting Without std

When you’re building a kernel, you don’t get println!(). You don’t even get a heap.
That means we had to go no_std right from the start.

#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // This is our only form of "logging" early on: halt and spin
    loop {}
}

#[no_mangle]
pub extern "C" fn _start() -> ! {
    // This is where execution begins
    loop {}
}

At this point, our kernel was just spinning forever — but that meant it booted! Once we had the CPU in our hands, the next milestone was to write to the VGA buffer. That’s right — the old-school “Hello, world” is literally writing bytes into video memory.

use core::ptr::write_volatile;

const VGA_BUFFER: *mut u8 = 0xb8000 as *mut u8;

pub fn write_char(x: usize, y: usize, c: u8, color: u8) {
    // Each cell is two bytes: the character, then its color attribute.
    let offset = (y * 80 + x) * 2;
    unsafe {
        // Volatile writes stop the compiler from eliding or reordering MMIO stores.
        write_volatile(VGA_BUFFER.add(offset), c);
        write_volatile(VGA_BUFFER.add(offset + 1), color);
    }
}

Seeing a letter appear on screen from bare-metal Rust is an emotional moment. That’s when it stopped being an experiment — and started feeling like a real OS.

The Kernel Core: Message Passing and Scheduling

The heart of a microkernel is message passing — the ability for isolated processes to talk safely.
Rust’s ownership model fits beautifully here.

We built a tiny IPC mechanism using Rust’s type system to ensure that messages were never shared mutably across threads without synchronization.

Here’s a simplified version of the idea:

pub struct Message<T> {
    sender: PID,
    payload: T,
}

pub fn send<T: Send>(pid: PID, msg: Message<T>) -> Result<(), IPCError> {
    let queue = get_process_queue(pid)?;
    queue.lock().push(msg);
    Ok(())
}

pub fn receive<T: Send>(pid: PID) -> Option<Message<T>> {
    let queue = get_process_queue(pid).ok()?;
    queue.lock().pop()
}

Every message is typed, thread-safe, and isolated. The Rust compiler ensures that once a message is sent, it’s moved — you can’t accidentally mutate or alias it afterward. That’s a massive win compared to C, where message queues are a minefield of pointer hell.

The Architecture in Rust Terms

To make sense of it all, here’s how our kernel modules map to Rust crates:


/kernel
 ├── /arch          # Architecture-specific code (x86, ARM)
 ├── /mm            # Memory management
 ├── /ipc           # Message passing
 ├── /scheduler     # Round-robin scheduler
 ├── /drivers       # Basic hardware drivers
 ├── /sys           # System calls
 └── main.rs

Each module is a self-contained crate with #![no_std] and a clean API boundary—mirroring how microkernels prefer separation of concerns.
Rust’s crate system turned out to be perfect for this kind of modularity.

The Safety Myth (and What Actually Bit Us)

Rust’s borrow checker saved us from buffer overflows, data races, and dangling pointers. But what it couldn’t save us from were logic bugs.

Here’s a classic example that haunted us for days:

fn schedule_next() -> Option<&'static mut Process> {
    let next_pid = (CURRENT_PID + 1) % MAX_PROCESSES;
    Some(&mut PROCESSES[next_pid])
}

Looks fine, right?
Except we forgot that if a process was blocked waiting for IPC, it shouldn’t have been scheduled at all. The bug wasn’t unsafe memory access — it was unsafe logic. That’s when we learned the painful truth: Rust prevents memory corruption, not mental corruption.

We eventually solved it by tracking process states:

#[derive(Copy, Clone, PartialEq)]
enum ProcessState {
    Ready,
    Running,
    Waiting,
}

fn schedule_next() -> Option<&'static mut Process> {
    for i in 0..MAX_PROCESSES {
        let pid = (CURRENT_PID + i) % MAX_PROCESSES;
        if PROCESSES[pid].state == ProcessState::Ready {
            return Some(&mut PROCESSES[pid]);
        }
    }
    None
}

The Wins That Made It Worth It

  • Zero undefined behavior. Not once did we hit memory corruption after boot.
  • Compile-time guarantees meant that if the IPC code compiled, it was memory-safe.
  • Crate modularity made testing in isolation possible — we could build and run IPC code on Linux with std and test it separately.
  • Performance was predictable. No hidden GC pauses, no data races, just raw, deterministic code.

The Pain That Almost Broke Us

  • Bootstrapping without std is brutal. Every missing symbol error hurts.
  • Unsafe blocks are inevitable. You can’t write a kernel without touching raw pointers.
  • No global allocator. We had to roll our own bump allocator:
static mut HEAP_NEXT: usize = 0x_4444_4444_0000;
static mut HEAP_END: usize = 0x_4444_4444_0000;

pub unsafe fn init_heap(size: usize) {
    HEAP_END = HEAP_NEXT + size;
}

pub unsafe fn alloc(size: usize) -> *mut u8 {
    // Bump allocation: hand out the next free address, never free.
    if HEAP_NEXT + size > HEAP_END {
        return core::ptr::null_mut(); // out of memory
    }
    let ptr = HEAP_NEXT as *mut u8;
    HEAP_NEXT += size;
    ptr
}

  • Debugging was old-school. When your screen is your log, every bug feels personal.

What We Learned (and Why It Matters)

Rust doesn’t magically make kernel dev easy — but it changes the kind of pain you deal with.
Instead of chasing segfaults, you fight lifetimes. Instead of race conditions, you fight ownership.

But here’s the real reason this matters: Rust makes OS development approachable again.

What used to be the domain of C wizards can now be explored by any developer who’s not afraid of unsafe {} blocks and late-night reboots.

Final Thoughts

When people ask us, “Would you build another OS in Rust?” the answer is yes — but with scars. Because here’s the truth:
The biggest thing Rust gave us wasn’t memory safety.
It was confidence — the feeling that even when we touched hardware, we wouldn’t break everything.

And that’s what makes Rust more than a language. It’s a safety net woven into systems programming itself.