- Rust Kernel Module: Getting Started
- Getting Started
- Dependencies
- System Requirements
- QEMU & GDB
- LLVM (or libclang)
- Rust
- Compiling Kernel
- Build initramfs
- Running on QEMU
- Debugging with GDB
- Conclusion
Rust Kernel Module: Getting Started
Linux-next has had Rust support for a while. While it's still not quite in mainline yet, the whole Rust stack in Linux is already fairly solid. From compiling to debugging, I can confirm the development workflow for Rust kernel modules is surprisingly seamless. In this post, I'm going to show you how to set up a development environment for the Linux kernel so you can build Rust modules and test them.
I know setting up a whole kernel is incredibly cumbersome, and in theory we could just build the modules against the Linux headers. In fact, the old Rust kernel module project took exactly that approach. But testing programs without any kind of sandbox is dangerous and error prone; IMHO, we should treat a sandbox as crucial for any project like this. Just bear with me through these steps. It gets easier and more fun after all of this.
Getting Started
To get the source code, we can clone the Linux repository from Rust-for-Linux organization:
$ git clone https://github.com/Rust-for-Linux/linux.git
Dependencies
Next we need to prepare all the required dependencies. The quick start guide in Documentation/rust also lists all the requirements; feel free to follow it instead for this step.
System Requirements
The first batch is, of course, everything needed to compile the kernel itself. Some distributions may already ship some of these packages. I'll only provide commands for Debian-related distributions, but others should follow the same pattern:
$ sudo apt install libncurses-dev flex bison openssl libssl-dev \
    dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf
QEMU & GDB
Next come the tools for testing and debugging: the dependencies for QEMU and GDB. Later in this post, we'll show how to run and debug the kernel by attaching GDB to QEMU.
$ sudo apt install qemu qemu-system qemu-kvm libvirt-daemon-system \
    libvirt-clients bridge-utils
$ sudo apt install gdb
LLVM (or libclang)
For simplicity, we will compile the kernel with LLVM. This is because we use bindgen to generate the Rust bindings to the kernel's C code, and bindgen requires libclang. We will compile the kernel with LLVM=1, or, if your distribution doesn't ship the full LLVM toolchain, with CC=clang. Again, most distributions should ship LLVM already, but if you're worried about missing something, run the following command:
$ sudo apt install llvm lld libclang-dev
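To see why libclang is needed, consider what bindgen does: it parses the kernel's C headers with libclang and emits matching Rust declarations, which the kernel crate then wraps in safe APIs. Here is a simplified illustration; this is not the actual generated code, and the C prototype is abbreviated:

// Given a C prototype such as:
//   unsigned long _copy_from_user(void *to, const void *from, unsigned long n);
// bindgen emits roughly this Rust declaration into the generated bindings:
extern "C" {
    pub fn _copy_from_user(
        to: *mut core::ffi::c_void,
        from: *const core::ffi::c_void,
        n: core::ffi::c_ulong,
    ) -> core::ffi::c_ulong;
}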
Rust
For your sanity, I really recommend downloading and installing Rust from the official site and removing any versions installed through other package managers. If you insist otherwise, Documentation/rust covers that too. As of this post, Linux is still pinned to a specific Rust version rather than tracking stable. To get the toolchain for that version, run the following commands:
$ rustup override set $(scripts/min-tool-version.sh rustc)
$ rustup component add rust-src  # Rust standard library source
And we also need to install bindgen:
$ cargo install --locked --version $(scripts/min-tool-version.sh bindgen) bindgen
Compiling Kernel
Alright, this is where the fun begins! We are ready to compile the Linux kernel. Before we start, let's double-check that the toolchain is set up correctly:
$ make LLVM=1 rustavailable
This uses the same logic as Kconfig to determine whether RUST_IS_AVAILABLE should be enabled. With a fresh kernel tree, you would usually start with make menuconfig to create the .config file. While we could use it to toggle Rust support and every module we want to build, we can also copy an existing config from the project's CI workflow, which gives us a configuration close to what upstream tests. In this post, I will assume you are on the x86_64 architecture. If not, take a look at its ci.yaml and choose the closest possible config for you.
$ cp .github/workflows/kernel-x86_64-debug.config .config
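For reference, the Rust-related switches in the copied .config look something like this (a sketch: the exact symbols and values depend on your tree and the config file you chose):

CONFIG_RUST=y                  # enable Rust support
CONFIG_SAMPLES=y
CONFIG_SAMPLES_RUST=y          # build the Rust sample modules
CONFIG_SAMPLE_RUST_MINIMAL=m   # e.g. samples/rust/rust_minimal.ko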
$ make LLVM=1 -j$(nproc)
Grab a coffee or go for a walk; this will take a while. Once the compilation is done, you should find the kernel image at arch/x86/boot/bzImage in the Linux source directory.
Build initramfs
Now that we have the kernel, we can just run it with QEMU, right? Not quite: we still need a root filesystem for the kernel. Starting from here there are multiple approaches, and depending on your needs they can look totally different; there's no one ring to rule them all. For example, you could use Buildroot to generate a root filesystem. For our purposes, we can just follow what the CI does and build an initramfs. If you don't know what an initramfs is, here's a quote from the Linux kernel documentation:
All 2.6 Linux kernels contain a gzipped “cpio” format archive, which is extracted into rootfs when the kernel boots up. After extracting, the kernel checks to see if rootfs contains a file “init”, and if so it executes it as PID 1. If found, this init process is responsible for bringing the system the rest of the way up, including locating and mounting the real root device (if any). If rootfs does not contain an init program after the embedded cpio archive is extracted into it, the kernel will fall through to the older code to locate and mount a root partition, then exec some variant of /sbin/init out of that.
I hope you got all that, because I'm going to introduce another tool we need: busybox. And I'll drop another quote so you know what it is:
BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system.
Whew, that's quite a read. Don't worry about not fully understanding everything; for now we only care about the setup and how to create the image. We'll come back to this if we ever need more advanced configuration. The kernel tree itself already offers a script (usr/gen_init_cpio) to create the image:
$ wget https://www.busybox.net/downloads/binaries/1.35.0-x86_64-linux-musl/busybox
$ usr/gen_init_cpio .github/workflows/qemu-initramfs.desc > qemu-initramfs.img
qemu-initramfs.desc might list modules we didn't build. Just remove those lines if you get errors.
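If you're curious, a gen_init_cpio description file is just a list of file, dir, and slink entries. The following is an illustrative sketch of what qemu-initramfs.desc might contain, not the actual file; the paths and modes are made up for the example:

# gen_init_cpio entries (see usr/gen_init_cpio for the full syntax)
dir /bin 0755 0 0
file /bin/busybox busybox 0755 0 0
slink /bin/sh /bin/busybox 0777 0 0
file /init init 0755 0 0
# lines like this are the ones to delete if you didn't build that module:
file /rust_minimal.ko samples/rust/rust_minimal.ko 0644 0 0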
Running on QEMU
Alright! This time it's for real. We are going to boot QEMU with the compiled kernel:
In case you don't know how to exit QEMU: press Ctrl+A, then X.
$ sudo qemu-system-x86_64 \
    -kernel arch/x86/boot/bzImage \
    -initrd qemu-initramfs.img \
    -M pc \
    -m 4G \
    -cpu Cascadelake-Server \
    -smp $(nproc) \
    -nographic \
    -vga none \
    -no-reboot \
    -append 'console=ttyS0'
See QEMU's Invocation documentation if you want to understand what each parameter does. If everything runs smoothly, you should see the kernel boot log scroll by on the serial console.
Hooray! You just built a Linux kernel with Rust kernel modules loaded!
Debugging with GDB
Before we end the post, there's one more thing we haven't covered. Running the kernel is one thing; debugging it is another. While it's definitely a huge topic, I'd like to give you an entry point so you at least know how to take the first step.
To enable debugging, we need to make sure the option CONFIG_GDB_SCRIPTS is on and CONFIG_DEBUG_INFO_REDUCED is off when building the kernel. If you configure with make menuconfig, you can press / to search for these options and find where to toggle them. Building the kernel with make LLVM=1 -j$(nproc) again will then produce a kernel with the symbols we need. Thanks to incremental compilation, it should take far less time to complete. The same applies later when we focus on kernel modules: we only need to recompile the modules afterwards.
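In .config terms, the result should include something like the following (these option names are real kernel config symbols; whether DEBUG_INFO is already enabled depends on the config you started from):

CONFIG_DEBUG_INFO=y
CONFIG_GDB_SCRIPTS=y
# CONFIG_DEBUG_INFO_REDUCED is not set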
Once the build finishes, you should see vmlinux and vmlinux-gdb.py in the root of the project. vmlinux is the target binary for GDB, and vmlinux-gdb.py provides predefined GDB helpers. To get the commands (lx-*) defined in vmlinux-gdb.py, we add the script to GDB's auto-load safe path. Some usage examples for these commands can be found on the kernel's GDB debugging page.
$ echo "add-auto-load-safe-path path/to/vmlinux-gdb.py" >> ~/.gdbinit
Let's run the kernel on QEMU again, but this time with a few more parameters:
$ sudo qemu-system-x86_64 \
    -kernel arch/x86/boot/bzImage \
    -initrd qemu-initramfs.img \
    -M pc \
    -m 4G \
    -cpu Cascadelake-Server \
    -smp $(nproc) \
    -nographic \
    -vga none \
    -no-reboot \
    -append 'console=ttyS0 nokaslr' \
    -s -S
We add the nokaslr boot parameter because GDB doesn't work well with KASLR enabled. We also use the -s -S combination: -s makes QEMU listen for a GDB connection on port 1234, and -S holds QEMU from booting the kernel until a debugger is attached. Now open another shell, start GDB against vmlinux, and attach it to QEMU. From here we can set breakpoints and debug the kernel:
$ gdb vmlinux
(gdb) target remote :1234  # Attach to QEMU
(gdb) hbreak start_kernel
(gdb) c
(gdb) b mm_alloc
(gdb) c
(gdb) lx-dmesg  # Display the kernel dmesg log in the GDB shell
(gdb) ...
Conclusion
Okay, we're finally done building our first kernel. While this is just the first step, I believe it's the most difficult one, and things will only get more interesting from here. In the next post, we'll start exploring the joy of writing kernel modules!
Linux kernel modules written in Rust
A collection of in-progress experimental Linux kernel modules written for the Rust for Linux project
To run the out-of-tree modules here, you'll need to run a custom kernel with the changes developed in mdaverde/linux, which is continuously rebased onto the upstream R4L fork.
This project uses just and zx for project-wide task management, but installing a specific module should just require make.
The modules listed here have only been tested on an Ubuntu 21.04 x86_64 VM.
- current — logs (dmesg) information about the task context in which the module is running (e.g. the module insert process)
- proc_iter — logs attributes of every task_struct (except swapper/0 ) currently running
- mem_layout — summarizes memory layout of the running kernel
- bsa — custom wrapper around a few of the kernel page allocation APIs and logs physical continuity
- kmalloc_box — custom alloc::Allocator (nightly) wrapped around kmalloc() and kfree() used with Box::try_new_in
mod_template/ is meant to be a starting template for future modules
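For flavor, here's roughly what a minimal Rust kernel module looks like. This sketch is modeled on the upstream rust_minimal sample rather than copied from mod_template/; the module! fields and trait names vary between R4L revisions, so treat it as illustrative:

// A minimal Rust kernel module in the style of samples/rust/rust_minimal.rs.
// Exact field syntax and trait names differ across R4L versions.
use kernel::prelude::*;

module! {
    type: HelloModule,
    name: b"hello_module",
    author: b"Your Name",
    description: b"A minimal example module",
    license: b"GPL",
}

struct HelloModule;

impl kernel::Module for HelloModule {
    fn init(_module: &'static ThisModule) -> Result<Self> {
        pr_info!("Hello from Rust! (init)\n");
        Ok(HelloModule)
    }
}

impl Drop for HelloModule {
    fn drop(&mut self) {
        pr_info!("Goodbye from Rust! (exit)\n");
    }
}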
This repo is meant to be experimental and a showcase of potential LKM functionality with Rust. This project assumes you have all the same dependencies as R4L installed and can compile/install custom kernels.
$ just --list
Available recipes:
    build module=DEFAULT_MODULE
    clean module=DEFAULT_MODULE
    create module
    default
    fmt
    rust-analyzer
    vars
$ just fmt                # runs rustfmt */*.rs
$ just build              # builds all modules
$ just build kmalloc_box  # builds specific module
$ just create new_module  # start new module
To install a specific module:
$ cd ./current
$ make KERNELDIR=/to/rust/kernel LLVM=1 modules
$ sudo insmod ./current.ko  # install module