Linux Kernel Selftests¶
The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing and booting a kernel.
Kselftest from mainline can be run on older stable kernels, and running the mainline suite offers the best coverage; several test rings run the mainline kselftest suite on stable releases. The reason is that when a new test is added to regression-test a bug in existing code, we should be able to run that test on an older kernel. Hence, it is important that tests keep working on older kernels and skip gracefully where they do not apply on newer releases.
Additional information about the Kselftest framework, and how to write new tests using it, can be found on the Kselftest wiki.
On some systems, hotplug tests could hang forever waiting for CPUs and memory to be ready to be offlined. A special hotplug target is created to run the full range of hotplug tests. By default, hotplug tests run in safe mode with a limited scope: the CPU hotplug test runs on a single CPU rather than on all hotplug-capable CPUs, and the memory hotplug test runs on 2% of hotplug-capable memory instead of 10%.
kselftest runs as a userspace process. Tests that can be written/run in userspace may wish to use the Test Harness. Tests that need to be run in kernel space may wish to use a Test Module.
Running the selftests (hotplug tests are run in limited mode)¶
$ make headers
$ make -C tools/testing/selftests
Kernel Testing Guide¶
There are a number of different tools for testing the Linux kernel, so knowing when to use each of them can be a challenge. This document provides a rough overview of their differences, and how they fit together.
Writing and Running Tests¶
The bulk of kernel tests are written using either the kselftest or KUnit frameworks. These both provide infrastructure to help make running tests and groups of tests easier, as well as providing helpers to aid in writing new tests.
If you’re looking to verify the behaviour of the kernel, particularly specific parts of it, then you’ll want to use KUnit or kselftest.
The Difference Between KUnit and kselftest¶
KUnit (KUnit — Linux Kernel Unit Testing) is an entirely in-kernel system for “white box” testing: because test code is part of the kernel, it can access internal structures and functions which aren’t exposed to userspace.
KUnit tests therefore are best written against small, self-contained parts of the kernel, which can be tested in isolation. This aligns well with the concept of ‘unit’ testing.
For example, a KUnit test might test an individual kernel function (or even a single codepath through a function, such as an error handling case), rather than a feature as a whole.
This also makes KUnit tests very fast to build and run, allowing them to be run frequently as part of the development process.
There is a KUnit test style guide which may give further pointers; see Test Style and Nomenclature.
kselftest (Linux Kernel Selftests), on the other hand, is largely implemented in userspace, and tests are normal userspace scripts or programs.
This makes it easier to write more complicated tests, or tests which need to manipulate the overall system state more (e.g., spawning processes, etc.). However, it’s not possible to call kernel functions directly from kselftest. This means that only kernel functionality which is exposed to userspace somehow (e.g. by a syscall, device, filesystem, etc.) can be tested with kselftest. To work around this, some tests include a companion kernel module which exposes more information or functionality. If a test runs mostly or entirely within the kernel, however, KUnit may be the more appropriate tool.
kselftest is therefore well suited to testing whole features, as these expose an interface to userspace which can be tested, but not their implementation details. This aligns well with ‘system’ or ‘end-to-end’ testing.
For example, all new system calls should be accompanied by kselftest tests.
Code Coverage Tools¶
The Linux Kernel supports two different code coverage measurement tools. These can be used to verify that a test is executing particular functions or lines of code. This is useful for determining how much of the kernel is being tested, and for finding corner-cases which are not covered by the appropriate test.
gcov (see Using gcov with the Linux kernel) is GCC’s coverage testing tool, which can be used with the kernel to get global or per-module coverage. Unlike KCOV, it does not record per-task coverage. Coverage data can be read from debugfs, and interpreted using the usual gcov tooling.
KCOV (see kcov: code coverage for fuzzing) is a feature which can be built into the kernel to allow capturing coverage on a per-task level. It’s therefore useful for fuzzing and other situations where information about code executed during, for example, a single syscall is useful.
Dynamic Analysis Tools¶
The kernel also supports a number of dynamic analysis tools, which attempt to detect classes of issues when they occur in a running kernel. These typically each look for a different class of bugs, such as invalid memory accesses, concurrency issues such as data races, or other undefined behaviour like integer overflows.
Some of these tools are listed below:
- kmemleak detects possible memory leaks. See Kernel Memory Leak Detector
- KASAN detects invalid memory accesses such as out-of-bounds and use-after-free errors. See The Kernel Address Sanitizer (KASAN)
- UBSAN detects behaviour that is undefined by the C standard, like integer overflows. See The Undefined Behavior Sanitizer — UBSAN
- KCSAN detects data races. See The Kernel Concurrency Sanitizer (KCSAN)
- KFENCE is a low-overhead detector of memory issues, which is much faster than KASAN and can be used in production. See Kernel Electric-Fence (KFENCE)
- lockdep is a locking correctness validator. See Runtime locking correctness validator
- There are several other pieces of debug instrumentation in the kernel, many of which can be found in lib/Kconfig.debug
These tools tend to test the kernel as a whole, and do not “pass” like kselftest or KUnit tests. They can be combined with KUnit or kselftest by running tests on a kernel with these tools enabled: you can then be sure that none of these errors are occurring during the test.
Some of these tools integrate with KUnit or kselftest and will automatically fail tests if an issue is detected.
© Copyright The kernel development community.