Timer Linux kernel example

Kernel code can tell if it is running in interrupt context by calling the function in_interrupt( ), which takes no parameters and returns nonzero if the processor is currently running in interrupt context, either hardware interrupt or software interrupt.

A function related to in_interrupt( ) is in_atomic( ). Its return value is nonzero whenever scheduling is not allowed; this includes hardware and software interrupt contexts as well as any time when a spinlock is held. In the latter case, current may be valid, but access to user space is forbidden, since it can cause scheduling to happen. Whenever you are using in_interrupt( ), you should really consider whether in_atomic( ) is what you actually mean. Both functions are declared in <asm/hardirq.h>.
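
As a quick illustration, here is a minimal sketch (the helper name and its use are hypothetical, not from any driver discussed here) of how code might consult in_atomic( ) before doing something that can sleep; it is meant only to demonstrate the call, not as a pattern to rely on blindly:

#include <linux/hardirq.h>    /* in_interrupt(), in_atomic(); exact header varies by kernel version */
#include <linux/slab.h>       /* kmalloc() and the GFP_* flags */

/* Hypothetical helper: GFP_KERNEL allocations may sleep, which is
 * forbidden in atomic context, so fall back to GFP_ATOMIC there. */
static void *my_alloc_buffer(size_t size)
{
    if (in_atomic())                      /* interrupt context, or a spinlock is held */
        return kmalloc(size, GFP_ATOMIC);
    return kmalloc(size, GFP_KERNEL);     /* sleeping is allowed here */
}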

One other important feature of kernel timers is that a task can reregister itself to run again at a later time. This is possible because each timer_list structure is unlinked from the list of active timers before being run and can, therefore, be immediately re-linked elsewhere. Although rescheduling the same task over and over might appear to be a pointless operation, it is sometimes useful. For example, it can be used to implement the polling of devices.

It’s also worth knowing that in an SMP system, the timer function is executed by the same CPU that registered it, to achieve better cache locality whenever possible. Therefore, a timer that reregisters itself always runs on the same CPU.

An important feature of timers that should not be forgotten, though, is that they are a potential source of race conditions, even on uniprocessor systems. This is a direct result of their being asynchronous with other code. Therefore, any data structures accessed by the timer function should be protected from concurrent access, either by being atomic types or by using spinlocks.
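
As a sketch only (hypothetical names, using the 2.6-era timer callback signature described in this section), data shared between a timer function and process context could be protected like this:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_count;

/* Timer function: already runs in atomic (softirq) context,
 * so the plain spin_lock variant is sufficient here. */
static void my_timer_fn(unsigned long arg)
{
    spin_lock(&my_lock);
    shared_count++;
    spin_unlock(&my_lock);
}

/* Process context: use the _bh variant so the timer (a bottom half)
 * cannot fire on this CPU while the lock is held. */
static int my_read_count(void)
{
    int value;

    spin_lock_bh(&my_lock);
    value = shared_count;
    spin_unlock_bh(&my_lock);
    return value;
}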

7.4.1. The Timer API

The kernel provides drivers with a number of functions to declare, register, and remove kernel timers. The following excerpt shows the basic building blocks:

#include <linux/timer.h>

struct timer_list {
    /* ... */
    unsigned long expires;
    void (*function)(unsigned long);
    unsigned long data;
};

void init_timer(struct timer_list *timer);
struct timer_list TIMER_INITIALIZER(_function, _expires, _data);

void add_timer(struct timer_list * timer);
int del_timer(struct timer_list * timer);

The data structure includes more fields than the ones shown, but those three are the ones that are meant to be accessed from outside the timer code itself. The expires field represents the jiffies value when the timer is expected to run; at that time, the function stored in function is called with data as an argument. If you need to pass multiple items in the argument, you can bundle them as a single data structure and pass a pointer cast to unsigned long, a safe practice on all supported architectures and pretty common in memory management (as discussed in Chapter 15). The expires value is not a jiffies_64 item because timers are not expected to expire very far in the future, and 64-bit operations are slow on 32-bit platforms.

The structure must be initialized before use. This step ensures that all the fields are properly set up, including the ones that are opaque to the caller. Initialization can be performed by calling init_timer or assigning TIMER_INITIALIZER to a static structure, according to your needs. After initialization, you can change the three public fields before calling add_timer . To disable a registered timer before it expires, call del_timer .
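
Putting those calls together, a minimal sketch (all names hypothetical; the jit code below does essentially the same thing) of declaring, arming, and removing a timer with this API might look like the following:

#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_dev {
    struct timer_list timer;
    int ticks;                                    /* example payload for the callback */
};

static void my_timeout(unsigned long arg)
{
    struct my_dev *dev = (struct my_dev *)arg;    /* unpack the bundled argument */
    dev->ticks++;
}

static void my_start(struct my_dev *dev)
{
    init_timer(&dev->timer);                      /* set up the opaque fields */
    dev->timer.function = my_timeout;
    dev->timer.data = (unsigned long)dev;         /* pointer passed as unsigned long */
    dev->timer.expires = jiffies + HZ;            /* one second from now */
    add_timer(&dev->timer);
}

static void my_stop(struct my_dev *dev)
{
    del_timer(&dev->timer);                       /* disable it if still pending */
}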


The jit module includes a sample file, /proc/jitimer (for "just in timer"), that returns one header line and six data lines. The data lines represent the current environment where the code is running; the first one is generated by the read file operation and the others by a timer. The following output was recorded while compiling a kernel:

phon% cat /proc/jitimer
    time   delta  inirq    pid   cpu command
33565837     0      0     1269    0  cat
33565847    10      1     1271    0  sh
33565857    10      1     1273    0  cpp0
33565867    10      1     1273    0  cpp0
33565877    10      1     1274    0  cc1
33565887    10      1     1274    0  cc1

In this output, the time field is the value of jiffies when the code runs, delta is the change in jiffies since the previous line, inirq is the Boolean value returned by in_interrupt , pid and command refer to the current process, and cpu is the number of the CPU being used (always 0 on uniprocessor systems).

If you read /proc/jitimer while the system is unloaded, you’ll find that the context of the timer is process 0, the idle task, which is called "swapper" mainly for historical reasons.

The timer used to generate /proc/jitimer data is run every 10 jiffies by default, but you can change the value by setting the tdelay (timer delay) parameter when loading the module.
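
For reference, tdelay is an ordinary module parameter; a declaration along these lines is a reasonable sketch of it (the actual jit source may differ in details such as the permission bits):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Default timer delay in jiffies; can be overridden at load time,
 * e.g.:  insmod jit.ko tdelay=50  */
static int tdelay = 10;
module_param(tdelay, int, 0);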

The following code excerpt shows the part of jit related to the jitimer timer. When a process attempts to read our file, we set up the timer as follows:

unsigned long j = jiffies;

/* fill the data for our timer function */
data->prevjiffies = j;
data->buf = buf2;
data->loops = JIT_ASYNC_LOOPS;

/* register the timer */
data->timer.data = (unsigned long)data;
data->timer.function = jit_timer_fn;
data->timer.expires = j + tdelay; /* parameter */
add_timer(&data->timer);

/* wait for the buffer to fill */
wait_event_interruptible(data->wait, !data->loops);

The actual timer function looks like this:

void jit_timer_fn(unsigned long arg)
{
    struct jit_data *data = (struct jit_data *)arg;
    unsigned long j = jiffies;

    data->buf += sprintf(data->buf, "%9li %3li %i %6i %i %s\n",
                         j, j - data->prevjiffies, in_interrupt() ? 1 : 0,
                         current->pid, smp_processor_id(), current->comm);

    if (--data->loops) {
        data->timer.expires += tdelay;
        data->prevjiffies = j;
        add_timer(&data->timer);
    } else {
        wake_up_interruptible(&data->wait);
    }
}

The timer API includes a few more functions than the ones introduced above. The following set completes the list of kernel offerings:

int mod_timer(struct timer_list *timer, unsigned long expires);

Updates the expiration time of a timer, a common task for which a timeout timer is used (again, the motor-off floppy timer is a typical example). mod_timer can be called on inactive timers as well, where you normally use add_timer .
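
A sketch of that retriggering pattern, reusing the hypothetical struct my_dev from the earlier sketch (the timeout length is made up):

#include <linux/timer.h>
#include <linux/jiffies.h>

#define MY_TIMEOUT_JIFFIES (3 * HZ)    /* hypothetical three-second timeout */

/* Called whenever the device shows activity: push the timeout further
 * into the future. mod_timer() works whether or not the timer is
 * currently pending. */
static void my_touch_timeout(struct my_dev *dev)
{
    mod_timer(&dev->timer, jiffies + MY_TIMEOUT_JIFFIES);
}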

int del_timer_sync(struct timer_list *timer);

Works like del_timer , but also guarantees that when it returns, the timer function is not running on any CPU. del_timer_sync is used to avoid race conditions on SMP systems and is the same as del_timer in UP kernels. This function should be preferred over del_timer in most situations. This function can sleep if it is called from a nonatomic context but busy waits in other situations. Be very careful about calling del_timer_sync while holding locks; if the timer function attempts to obtain the same lock, the system can deadlock. If the timer function reregisters itself, the caller must first ensure that this reregistration will not happen; this is usually accomplished by setting a "shutting down" flag, which is checked by the timer function.
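
A sketch of that "shutting down" flag pattern for a self-rearming timer (hypothetical names, older API; a real driver would also need appropriate locking or memory barriers around the flag):

#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_poll_dev {
    struct timer_list timer;
    int shutting_down;                 /* set before calling del_timer_sync() */
};

static void my_poll_fn(unsigned long arg)
{
    struct my_poll_dev *dev = (struct my_poll_dev *)arg;

    /* ... poll the hardware here ... */

    if (!dev->shutting_down) {         /* do not rearm once teardown has begun */
        dev->timer.expires = jiffies + HZ;
        add_timer(&dev->timer);
    }
}

static void my_poll_stop(struct my_poll_dev *dev)
{
    dev->shutting_down = 1;            /* prevent further reregistration... */
    del_timer_sync(&dev->timer);       /* ...then wait for any running instance */
}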


int timer_pending(const struct timer_list * timer);

Returns true or false to indicate whether the timer is currently scheduled to run by reading one of the opaque fields of the structure.
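
For example (hypothetical, again reusing struct my_dev from the sketch above):

/* Arm the timer only if it is not already scheduled; if you want to
 * replace the expiration unconditionally, mod_timer() is the right call. */
static void my_arm_once(struct my_dev *dev, unsigned long when)
{
    if (!timer_pending(&dev->timer)) {
        dev->timer.expires = when;
        add_timer(&dev->timer);
    }
}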

7.4.2. The Implementation of Kernel Timers

Although you won’t need to know how kernel timers are implemented in order to use them, the implementation is interesting, and a look at its internals is worthwhile.

The implementation of the timers has been designed to meet the following requirements and assumptions:

  • Timer management must be as lightweight as possible.
  • The design should scale well as the number of active timers increases.
  • Most timers expire within a few seconds or minutes at most, while timers with long delays are pretty rare.
  • A timer should run on the same CPU that registered it.

The solution devised by kernel developers is based on a per-CPU data structure. The timer_list structure includes a pointer to that data structure in its base field. If base is NULL, the timer is not scheduled to run; otherwise, the pointer tells which data structure (and, therefore, which CPU) runs it. Per-CPU data items are described in Section 8.5.

Whenever kernel code registers a timer (via add_timer or mod_timer ), the operation is eventually performed by internal_add_timer (in kernel/timer.c), which, in turn, adds the new timer to a doubly linked list of timers within a "cascading table" associated with the current CPU.

The cascading table works like this: if the timer expires within the next 0 to 255 jiffies, it is added to one of the 256 lists devoted to short-range timers using the least significant bits of the expires field. If it expires farther in the future (but before 16,384 jiffies), it is added to one of 64 lists based on bits 9-14 of the expires field. For timers expiring even farther, the same trick is used for bits 15-20, 21-26, and 27-31. Timers with an expires field pointing still farther in the future (something that can happen only on 64-bit platforms) are hashed with a delay value of 0xffffffff, and timers with expires in the past are scheduled to run at the next timer tick. (A timer that is already expired may sometimes be registered in high-load situations, especially if you run a preemptible kernel.)
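
As an illustration only (not the actual kernel code; see internal_add_timer in kernel/timer.c), the bucket selection boils down to a few compares, shifts, and masks on expires. Note that the comments below count bits from 0, whereas the paragraph above counts them starting at 1:

#define TVR_BITS 8                       /* 256 short-range lists       */
#define TVN_BITS 6                       /*  64 lists per farther level */

/* Return which level (0-4) of the cascading table a timer with the
 * given expires value would land in, relative to "now". */
static int timer_level(unsigned long expires, unsigned long now)
{
    unsigned long delta = expires - now;

    if (delta < (1UL << TVR_BITS))
        return 0;                        /* indexed by bits 0-7 of expires */
    if (delta < (1UL << (TVR_BITS + TVN_BITS)))
        return 1;                        /* bits 8-13  */
    if (delta < (1UL << (TVR_BITS + 2 * TVN_BITS)))
        return 2;                        /* bits 14-19 */
    if (delta < (1UL << (TVR_BITS + 3 * TVN_BITS)))
        return 3;                        /* bits 20-25 */
    return 4;                            /* bits 26-31 and beyond */
}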

When __run_timers is fired, it executes all pending timers for the current timer tick. If jiffies is currently a multiple of 256, the function also rehashes one of the next-level lists of timers into the 256 short-term lists, possibly cascading one or more of the other levels as well, according to the bit representation of jiffies.

This approach, while exceedingly complex at first sight, performs very well both with few timers and with a large number of them. The time required to manage each active timer is independent of the number of timers already registered and is limited to a few logic operations on the binary representation of its expires field. The only cost associated with this implementation is the memory for the 512 list heads (256 short-term lists and 4 groups of 64 more lists)—i.e., 4 KB of storage.

The function __run_timers , as shown by /proc/jitimer, is run in atomic context. In addition to the limitations we already described, this brings in an interesting feature: the timer expires at just the right time, even if you are not running a preemptible kernel and the CPU is busy in kernel space. You can see what happens when you read /proc/jitbusy in the background and /proc/jitimer in the foreground. Although the system appears to be locked solid by the busy-waiting system call, the kernel timers still work fine.

Keep in mind, however, that a kernel timer is far from perfect, as it suffers from jitter and other artifacts induced by hardware interrupts, as well as other timers and other asynchronous tasks. While a timer associated with simple digital I/O can be enough for simple tasks like running a stepper motor or other amateur electronics, it is usually not suitable for production systems in industrial environments. For such tasks, you’ll most likely need to resort to a real-time kernel extension.


How to use timers in Linux kernel device drivers?

I want to implement a counter in Linux device drivers which increments after every fixed interval of time. I want to do this with the help of timers. A sample code snippet would be very useful.

How often do you want to increment this? Why do you even need this? (There are some counters here already, e.g. jiffies.)

@Pat The OP is asking how to create a timer in kernel space, while setitimer sets a timer in user space.

3 Answers

There is a small example of how to use Linux kernel timers (I’ve included it here for convenience; the comments are my own, and I removed the printk messages):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/timer.h>

MODULE_LICENSE("GPL");

static struct timer_list my_timer;

void my_timer_callback(unsigned long data)
{
    /* do your timer stuff here */
}

int init_module(void)
{
    /* setup your timer to call my_timer_callback */
    setup_timer(&my_timer, my_timer_callback, 0);

    /* setup timer interval to 200 msecs */
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(200));

    return 0;
}

void cleanup_module(void)
{
    /* remove kernel timer when unloading module */
    del_timer(&my_timer);
    return;
}

My "edit queue is full", so I can’t post this fixed link to the IBM blog post: developer.ibm.com/tutorials/l-timers-list

Around the Linux kernel 4.15 release, void setup_timer(timer, function, data); became obsolete, with the intent to remove it completely.

Instead, we now have to use:

void timer_setup(struct timer_list *timer,
                 void (*callback)(struct timer_list *),
                 unsigned int flags);

This can be found in the linux/timer.h file.

Here’s a full example of module_with_timer.c

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/timer.h>

MODULE_LICENSE("GPL");

static struct timer_list my_timer;

void my_timer_callback(struct timer_list *timer)
{
    printk(KERN_ALERT "This line is printed after 5 seconds.\n");
}

static int init_module_with_timer(void)
{
    printk(KERN_ALERT "Initializing a module with timer.\n");

    /* Setup the timer for initial use. Look in linux/timer.h for this function */
    timer_setup(&my_timer, my_timer_callback, 0);
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(5000));

    return 0;
}

static void exit_module_with_timer(void)
{
    printk(KERN_ALERT "Goodbye, cruel world!\n");
    del_timer(&my_timer);
}

module_init(init_module_with_timer);
module_exit(exit_module_with_timer);
And a Makefile to build it:

obj-m = module_with_timer.o

# Get the current kernel version number
KVERSION = $(shell uname -r)

all:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean

Note: In real-life programming, it is better to check the version of the kernel we are compiling for and then use the appropriate API to set up and start the timer.
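
For example, a compile-time check along these lines (a sketch keyed to the 4.15 cutoff mentioned above) lets one module build against both the old and the new interface:

#include <linux/version.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;

#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)
static void my_timer_callback(struct timer_list *t)
{
    /* new-style callback: the timer itself is passed in */
}
#else
static void my_timer_callback(unsigned long data)
{
    /* old-style callback: an unsigned long cookie is passed in */
}
#endif

static void my_timer_start(void)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)
    timer_setup(&my_timer, my_timer_callback, 0);
#else
    setup_timer(&my_timer, my_timer_callback, 0);
#endif
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(200));
}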

