Linux lowlatency vs generic

Why choose a low latency kernel over a generic or real-time kernel?

After installing Ubuntu Studio 12.04, I found that it uses a low-latency kernel. I searched for an explanation, and for how to change it back to a real-time or generic one, but it seems this part of Linux hasn't been covered in much detail. Q: Why choose a low-latency kernel over a generic or real-time one? PS: I have already read the answers from this question and this post.

5 Answers

  • If you do not require low latency for your system then please use the -generic kernel.
  • If you need a low-latency system (e.g. for recording audio) then please use the -preempt kernel as a first choice. It reduces latency without sacrificing power-saving features, and is available only for 64-bit systems (also called amd64).
  • If the -preempt kernel does not provide low enough latency for your needs (or you have a 32-bit system) then you should try the -lowlatency kernel.
  • If the -lowlatency kernel isn't enough then you should try the -rt kernel.
  • If the -rt kernel isn't stable enough for you then you should try the -realtime kernel.

So it depends on what you will do with your studio distro. For most users who just need fast end-user response times, -generic will do fine; for others, who need to do professional video editing where even a single dropped frame is unacceptable, the real-time kernel is needed.

For a more exhaustive, easy-to-understand treatment, read this blog post.

Well, the tests mentioned there speak for themselves. If the Ubuntu team chose the lowlatency kernel in the first place, there must be a reason for it. So, you wanted to know the differences; now you do. Problem solved?

No, I don't think the problem is solved. If your answer does anything, it only increases my curiosity.

That blog post doesn't present any facts; it's only theory. It's the way it works, actually: the processor "stops" more frequently to see whether there are processes requiring immediate attention. That means those processes will be executed before the others, so you won't skip frames when encoding, or have huge delays between mouse clicks and enemy deaths. It doesn't mean that all processes will finish sooner: the CPU actually loses a bigger portion of its time deciding which process will run next, and doing the context switch. So the total execution time is longer, and that's why no one runs a preemptible kernel on web-server or database machines. But a preemptible 300 Hz (or even 1000 Hz) kernel is the best choice for game servers.

But nowadays processors have many cores, so when only a few processes require attention they can easily be allocated to a different core rather than waiting for a core to become free.

(Stack Exchange asks me for references/personal experience: I'm an electronic engineer and bloodthirsty noob-gamer maintaining several game servers at http://www.gamezoo.it .)

So, as a rule of thumb, I'd say: if your processor is a powerful, high-frequency, number-crunching quad-core AND you don't usually open tons of web pages while encoding/decoding/gaming (huh), you could just try the -generic (or i686, or amd64 if they exist) kernel and get the highest possible throughput (i.e., the raw number-crunching the processor is able to do). If you experience problems (they should really be minor) or your machine is slightly less powerful than the top of the market, go for -preempt.


If you're on a low-end machine with only one or two cores, then try -lowlatency. You could also try -realtime, but you'll find that it tends to block processes until the "real-time" ones have finished their job. I believe the realtime kernel isn't the "vanilla" one, but has the CONFIG_PREEMPT_RT patch applied. I think realtime kernels are only for those who have to run a single application on embedded systems; usual desktop users won't see real benefits, because they usually run a fair number of applications at the same time.

Finally, the most relevant kernel options if you want to recompile your kernel yourself to have a low-latency desktop are:

To add some powersaving you may check this one:
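As a hedged sketch (these are the commonly used Kconfig symbols for this purpose, an assumption on my part rather than necessarily the exact set meant above), the two lists typically look like:

```
# Low-latency desktop options:
CONFIG_PREEMPT=y      # fully preemptible kernel
CONFIG_HZ_1000=y      # raise the timer tick to 1000 Hz
CONFIG_HZ=1000

# Power-saving option:
CONFIG_NO_HZ=y        # tickless idle: skip timer ticks when the CPU is idle
```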

I noticed you mention maintaining servers. I'm trying to figure out the best kernel for a Valve Source dedicated server (CS:GO specifically). Most CS threads I find are related to GoldSrc, which needed a 1000 Hz kernel. With srcds, is lowlatency bad? If it doesn't matter I'll just stick with lowlatency, as that's what I have right now (I isolate the CPU cores for 128-tick srcds servers, as it doesn't really benefit from multi-threading anyway).

Useful to know these tips; I will totally change to -preempt. I'm not in such a hurry that I want my kernel to act like a filthy pirate.

I have an old laptop with a dual-core AMD A6-4400M at 1600 MHz, which I use sparingly when I'm out of the office, mainly to read email and browse casual web sites. Something, possibly connected with software updates, was making it unresponsive: typing a dozen characters without seeing the first one, and often the widget asking whether I should force-quit a process.

After sudo apt-get install linux-lowlatency and a reboot, it became smooth and responsive (uname -r reports 5.0.0-20-lowlatency). Wonderful, I should have switched years ago. Let me stress Seven's answer: unless you want to squeeze the max out of a number-crunching server, go for -preempt!

This was my hunch, and why I was searching this question. Exactly the answer I was looking for, thanks for confirming, I’ll now try this out myself 😀 — Also answer askubuntu.com/a/1244714/49478 was nice!

There are three major performance parameters governing kernel optimization, one of which rarely gets discussed in context of the other two and almost never gets benchmarked. This has reduced and skewed our perception of performance:

  1. Throughput: What gets benchmarked and discussed by the media. A measure of how much data can make it through the processor in a given timeframe.
  2. Energy efficiency: Also often benchmarked. Depending on how you measure it, this is how costly (in energy or heat) it is to process a certain amount of data, or to keep a system running over a given period of time. It has implications both for battery life in laptops and for the cost of running servers or other "always on" hardware.
  3. Latency: Almost never benchmarked, but just as important. It is a measure of the average time it takes for a signal or data to travel through a path, usually expressed in milliseconds. The word "jitter" describes variation in latency over time, and is a sub-component of latency performance.

This is a classic case of "here are three options, pick any two." You can have throughput and energy efficiency, but you will sacrifice latency. You can have throughput and low latency, but you will sacrifice energy efficiency. You can have low latency and energy efficiency, but you will sacrifice throughput.

One example of this tradeoff is race-to-sleep: https://en.wikipedia.org/wiki/Race_to_sleep (no longer exists for some reason, but https://www.quora.com/What-is-race-to-halt-strategy-to-make-a-processor-energy-efficient does). Unfortunately, it still takes CPUs several milliseconds to "wake up" from the various sleep states (the deeper the sleep, the longer the wake-up), which makes CPU frequency scaling horrible for latency performance, and probably one of the single biggest offenders. This is why even macOS systems benefit immensely from disabling CPU frequency scaling to prevent buffer overruns and underruns. (A buffer overrun is when a process takes too long to finish and reroute, so the last bit gets discarded as the buffer is refilled; an underrun is a failure to fill the buffer in time for processing. Both are commonly called "xruns" and indicate potential data loss.)
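A minimal sketch of what "disabling CPU frequency scaling" means on Linux, assuming the sysfs cpufreq interface (the helper's optional second parameter is only there so the function can be exercised against a scratch directory instead of the real /sys):

```shell
#!/bin/sh
# Pin every CPU core's cpufreq governor to the given value, so the CPU
# stays at full clock instead of dropping into slow-to-wake sleep states.
# Usage (as root): set_governor performance
set_governor() {
  gov=$1
  base=${2:-/sys/devices/system/cpu}
  for f in "$base"/cpu[0-9]*/cpufreq/scaling_governor; do
    # Skip if the glob matched nothing or cpufreq is absent on this system
    [ -e "$f" ] && echo "$gov" > "$f"
  done
}

# set_governor performance   # uncomment and run as root to apply
```

Reverting is the same call with the distribution's default governor (often "ondemand" or "schedutil").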

Another is logical cores and symmetric multi-threading, which is a subset of "race to sleep": it uses all cores more efficiently, but at the cost of increasing the time it takes any individual process to complete. This is also good for energy efficiency and throughput, but not for latency, which is concerned with how reliably and quickly individual "mission critical" tasks complete, not with the sum total of a group of tasks.

-generic: for any use case that does not deal with latency and the guaranteed routing of a certain amount of information into the processor and out to its destination. -generic provides the greatest throughput and energy efficiency, but deprioritizes latency.

The -lowlatency, -rt and -realtime kernels provide varying shades of increased attention to latency, with increasing sacrifice or deprioritization of throughput and/or energy efficiency. Use cases determine which is most appropriate for which circumstances, and these aren't the only choices: there is also, for example, https://liquorix.net/ which claims to be optimized for most common usage scenarios by making small sacrifices in throughput for relatively large gains in latency performance.

There are components of this question that confusingly overlap with other questions, and parts of it are becoming obsolete as the performance distinctions between kernel lines disappear. For example, the -generic kernel has picked up many low-latency optimizations (such as PREEMPT) that make the other specialized kernels even more specialized. It's hard to give specific numbers, but on my system -generic can now handle latencies down to about 20 ms with some reliability (minimal or no buffer overruns or underruns).


I honestly think this fragmentation exists as:

  1. a byproduct of open source software, where kernel lines and software in general get forked and optimized for specific use cases (yes, "-generic" is a specific use case, just one that happens to cover most use cases 🙂 ), and
  2. a result of almost completely ignoring how important low-latency performance tuning is to the -generic user's experience, in both the Linux and Windows paradigms.


Ubuntu Wiki

Tests to determine the difference in reliable low latency performance between linux-generic and linux-lowlatency kernels on Ubuntu Natty.

Quick Instructions

Start jackd with the lowest frames/period value possible without getting xruns. Use Ardour/JACK for at least 10-15 minutes while compressing files and opening graphics-editing programs, to confirm that performance is reliable. Commit the results, either by adding them here or by mailing them to the ubuntustudio or ubuntustudio-devel mailing lists.

Instructions on Testing

Prerequisites

  • Ubuntu Studio 11.04 Natty, or Ubuntu 11.04 Natty (Vanilla) installed
  • jackd2, Ardour, linux-generic and linux-lowlatency installed
  • User has realtime privilege

To add linux-lowlatency do: sudo add-apt-repository ppa:abogani/ppa && sudo apt-get update && sudo apt-get install linux-lowlatency

Jack Settings

We are not looking to measure exact latency. The point of this test is to compare -generic and -lowlatency kernels. To do that, we need to make sure we are using the same jack settings on both tests, and using the same audio program. The only jack setting we change in between tests is frames/period.
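The command line under test was presumably of this shape (a reconstruction from the flags explained below, not the original; the device number and the -p value vary per machine and per test run):

```
jackd -d alsa -d hw:1 -r 44100 -n 2 -p 64
```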

Translates as:

-d alsa = using the ALSA driver

hw:1 = using the second card in the device list (the first being 0)

-r 44100 = using a 44100 Hz (44.1 kHz) sample rate

-n 2 = using 2 periods/buffer

-p 64 = using 64 frames/period

The resulting line we’ll be looking for in the terminal after launching jackd would be:

configuring for 44100Hz, period = 128 frames (2.9 ms), buffer = 2 periods

Note the (2.9 ms) value. (We need to check whether this value is the same when running from the terminal as the ms value shown in the qjackctl settings window.)
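The 2.9 ms figure is simply the period length: frames per period divided by the sample rate. A quick sanity check:

```shell
# period latency (ms) = frames_per_period / sample_rate * 1000
# 128 frames at 44100 Hz:
awk 'BEGIN { printf "%.1f ms\n", 128 / 44100 * 1000 }'
# prints: 2.9 ms
```

Total buffering latency is this value multiplied by the number of periods (-n).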

Doing the Test

  • Run jackd with Ardour at the lowest frames/period setting you can without getting xruns for at least 10-15 minutes.
  • Only use record and play functions on Ardour
  • Open GIMP and Inkscape during session
  • Compress a folder that contains a lot of files, like a source folder. The longer it takes, the better
  • If by then you have had no xruns, note the latency value and test the next kernel using the same procedure.
  • Date = 2011-xx-xx
  • Kernel = (-generic / -lowlatency )
  • Kernel version = x-x-xx-x
  • arch = (i386 / AMD64)
  • audio device = (type / name)
  • session = (type of session)
  • lowest ms value = (x.x ms)

Committing the results

Results can be added directly here, or mailed to either the ubuntustudio or ubuntustudio-devel mailing lists. Just be sure to include all relevant information.

generic_vs_lowlatency_testing (last edited by user 90-230-166-102-no35, 2011-04-20 05:00:58)

The material on this wiki is available under a free license, see Copyright / License for details.

