Linux cache buffer size

Arch Linux

Every time I copy a file that is bigger than or similar in size to my total RAM (4 GB), I notice very low responsiveness from Firefox (it becomes totally unresponsive; I can’t switch tabs or scroll for 30-60 s). Of course my free memory is very low (something like 50-100 MB) and I notice some swap usage. AFAIK Linux caches everything that is being copied, but in the case of such big files it seems unnecessary.

Is there a way to reduce max buffer size?

I know that buffering is good in general, but I get the feeling that Firefox is giving up RAM and then has to read everything back from disk, which slows it down. I always have many tabs open, so it often uses around 30% of memory.

I have searched many times for how to reduce buffer sizes, but I’ve only ever found articles with a "buffering is always good and never an issue" attitude.

I would be very happy to hear any suggestions.

#2 2011-10-10 21:58:33

Re: How to reduce max buffer/cache size?

You can launch the copy operation (better yet, the shell or your file manager) in a cgroup and limit the RAM it is allowed to use.

More documentation can be found in Documentation/cgroups/memory.txt in the kernel source tree.

I’m sorry that I can’t provide you with more information, since this is just an idea that has been floating around in my head for some time now. Having 4 GB of RAM didn’t make it necessary for me to implement it.
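
Something along these lines should work with the cgroup v1 memory controller (an untested sketch; the mount point, the group name "copyjob", the 100M limit and the file paths are just placeholders):

mkdir -p /sys/fs/cgroup/memory
mount -t cgroup -o memory none /sys/fs/cgroup/memory   # skip if the memory controller is already mounted
mkdir /sys/fs/cgroup/memory/copyjob
echo 100M > /sys/fs/cgroup/memory/copyjob/memory.limit_in_bytes   # caps anonymous memory + page cache for the group
echo $$ > /sys/fs/cgroup/memory/copyjob/tasks                     # move the current shell into the group
cp /path/to/bigfile /mnt/backup/                                  # the copy's cache is now charged to the group

The point is that page cache generated by a process is charged to its memory cgroup, so the copy can no longer push the rest of the system out of RAM.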

#3 2011-10-11 16:04:48

Re: How to reduce max buffer/cache size?

With 4 GB RAM you could probably disable swap altogether. If not, you might want to set swappiness to a low value (I like 0) and/or read that article: http://rudd-o.com/en/linux-and-free-sof … o-fix-that
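
For reference, a low swappiness can be tried at runtime and then made permanent (as root; the exact value is a matter of taste):

sysctl -w vm.swappiness=0
echo "vm.swappiness=0" >> /etc/sysctl.conf   # or put it in a file under /etc/sysctl.d/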

Last edited by stqn (2011-10-11 16:05:52)

#4 2011-10-11 20:18:50

Re: How to reduce max buffer/cache size?

This seems to be a popular problem, going back years. The default Linux setup is bad for responsiveness, it seems.

Here’s the summary of what I do:

Firstly, install a BFS-patched kernel, for a better kernel scheduler, and also so that the ionice and schedtool commands will work. Bonus points for switching to BFQ while you’re at it — or stick with CFQ, which also supports ionice.
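
For what it’s worth, the active I/O scheduler can be checked and switched per block device through sysfs (sda is just an example device, and bfq is only available if your kernel was built with it):

cat /sys/block/sda/queue/scheduler          # the scheduler shown in [brackets] is the active one
echo bfq > /sys/block/sda/queue/scheduler   # or "cfq" to stay with the default that supports ionice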

In /etc/fstab, use commit=60 rather than the default of 5 seconds, and also noatime, e.g.:

UUID=73d55f23-fb9d-4a36-bb25-blahblah / ext4 defaults,noatime,nobarrier,commit=60 1 1
In /etc/sysctl.conf:

# From http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
vm.swappiness=0
# https://lwn.net/Articles/572921/
vm.dirty_background_bytes=16777216
vm.dirty_bytes=50331648

In your shell config (e.g. ~/.bashrc):

alias verynice="ionice -c3 nice -n 15"

In /etc/security/limits.d/ — see post. Read CK’s excellent blog article for more info.

In your cp command, add the word verynice to the start, to stop the large batch copy from having the same priority as your UI.
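
E.g., with the alias above (the file and destination are just placeholders):

verynice cp /path/to/bigfile.iso /mnt/backup/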

Compile sqlite without fsync, to make e.g. firefox smoother.

Potentially use threadirqs to prioritize the interrupt-handling.
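
threadirqs is a kernel boot parameter, so it goes on the kernel command line; with GRUB 2 that looks roughly like this (a sketch; "quiet" just stands for whatever options you already have):

GRUB_CMDLINE_LINUX_DEFAULT="quiet threadirqs"   # in /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg            # regenerate the config, then reboot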

Edit: Updated vm.swappiness from 0 to 10, from CK’s blog.
Edit2: Also see patch and e.g. nr_requests in thread.
Edit3: Using nice instead of schedtool — not sure whether schedtool can hog the CPU.
Edit4: Added threadirqs.
Edit5: Tweaked sysctl.conf settings.
Edit6: Added nobarrier option to mount, and sqlite’s fsync.
Edit7: Removed swap comment — I do use a swapfile, these days, mainly because firefox needs so much virtual RAM to compile.

Last edited by brebs (2014-03-10 09:51:34)

#5 2011-10-17 20:59:36

Re: How to reduce max buffer/cache size?

Thank you all for the answers!

I’m sorry for replying so late, but I didn’t subscribe to this topic (I didn’t know it doesn’t happen automatically).
In the meantime I came up with installing a BFS-patched kernel myself and it almost got rid of the problem. I’ll try to apply your ideas in the near future, as they look interesting.

@brebs:
While I was searching for a solution to this problem I found some opinions that disabling swap will not lead to any performance gains and can make the system completely unresponsive. Did you have any problems? I was thinking about reducing swappiness as much as possible, but leaving swap there just in case.

Is there a way to limit buffer cache size in Linux?

I’ve got a program that consumes a lot of memory during I/O operations, which is a side effect of performing loads of them. When I run the program with direct I/O the memory issue is gone, but the time it takes the program to finish the job is four times greater. Is there any way to reduce the maximum size of the buffer cache (the kernel buffers used during I/O operations)? Preferably without changing the kernel sources. I’ve tried reducing /proc/sys/vm/dirty_bytes etc., but that doesn’t seem to make any noticeable difference.

UPDATE: using

echo 1 > /proc/sys/vm/drop_caches
echo 2 > /proc/sys/vm/drop_caches
echo 3 > /proc/sys/vm/drop_caches

during the program’s runtime temporarily reduces the amount of used memory. Can I somehow limit the page cache, dentries and inodes instead of constantly freeing them? That would probably solve my problem.

I hadn’t noticed it before, but the problem occurs with every I/O operation, not just partitioning. It looks like Linux caches everything going through I/O up to the point where it has used almost all the available memory, leaving 4 MB free. So there is some sort of upper limit on how much memory can be used for I/O caching, but I’m unable to find where it is. I’m getting kind of desperate. If I can divide it by 2 somewhere in the kernel sources, I’ll gladly do so.

12-12-2016 Update: I’ve given up on fixing this, but something caught my attention and reminded me of the problem. I have an old failing HDD at home and it wastes resources like crazy when I try to do something with it. Could a failing HDD have been the cause all along? The HDD in question died within a month of when my problem occurred. If so, I have my answer.

What is the actual problem you’re trying to solve? Why would you prefer to have memory wasted rather than being used to cache data that might be needed later?

I’m running a certain program on an embedded platform while the system performs different operations in the background. Say my platform has 256 MB of RAM and the program uses 100 MB, so I’m in danger of running out of memory. I thought there was some mechanism that disallows allocating new buffers if there’s a chance of hitting OOM. Recent system failures have convinced me that such a mechanism either doesn’t exist or isn’t turned on.

It sounds like your question is based on misconceptions about how memory works on modern operating systems. What you’re asking how to do is what the OS already does — when it needs memory for other things, it reduces the buffer cache size.

Tell us more about the type of I/O operations you’re doing. Are they writes to a hard drive? The solution is likely to minimize the amount of unwritten data waiting to be written by throttling the writing process as needed. A quick fix might be to limit the number of dirty pages. See here.
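
For example (as root; the byte values are only a starting point and should be tuned for the workload):

sysctl -w vm.dirty_background_bytes=16777216   # start background writeback after ~16 MB of dirty data
sysctl -w vm.dirty_bytes=50331648              # throttle writers once ~48 MB is dirty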

This was unfairly downvoted. Most users blindly assume that buffer caches using 100% of the RAM is always a good thing. To anyone having the issue where the system becomes completely unresponsive because freeing the caches for other applications takes a frustrating amount of time, I recommend the first answer on unix.stackexchange.com/questions/253816 and staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/… . There were a couple patches proposed over the years, but none of them seem to have ever made it upstream.

What do the «buff/cache» and «avail mem» fields in top mean?

Within the output of top, there are two fields, marked «buff/cache» and «avail Mem», in the memory and swap usage lines. What do these two fields mean? I’ve tried Googling them, but the results only bring up generic articles on top, and they don’t explain what these fields signify.

top’s manpage doesn’t describe the fields, but free’s does:

buffers

Memory used by kernel buffers (Buffers in /proc/meminfo)

cache

Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo)

buff/cache

Sum of buffers and cache

available

Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)

Basically, “buff/cache” counts memory used for data that’s on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn’t been modified since it was read, or given enough time, if it has); “available” measures the amount of memory which can be allocated and used without causing more swapping (see How can I get the amount of available memory portably across distributions? for a lot more detail on that).
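
If you want to see the raw numbers these fields are built from, they come straight from /proc/meminfo, e.g.:

grep -E '^(MemFree|MemAvailable|Buffers|Cached|SReclaimable):' /proc/meminfo
# buff/cache in top/free is roughly Buffers + Cached + SReclaimable;
# "avail Mem" corresponds to MemAvailable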
