What is the difference between buffer and cache memory in Linux?
«Buffers» represents the portion of RAM dedicated to caching disk blocks. «Cached» is similar to «Buffers», except that it caches pages read from files.
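Both values come straight from /proc/meminfo, which is also where free reads them. A quick way to see them side by side (a minimal sketch; recent versions of free fold them into a single buff/cache column):

grep -E '^(Buffers|Cached|SwapCached):' /proc/meminfo   # the raw counters, in kB
free -h                                                 # the same numbers as free reports them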
Solution 2
Buffers are associated with a specific block device and cover caching of filesystem metadata as well as tracking in-flight pages. The cache only contains parked file data. That is, the buffers remember what’s in directories and what file permissions are, and keep track of what memory is being read from or written to a particular block device. The cache only contains the contents of the files themselves.
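A rough way to see that split on a running system (an illustrative sketch, not a benchmark; the paths are only examples and the exact numbers depend on what is already cached): walking a directory tree reads mostly filesystem metadata and tends to grow Buffers, while reading a file's contents grows Cached.

grep -E '^(Buffers|Cached):' /proc/meminfo    # baseline
find /usr -type f > /dev/null 2>&1            # touches directory and inode metadata
grep -E '^(Buffers|Cached):' /proc/meminfo    # Buffers usually grows
cat /var/log/syslog > /dev/null               # reads file contents (example path)
grep -E '^(Buffers|Cached):' /proc/meminfo    # Cached grows by roughly the file size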
Solution 3
Short answer: Cached is the size of the page cache. Buffers is the size of in-memory block I/O buffers. Cached matters; Buffers is largely irrelevant.
Long answer: Cached is the size of the Linux page cache, minus the memory in the swap cache, which is represented by SwapCached (thus the total page cache size is Cached + SwapCached). Linux performs all file I/O through the page cache. Writes are implemented as simply marking as dirty the corresponding pages in the page cache; the flusher threads then periodically write back to disk any dirty pages. Reads are implemented by returning the data from the page cache; if the data is not yet in the cache, it is first populated. On a modern Linux system, Cached can easily be several gigabytes. It will shrink only in response to memory pressure. The system will purge the page cache, along with swapping data out to disk, to make more memory available as needed.
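Put as a command, the arithmetic above looks like this (a minimal sketch; both fields in /proc/meminfo are reported in kB):

awk '/^Cached:/ {c=$2} /^SwapCached:/ {s=$2} END {print "total page cache: " c+s " kB (Cached " c " + SwapCached " s ")"}' /proc/meminfo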
Buffers are in-memory block I/O buffers. They are relatively short-lived. Prior to Linux kernel version 2.4, Linux had separate page and buffer caches. Since 2.4, the page and buffer cache are unified and Buffers is raw disk blocks not represented in the page cache—i.e., not file data. The Buffers metric is thus of minimal importance. On most systems, Buffers is often only tens of megabytes.
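One way to watch Buffers grow is to read raw blocks from the disk device itself, since pages belonging to block devices are exactly what this counter tracks (a hedged example; /dev/sda is an assumption, substitute your own disk, and the read needs root):

grep '^Buffers:' /proc/meminfo
sudo dd if=/dev/sda of=/dev/null bs=1M count=128   # buffered read of raw disk blocks
grep '^Buffers:' /proc/meminfo                     # Buffers should rise by roughly 128 MB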
Solution 4
It’s not quite as simple as this, but it might help you understand:
Buffers are for storing file metadata (permissions, location, etc.). Every memory page is kept track of here.
Cache is for storing actual file contents.
Solution 5
Explained by Red Hat:
Cache Pages:
A cache is the part of memory which transparently stores data so that future requests for that data can be served faster. This memory is utilized by the kernel to cache disk data and improve I/O performance.
The Linux kernel is built in such a way that it will use as much RAM as it can to cache information from your local and remote filesystems and disks. As time passes and various reads and writes are performed on the system, the kernel tries to keep data stored in memory for the processes running on the system, or for data those processes are likely to use in the near future. The cache is not reclaimed when a process stops or exits; however, when other processes require more memory than is freely available, the kernel runs heuristics to reclaim memory by freeing cached data and allocating that memory to the new process.
When any kind of file/data is requested, the kernel will look for a copy of the part of the file the user is acting on and, if no such copy exists, it will allocate one new page of cache memory and fill it with the appropriate contents read from the disk.
The data stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere on the disk. When some data is requested, the cache is first checked to see whether it contains that data. The data can be retrieved more quickly from the cache than from its source.
SysV shared memory segments are also accounted as cache, though they do not represent any data on the disks. One can check the size of the shared memory segments using the ipcs -m command and checking the bytes column.
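For example, to list the segments and total up that bytes column (a small sketch; the awk filter simply skips the header lines by keeping only rows whose fifth field is numeric):

ipcs -m
ipcs -m | awk '$5 ~ /^[0-9]+$/ {sum += $5} END {print "total SysV shared memory: " sum " bytes"}'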
Buffers are the disk-block representation of the data that is stored under the page cache. Buffers contain the metadata of the files/data residing in the page cache. Example: when there is a request for data that is present in the page cache, the kernel first checks the buffers, which contain the metadata pointing to the actual files/data in the page cache. Once the actual block address of the file is known from the metadata, it is picked up by the kernel for processing.
In Linux, what is the difference between «buffers» and «cache» reported by the free command?
It’s more like metadata that you find in buffers; it is not related to I/O buffers. Some of the kernel buffers are accounted for in the slab allocator but do not count toward buffers or cache memory at all.
5 Answers
The «cached» total will also include some other memory allocations, such as any tmpfs filesystems. To see this in effect try:
mkdir t
mount -t tmpfs none t
dd if=/dev/zero of=t/zero.file bs=10240 count=10240
sync; echo 3 > /proc/sys/vm/drop_caches; free -m
umount t
sync; echo 3 > /proc/sys/vm/drop_caches; free -m
and you will see the «cache» value drop by the 100MB that you copied to the RAM-based filesystem (assuming there was enough free RAM; you might find some of it ended up in swap if the machine is already over-committed in terms of memory use). The «sync; echo 3 > /proc/sys/vm/drop_caches» before each call to free should write anything pending in all write buffers (the sync) and clear all cached/buffered disk blocks from memory, so free will only be reading other allocations in the «cached» value.
The RAM used by virtual machines (such as those running under VMWare) may also be counted in free’s «cached» value, as will RAM used by currently open memory-mapped files (this will vary depending on the hypervisor/version you are using and possibly between kernel versions too).
So it isn’t as simple as «buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads», though for most purposes this simpler description will do.
+1 for interesting nuances. This is the kind of information I’m looking for. In fact, I suspect that the figures are so convoluted, so involved in so many different activities, that they are at best general indicators.
I don’t think the RAM used by virtual machines is counted as «cached», at least for qemu-kvm. I notice that on my KVM host, the cache value is not only too small to be correct (at 1.9 Gig), but it doesn’t change if I destroy/start one of my VMs. It also doesn’t change if I perform the tmpfs mount trick on one of the VMs. I created an 800Meg tmpfs partition there and «cached» showed the proper values on the VM but it did not change on the VM host. But the «used» value did shrink/grow when I destroyed/started my VM.
@MikeS: How different virtualisation solutions handle memory is liable to vary, in fact how the kernel measures various uses of memory may change between major versions.
@MikeS: With regard to «perform the tmpfs mount trick on one of the VMs» — I think that will not affect the host readings if they are not showing other memory used by the VM. I do see the effect in a KVM VM itself: before dd free = 2020, after dd free = 1899, after dropping caches free = 2001 (the 19MB difference will be due to other processes on the VM; it was not idle when I ran the test). The host may not see the change: the memory is probably still allocated to the VM even though it is free for use by processes in the VM.
Tricky question. When you calculate free memory, you actually need to add up both buffers and cache. This is what I could find:
A buffer is something that has yet to be «written» to disk. A cache is something that has been «read» from the disk and stored for later use.
Actually, if you’re interested in reclaimable memory (that is, RAM that can be used for programs if needed), you have to calculate Cached - Shmem from /proc/meminfo. This is because the kernel includes Shmem in Cached for historical reasons, but Shmem cannot be dropped even if the system is running out of memory. The biggest users of Shmem are usually the X server (including the GPU), the Java JVM, and PostgreSQL (shared_buffers).
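A quick way to compute that figure (a minimal sketch; both fields in /proc/meminfo are in kB):

awk '/^Cached:/ {c=$2} /^Shmem:/ {s=$2} END {print "reclaimable page cache (Cached - Shmem): " c-s " kB"}' /proc/meminfo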
I was looking for a clearer description of buffers and found one in «Professional Linux® Kernel Architecture» (2008):
Chapter 16: Page and Buffer Cache
Interaction
Setting up a link between pages and buffers serves little purpose if there are no benefits for other parts of the kernel. As already noted, some transfer operations to and from block devices may need to be performed in units whose size depends on the block size of the underlying devices, whereas many parts of the kernel prefer to carry out I/O operations with page granularity as this makes things much easier — especially in terms of memory management. In this scenario, buffers act as intermediaries between the two worlds.
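To see the two granularities the book is contrasting on a live system (an illustrative check; /dev/sda1 is a placeholder for one of your block devices):

getconf PAGESIZE                    # page granularity used by memory management, typically 4096
sudo blockdev --getbsz /dev/sda1    # block size the kernel uses for I/O on that device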