What does -/+ buffers/cache mean in the output of free? [closed]
I know this is a lame question, but I want to understand why CentOS keeps holding my physical memory after a process has stopped. Suppose I open a 10GB file: it consumes 10GB of RAM and buffers. When I finish the process and close the file, CentOS still holds that 10GB of RAM. When I run the free command it gives me the following details:
```
used: 11.4 GB    buffers: 6336    cached: 49168
-/+ buffers/cache: 11.4 GB used
```
First, read Linux ate my ram!!. Then read Linux ate my ram!!. Then, if you still have questions, tell us what you mean by “I try to free up my memory”. If you mean that you want to empty the cache, read Linux ate my ram!! and don’t stop until you understand it. If that’s not what you’re asking, why is it surprising that your system is consuming 11.4GB of memory? How much memory are your processes using?
1 Answer
First of all, you don’t need to free up any buffers or cache yourself unless you have a specific requirement. Linux keeps caches to improve the performance of memory access. Buffers are just temporary storage, and both cache and buffers keep changing depending on the tasks Linux is doing.
In case the link stops working in the future, here is a little more explanation. Below is the output of free -m:
```
             total       used       free     shared    buffers     cached
Mem:          7753       2765       4987          0         24        570
-/+ buffers/cache:       2171       5582
Swap:         8191          0       8191
```
Total used RAM is 2765 - (24 + 570) = 2171 [the "-" in the second line]
Total free RAM is (7753 - 2765) + (24 + 570) = 5582 [the "+" in the second line]
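The same arithmetic can be reproduced straight from /proc/meminfo, which is where free gets its numbers. A minimal sketch, assuming a Linux system that exposes the classic Buffers and Cached fields (values are in kB):

```shell
# Recompute the "-/+ buffers/cache" line from /proc/meminfo.
# used  = MemTotal - MemFree - Buffers - Cached
# free  = MemFree + Buffers + Cached
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buf=$2}
     /^Cached:/   {cached=$2}
     END {
       printf "used %d kB\n", total - free - buf - cached
       printf "free %d kB\n", free + buf + cached
     }' /proc/meminfo
```

Newer versions of procps drop the "-/+ buffers/cache" line entirely and report an "available" column instead, but the underlying fields are still there.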
Buffer and Cache Memory in Linux
1. Overview
In this tutorial, we’ll learn about buffer memory and cache memory and the differences between them. The Linux file system’s use of buffers and caches makes input and output (I/O) operations faster.
But, before we talk about the differences between buffer and cache, we need to understand what they are and how they operate.
2. Buffer
Buffering is the process of preloading data into a reserved area of memory called buffer memory.
Buffer memory is a temporary storage area in the main memory (RAM) that stores data being transferred between two or more devices, or between an application and a device. Buffering compensates for the difference in transfer speeds between the sender and receiver of the data.
Systems automatically create buffers in the RAM whenever there are varying transmission speeds between applications or devices transferring data. The buffer accumulates the bytes of data received from the sender and serves it to the receiver when ready.
Buffers are useful when printing long documents. The system automatically creates a buffer and fills it with the document’s data so that it doesn’t have to wait for the printer before processing the next page.
In computer networking, buffers are useful in data fragmentation and reassembly. From the sender’s side, the system breaks down a large chunk of data into small packets and sends them over the network. On the receiver’s end, the system creates a buffer that collects the fragmented packets of data and reassembles them to their original format.
Buffering supports copy semantics for an application’s input or output (IO). Let’s assume an application has a buffer to write to the hard disk and that it executes the write() system call. If the application changes the buffer data before the system call returns, copy semantics offers a version of the data at the time of the last system call.
Buffers are implemented with one of three capacities:
- bounded capacity: buffer memory size is limited, blocks senders if full
- unbounded capacity: buffer memory size is unlimited, accepts any amount of data from senders
- zero capacity: buffer memory size is zero, blocks all senders until the data is received
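The bounded case is the one we meet most often in practice: a Unix pipe is a bounded kernel buffer (64 KiB by default on Linux), and the writer blocks whenever the buffer is full until the reader drains it. A quick illustration:

```shell
# 128 KiB of zeros flow from the writer (dd) through the kernel's bounded
# pipe buffer to the reader (wc); the kernel blocks and wakes each side as
# the buffer fills and drains. Prints 131072 (128 * 1024).
dd if=/dev/zero bs=1K count=128 2>/dev/null | wc -c
```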
3. Cache
Caching is the process of temporarily storing a copy of a given resource so that subsequent requests to the same resource are processed faster.
Cache memory is a fast, static random access memory (SRAM) that a computer chip can access more efficiently than the standard dynamic random access memory (DRAM). It can exist in either RAM or a hard disk. Caching in RAM is referred to as memory caching, while caching in a hard disk is referred to as disk caching.
Disk caching is advantageous because cached data in the hard disk isn’t lost if the system crashes. However, data access in disk caching is slower in comparison to memory caching.
Disk cache sizes can range from 128 MB in normal hard disks to 1 GB in solid-state disks (SSDs).
Whenever a program requests data from the hard disk, the system first checks the cache memory. It only checks the hard disk if the requested data isn’t present in the cache memory. This greatly improves data processing because accessing it directly from the hard disk is slower.
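This check-the-cache-first behavior is easy to observe with a repeated read. A sketch, assuming a Linux system with dd and mktemp available (to see a truly cold first read you would need to drop caches as root first, e.g. via /proc/sys/vm/drop_caches):

```shell
# Write a 64 MB throwaway file, then read it twice. After the first read,
# the data sits in the page cache, so the second read is normally served
# from RAM and completes much faster.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null
time dd if="$f" of=/dev/null bs=1M 2>/dev/null   # first read
time dd if="$f" of=/dev/null bs=1M 2>/dev/null   # second read, from cache
rm -f "$f"
```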
Caches can support random access to large stores of information. Moreover, cache memory also uses complex algorithms that help to decide what data to keep or delete.
We can use the free command to check the sizes of the buffer memory and cache memory:
```
$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.6Gi       6.4Gi       170Mi       402Mi       1.1Gi       573Mi
Swap:         2.0Gi       589Mi       1.4Gi
```
We’re passing the -h flag to display the results in a human-readable format. In this case, the total size of buffer memory and cache memory in the RAM is 1.1Gi.
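The buff/cache column is itself a sum of several /proc/meminfo fields. Roughly (a sketch, assuming a Linux system; values are in kB):

```shell
# Approximately what free's buff/cache column reports: kernel buffers,
# page cache, and reclaimable slab memory, summed from /proc/meminfo.
awk '/^(Buffers|Cached|SReclaimable):/ {sum += $2}
     END {print sum " kB"}' /proc/meminfo
```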
Cache memory may have a small size relative to RAM, but it has a significant effect on the general performance of the system.
4. Key Differences Between Buffer and Cache
Let’s look at some of the key differences between buffer and cache:
| Buffer | Cache |
|---|---|
| Exists only in RAM | Exists in either RAM or on the hard disk |
| Compensates for the difference in speed between two programs exchanging data | Speeds up access to frequently requested data |
| Stores the original data | Stores a copy of the original data |
| A normal temporary storage location in RAM | A fast storage location in RAM or on the hard disk |
| Made of dynamic RAM | Made of static RAM |
5. Conclusion
In this article, we’ve learned about buffer memory and cache memory and some differences between them.
What do the «buff/cache» and «avail mem» fields in top mean?
Within the output of top, there are two fields, marked «buff/cache» and «avail Mem» in the memory and swap usage lines: What do these two fields mean? I’ve tried Googling them, but the results only bring up generic articles on top, and they don’t explain what these fields signify.
3 Answers
top’s manpage doesn’t describe the fields, but free’s does:
buffers
Memory used by kernel buffers (Buffers in /proc/meminfo)
cache
Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo)
buff/cache
Sum of buffers and cache
available
Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
Basically, “buff/cache” counts memory used for data that’s on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn’t been modified since it was read, or given enough time, if it has); “available” measures the amount of memory which can be allocated and used without causing more swapping (see How can I get the amount of available memory portably across distributions? for a lot more detail on that).
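Since “available” maps directly to the kernel’s MemAvailable estimate, you can read it without parsing free’s output at all (assuming a Linux kernel of 3.14 or later):

```shell
# MemAvailable is the kernel's own estimate of how much memory can be
# handed to new applications without pushing the system into swap.
grep '^MemAvailable:' /proc/meminfo
```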