Linux disk write caching

Write-through RAM disk, or massive caching of file system? [closed]

I have a program that hits the file system very heavily, reading and writing a set of working files. The files are several gigabytes in size, but small enough to fit on a RAM disk. The machines this program runs on are typically Ubuntu Linux boxes. Is there a way to configure the file system cache to be very, very large, and even to cache writes so they hit the disk later? Or is there a way to create a RAM disk that writes through to a real disk?

4 Answers

By default, Linux will use free RAM (almost all of it) to cache disk accesses, and will delay writes. The heuristics the kernel uses to decide the caching strategy are not perfect, but beating them in a specific situation is not easy. Also, on journalling filesystems (i.e. all the default filesystems nowadays), actual writes to the disk are performed in a way that is resilient to crashes; this implies a bit of overhead. You may want to fiddle with filesystem options. E.g., for ext3, try mounting with data=writeback or even async (these options may improve filesystem performance at the expense of reduced resilience to crashes). Also, use noatime to reduce filesystem activity.
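For example, such options can be applied to a data partition via /etc/fstab. The device name and mount point below are placeholders, and data=writeback is specific to ext3/ext4:

```
# /etc/fstab — hypothetical entry for a scratch partition (adjust device and mount point)
# noatime: skip access-time updates; data=writeback: journal metadata only, not data
/dev/sdb1  /scratch  ext3  defaults,noatime,data=writeback  0  2
```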

Programmatically, you might also want to perform disk accesses through memory mappings (with mmap). This is a bit more hands-on, but it gives more control over data management and optimization.
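A minimal sketch of the memory-mapping approach, using Python's mmap module (a throwaway temp file stands in for the working files):

```python
import mmap
import os
import tempfile

# Create a small scratch file (one 4 KiB page) to map.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 4096)
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mm:
        mm[0:5] = b"hello"  # a plain memory store; the kernel marks the page dirty
        mm.flush()          # msync(): force the dirty page back to disk now

# The update is visible through the ordinary file interface.
with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
```

Until flush() is called, the store to `mm` lives only in the page cache; the kernel writes it back at its own pace, which is exactly the delayed-write behaviour described above.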

vmtouch is also useful if you really do want to force the kernel to keep certain files cached (as I currently do).


How to Disable Disk Write Caching in Ubuntu To Prevent Data Loss


This simple tutorial shows you how to disable disk write caching in Ubuntu, to prevent data loss if you experience a power failure.

Enabling write caching improves disk performance, but a power outage or equipment failure might result in data loss or corruption. Write caching is recommended only for disks with a backup power supply.

Some third-party programs require disk write caching to be enabled or disabled. If your disks are used for Event Store databases, it’s highly recommended to disable disk write caching, to help ensure that data is durable if the machine experiences a power, device, or system failure.

In Ubuntu, it’s easy to check whether write caching is enabled on your disk by running the command below:

sudo hdparm -i /dev/sda

Replace /dev/sda with your device, and you’ll see output similar to the following:

Model=WDC WD3200BPVT-22JJ5T0, FwRev=01.01A01, SerialNo=WD-WX61EC1KZK99
Config=< HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq >
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
BuffType=unknown, BuffSize=8192kB, MaxMultSect=16, MultSect=off
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=625142448
IORDY=on/off, tPIO=, tDMA=
PIO modes: pio0 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
AdvancedPM=yes: unknown setting WriteCache=enabled
Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

* signifies the current active mode

The WriteCache=enabled entry means caching is enabled! To disable it, edit /etc/hdparm.conf with your favorite editor; here I use vi as an example:

sudo vi /etc/hdparm.conf


Uncomment the line “#write_cache = off” (without the quotes) by removing the # at its beginning, so it looks like:

# -W Disable/enable the IDE drive’s write-caching feature
write_cache = off

After that, restart your computer and check the write-caching status again to make sure it’s disabled.
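hdparm can also toggle the setting at runtime, without waiting for a reboot. A sketch, assuming the disk is /dev/sda (requires root):

```
#   sudo hdparm -W0 /dev/sda    # turn write caching off immediately
#   sudo hdparm -W /dev/sda     # query the current write-caching setting
```

Note that a runtime change made with -W alone does not persist across reboots; the /etc/hdparm.conf entry is what makes it permanent.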


Ji m

I’m a freelance blogger who started using Ubuntu in 2007 and wish to share my experiences and some useful tips with Ubuntu beginners and lovers.

6 responses to How to Disable Disk Write Caching in Ubuntu To Prevent Data Loss

Hi Ji m,
Interesting article! I’m right now on one of my Linux distros, “LMDE 201303”, and I just checked my LMDE partition with sudo hdparm -i /dev/sda2, and in the config I found these lines: # -W Disable/enable the IDE drive’s write-caching feature #write_cache = off.
Looks like the job’s already done on that distro. I’ll check that on my Ubuntu 12.04. Thanks a lot anyway!


Linux disk caching mechanism

I recently ran into a disk-related production failure, so here I summarize some things I previously didn’t know about Linux disk caching.

In general, there are two reasons disk caching exists. First, accessing a disk is much slower than accessing memory, so caching disk contents in memory speeds up access. Second, by the principle of program locality, data that has been accessed once is likely to be accessed again soon, so caching disk contents in memory improves program speed.

Locality principle

Principle of program locality: a program’s execution tends to be confined to a certain part of the program over a given period of time, and accordingly the memory it accesses is confined to a certain region. Locality usually comes in two forms: temporal locality and spatial locality.

Temporal locality: a memory location that has been referenced once is likely to be referenced again in the near future.

Spatial locality: if a memory location is referenced, nearby locations are likely to be referenced in the near future.

Page caching

In Linux, to reduce I/O operations on disk, the contents of files read from disk are cached in physical memory. In this way, access to the disk is converted into access to memory, effectively improving program speed. The physical memory Linux uses to cache disk contents is called the page cache.

A page cache is made up of physical pages in memory whose contents correspond to physical blocks on disk. The size of the page cache adjusts dynamically according to the amount of free memory on the system, and it can expand by occupying memory or shrink itself to relieve memory usage.

Читайте также:  Creating sudo user linux

Before the advent of virtual memory, operating systems used a block cache; with virtual memory, memory came to be managed at page granularity, so operating systems adopted the page cache mechanism, a page-based, file-oriented cache.

Page cache read

When reading a file, Linux first looks in the page cache. If the page is not cached, the kernel reads it from disk, inserts it into the page cache, and then serves the read from the page cache. The general process is as follows:

  1. The process calls the library function read to initiate a file read request
  2. The kernel checks the list of open files and invokes the read interface provided by the file system
  3. The kernel finds the file’s inode and works out which pages to read
  4. It looks up the corresponding pages in the page cache via the inode: 1) if the page cache hits, the file contents are returned directly; 2) if there is no corresponding page cache entry, a page fault is generated; the system creates a new empty page-cache page, reads the contents from disk into it, updates the page cache, and then repeats step 4
  5. The read returns the file contents

Therefore, all reads of file contents, whether or not they initially hit the page cache, are ultimately served from the page cache.
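The lookup loop in step 4 can be sketched as a toy model. The dict-based cache and the function names here are illustrative, not kernel APIs:

```python
# Toy model of the page-cache read path: a dict stands in for the kernel's
# per-file tree of cached pages, and another dict stands in for the disk.
def read_page(page_cache, disk, file_id, page_no):
    key = (file_id, page_no)
    if key in page_cache:        # step 4.1: page cache hit — return directly
        return page_cache[key]
    page_cache[key] = disk[key]  # step 4.2: miss — fill the cache from "disk"
    return read_page(page_cache, disk, file_id, page_no)  # repeat step 4

disk = {("file.txt", 0): b"first page of file.txt"}
cache = {}
first = read_page(cache, disk, "file.txt", 0)   # miss, then served from cache
second = read_page(cache, disk, "file.txt", 0)  # pure cache hit
```

Both calls return the same bytes, but only the first one touches the "disk"; after it, the page lives in the cache.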

Page cache write

Because of the page cache, when a process calls write, the update is simply written to the file’s page cache, the corresponding pages are marked dirty, and the call returns. The Linux kernel periodically writes the dirty pages back to disk and then clears their dirty flags.

Because write operations only modify the page cache, the process does not block waiting for disk I/O, and if the machine crashes at this point the changes may never reach the disk. Therefore, for strict write requirements, such as database systems, you need to call fsync or similar to synchronize changes to disk in time. Read operations, on the other hand, normally block until the data is available; to reduce this latency, Linux uses a technique called read-ahead, in which the kernel reads extra pages from disk into the page cache ahead of time.
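A minimal illustration of write-then-fsync in Python (the file path is a throwaway temp file):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"critical record\n")  # lands in the page cache; page marked dirty
os.fsync(fd)                        # block until the data actually reaches the disk
os.close(fd)

# Read the record back through the ordinary file interface.
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```

Without the os.fsync call, os.write returns as soon as the page cache is updated, and a crash before the kernel's periodic write-back could lose the record.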

Write-back threads

Writing the page cache back to disk is done by separate kernel threads, and write-back is triggered in one of three cases:

  1. Free memory falls below a threshold. When free memory is insufficient, part of the cache must be released, and since only clean pages can be released, all dirty pages must first be written back to disk to make them clean and reclaimable.
  2. A dirty page has stayed in memory longer than a threshold. This ensures that dirty pages do not remain in memory indefinitely, reducing the risk of data loss.
  3. A user process calls the sync or fsync system call. This gives user processes a way to force write-back, for stringent durability requirements.
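The first two triggers are governed by kernel tunables under /proc/sys/vm. The values below are common defaults, shown for illustration; check your own system with sysctl:

```
# Kernel tunables governing write-back (illustrative defaults)
vm.dirty_background_ratio = 10      # % of memory dirty before background write-back starts
vm.dirty_ratio = 20                 # % dirty at which writing processes themselves must flush
vm.dirty_expire_centisecs = 3000    # age (in 1/100 s) after which a dirty page is written back
vm.dirty_writeback_centisecs = 500  # how often the write-back threads wake up
```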

Implementation of write back threads

The kernel’s write-back implementation has gone through three generations:

bdflush (before 2.6): A single bdflush kernel thread ran in the background and was woken when free memory fell below a threshold, while a companion thread, kupdated, ran periodically to write back dirty pages. Because there was only one bdflush thread in the whole system, a heavy write-back load could leave it blocked on the I/O of one congested disk, delaying write-back for all the other disks.

pdflush (introduced in 2.6): The number of pdflush threads is dynamic, depending on the system’s I/O load, and they serve all disks in the system globally. However, because pdflush threads are not tied to particular disks, several of them can still end up blocked on one congested disk, again delaying write-back for the others.

Flusher threads (since 2.6.32): There are multiple flusher threads, and each flusher thread is dedicated to one disk, so congestion on one disk no longer delays write-back on the others.

Page cache reclaim

The replacement logic for the page cache in Linux is a modified LRU, also known as the two-list policy. Instead of maintaining one LRU list, Linux maintains two: an active list and an inactive list. Pages on the active list are considered “hot” and are not reclaimed; pages on the inactive list are candidates for reclaim. A page is promoted to the active list only if it is accessed again while on the inactive list. Both lists are maintained by pseudo-LRU rules: pages are added at the tail and removed from the head, like queues. The two lists are kept in balance: if the active list grows too large relative to the inactive list, pages at the head of the active list are moved back to the inactive list, where they become reclaimable again. The two-list strategy fixes the weakness of the traditional LRU algorithm, in which a burst of pages accessed only once can evict frequently used pages, and it makes pseudo-LRU semantics easier to implement. This two-list scheme is also called LRU/2; the generalization to n lists is called LRU/n.
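A toy simulation of the two-list policy. The balancing and eviction rules here are a simplification of what the kernel actually does, but they show the promote-on-second-access behaviour:

```python
from collections import deque

class TwoListLRU:
    """Toy model: pages enter the inactive list; a second access promotes
    them to the active list; reclaim evicts from the head of inactive."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = deque()
        self.inactive = deque()

    def access(self, page):
        if page in self.active:
            return                          # already hot, nothing to do
        if page in self.inactive:           # second access: promote
            self.inactive.remove(page)
            self.active.append(page)
        else:                               # first access: tail of inactive
            self.inactive.append(page)
        # balance: don't let the active list outgrow the inactive list
        while len(self.active) > len(self.inactive):
            self.inactive.append(self.active.popleft())
        # reclaim: evict from the head of the inactive list when over capacity
        while len(self.active) + len(self.inactive) > self.capacity:
            self.inactive.popleft()

lru = TwoListLRU(3)
for p in ["a", "b", "a", "c", "d"]:
    lru.access(p)
# "a" was touched twice, so it survives on the active list;
# "b", touched once, is the first page evicted when "d" arrives.
```

Note how the one-shot pages "b", "c", "d" flow through the inactive list without ever displacing the twice-accessed "a", which is exactly the problem with classic single-list LRU that this design addresses.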

Conclusion

In our case, the root cause of the production failure was that the business logic relied on temporary files staying in the cache. If a temporary file is deleted within a short time of being created, operations on it take place entirely in the page cache and are never actually written back to disk. But when the program started responding slowly, temporary files lived long enough to be written back to disk, producing excessive disk pressure and affecting the entire system.

