Linux mount no sync

Difference between ‘sync’ and ‘async’ mount options

What is the difference between the sync and async mount options from the end-user point of view? Does a file system mounted with one of these options work faster than with the other? Which option is the default if neither is set? man mount says that the sync option may reduce the lifetime of flash memory, but that may be obsolete conventional wisdom. In any case this concerns me a bit, because my primary drive, which holds the / and /home partitions, is an SSD. The Ubuntu installer (14.04) specified neither sync nor async for the / partition, but effectively set async for /home via the defaults option. Here is my /etc/fstab; I added some lines (see the comment), but did not change anything in the lines written by the installer:

# / was on /dev/sda2 during installation
UUID=7e4f7654-3143-4fe7-8ced-445b0dc5b742 /            ext4 errors=remount-ro   0 1
# /home was on /dev/sda3 during installation
UUID=d29541fc-adfa-4637-936e-b5b9dbb0ba67 /home        ext4 defaults            0 2
# swap was on /dev/sda4 during installation
UUID=f9b53b49-94bc-4d8c-918d-809c9cefe79f none         swap sw                  0 0
# here goes part written by me:
# /mnt/storage
UUID=4e04381d-8d01-4282-a56f-358ea299326e /mnt/storage ext4 defaults            0 2
# Windows C: /dev/sda1
UUID=2EF64975F6493DF9                     /mnt/win_c   ntfs auto,umask=0222,ro  0 0
# Windows D: /dev/sdb1
UUID=50C40C08C40BEED2                     /mnt/win_d   ntfs auto,umask=0222,ro  0 0

So, since /dev/sda is an SSD, should I add the async option for the / and /home file systems for the sake of reducing wear? Should I set the sync or async option for the additional partitions I defined in my /etc/fstab? What is the recommended approach for SSD and HDD drives?
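In case it matters, spelling the option out explicitly would look like the lines below; this is only an illustration (UUIDs abbreviated from the listing above), since defaults already implies async:

UUID=7e4f7654-...  /      ext4  async,errors=remount-ro  0 1
UUID=d29541fc-...  /home  ext4  defaults,async           0 2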

3 Answers

async is the opposite of sync, which is rarely used. async is the default for ordinary mounts; you don't need to specify it explicitly. (NFS exports are a different matter: in nfs-utils releases up to and including 1.0.0 async was the default export behaviour, while in all later releases sync is the default and async must be requested explicitly if needed.)

The sync option means that all changes to the respective filesystem are immediately flushed to disk; the corresponding write operations are waited for. For mechanical drives this means a huge slowdown, since the system has to move the disk heads to the right position; with sync the userland process has to wait for the operation to complete. In contrast, with async the system buffers the write operation and optimizes the actual writes; meanwhile, instead of being blocked, the process in userland continues to run. (If something goes wrong, close() returns -1 with errno = EIO.)
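To make the difference concrete, here is a minimal sketch you could run as root on a scratch loopback image (the paths are made up for illustration, and the timings will vary with hardware):

dd if=/dev/zero of=/tmp/scratch.img bs=1M count=256       # create a small test image
mkfs.ext4 -q -F /tmp/scratch.img
mkdir -p /mnt/test

mount -o loop,async /tmp/scratch.img /mnt/test            # async: writes land in the page cache
time dd if=/dev/zero of=/mnt/test/file bs=4k count=10000
umount /mnt/test                                          # unmounting flushes everything to the image

mount -o loop,sync /tmp/scratch.img /mnt/test             # sync: every write waits for the device
time dd if=/dev/zero of=/mnt/test/file bs=4k count=10000  # noticeably slower
umount /mnt/test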

SSD: I don't know exactly how fast SSD memory is compared to RAM, but it is certainly not faster, so sync is likely to give a performance penalty, although not as bad a one as with mechanical disk drives. As for the lifetime, the wisdom is still valid: writing a lot to an SSD wears it out. The worst scenario would be a process that makes a lot of changes to the same place; with sync each of them hits the SSD, while with async (the default) the SSD won't see most of them thanks to the kernel's buffering.

At the end of the day, don't bother with sync; you're most likely fine with async.

In the case that a local application is deleting and writing to the mounted drive (pointing to an external Windows box), is there potential that the default async mode is unsafe? The scenario is a polling app looking in one folder on the mount, processing the sub-folders, then deleting them.


@HellishHeat You should ask this as a separate question with sufficient details of the scenario you have in mind.

What is the speed of the different storage layers? RAM is nanoseconds, flash is microseconds (tens for writes, about 100 for reads), a rotational disk is milliseconds (5 ms best case, 10 to 100 ms if the disk queue is backed up and the accesses are random). Writes to a single location on a flash device may go into capacitor-backed SRAM and never be written all the way to NAND, so it is hard to determine either the wear or the speed impact.

@ini You may risk a loss of data with async . Yet, if this is an issue, then sync is not the answer — the performance penalty of sync is simply prohibitive.

A word of caution: using the async mount option might not be the best idea if you have a mount that is constantly being written to (e.g. valuable logs, security camera recordings) and you are not protected from sudden power outages. It might result in missing records or incomplete (useless) data. A not-so-smart example: imagine a thief getting into a store and immediately cutting the camera's power cable. The video of the break-in was recorded, but it might not have been flushed/synced to disk, since it (or parts of it) might still have been sitting in memory buffers and was therefore lost when the camera lost power.

Modern servers have battery-backed disk caches in their RAID controllers, which will prevent data loss even in case of a power failure.

The OS should in any case ensure that everything gets written to the SSD/HDD when you shut down. In the case of a power outage you might lose some data. Is what I'm saying correct?

A battery-backed cache in some controllers is really not a reason to skip optimizing for power loss: 1) it exists only in expensive professional servers, so not all users will have it; 2) it only saves you when the data has actually reached the disk controller at all. In many cases the data is still stuck in the OS cache, long before the controller ever sees it, and that data will be lost in the event of a power failure.

@Ini, I'm not 100% sure, so I'd appreciate confirmation or refutation, but as far as I know there are no time limits in the Linux caching/buffering algorithms, only memory limits. The OS does not care how long data has been buffered; it cares how big the buffer is. So the flush to the file system happens when the buffer gets filled with changes (or when a partition is unmounted, the machine is shut down or hibernated, etc.).
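For what it's worth, the kernel exposes its writeback thresholds as sysctls, and they include age-based limits as well as size-based ones; you can inspect them like this (the values printed depend on your distribution's defaults):

sysctl vm.dirty_background_ratio    # % of RAM dirty before background writeback starts
sysctl vm.dirty_ratio               # % of RAM dirty before writers are forced to flush
sysctl vm.dirty_expire_centisecs    # age (in 1/100 s) after which dirty data is written out
sysctl vm.dirty_writeback_centisecs # how often the flusher threads wake up
sync                                # force an immediate flush of all dirty data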

For what it's worth, as of 2022 and RHEL 7.9:

Our servers use self-encrypting SSDs (or, in a few cases, Dell BOSS M.2 cards) for the Linux operating system and talk over 100 Gb/s HDR InfiniBand. By default NFS connects as sync under version 4.1 with proto=tcp. I cannot get NFS v4.2 to work even though cat /proc/fs/nfsd/versions shows +4.2, and I don't know how much better NFS 4.2 would be over 4.1 anyway.
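In case it helps, the negotiated version can be pinned and verified from the client side; the mount options below are standard, while the server address and path are just the ones from this setup:

# ask for v4.2 explicitly instead of letting the client negotiate
mount -t nfs4 -o vers=4.2,_netdev,nosuid,noexec 192.168.1.1:/scratch /scratch

# confirm what was actually negotiated
nfsstat -m                  # prints the effective mount options, including vers=
grep /scratch /proc/mounts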


I tried /etc/exports with /scratch *(rw), which implicitly means sync, and also with /scratch *(rw,async), and saw no difference in an rsync --progress for a single NFS copy of a 5 GB tar file, which averaged 460 MB/s (max burst of 480). A local copy of the same file to another folder on the same server (not over the network) averaged 435 MB/s. For reference, I always get a solid 112 MB/s scp speed over traditional 1 Gb/s copper.

/etc/exports on the rhel-7.9 nfs-server:
/scratch *(rw,no_root_squash)

exportfs -v on the rhel-7.9 nfs-server:
/scratch        (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

mount on the rhel-7.9 nfs-client:
server:/scratch on /scratch type nfs4 (rw,nosuid,noexec,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.2,local_lock=none,addr=192.168.1.1,_netdev)

/etc/fstab on the rhel-7.9 nfs-client:
192.168.1.1:/scratch /scratch nfs4 _netdev,defaults,nosuid,noexec 0 0

Most people use the synchronous option on the NFS server. For synchronous writes, the server replies to NFS clients only when the data has been written to stable storage. Many people prefer this option because they have little chance of losing data if the NFS server goes down or network connectivity is lost.

Asynchronous mode allows the server to reply to the NFS client as soon as it has processed the I/O request and sent it to the local filesystem; that is, it does not wait for the data to be written to stable storage before responding to the NFS client. This can save time for I/O requests and improve performance. However, if the NFS server crashes before the I/O request gets to disk, you could lose data.
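As an illustration, the server-side choice is made per export in /etc/exports; the paths and client range below are placeholders:

# /etc/exports on the NFS server
/export/safe     192.168.1.0/24(rw,sync,no_subtree_check)    # reply only after data reaches stable storage
/export/scratch  192.168.1.0/24(rw,async,no_subtree_check)   # reply as soon as the request is queued

exportfs -ra    # re-export after editing the file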

Synchronous or asynchronous mode can be set when the filesystem is mounted on the clients by simply putting sync or async on the mount command line or in the file /etc/fstab for the NFS filesystem. If you want to change the option, you first have to unmount the NFS filesystem, change the option, then remount the filesystem.
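On the client side that amounts to a one-word change in the mount options; a sketch with placeholder server and mount points:

# /etc/fstab on the NFS client
server:/export/safe     /mnt/safe     nfs4  defaults,sync   0 0
server:/export/scratch  /mnt/scratch  nfs4  defaults,async  0 0

# applying a changed option requires unmounting and remounting
umount /mnt/scratch && mount /mnt/scratch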

If you are choosing to use asynchronous NFS mode, you will need more memory to take advantage of async, because the NFS server will first store the I/O request in memory, respond to the NFS client, and then retire the I/O by having the filesystem write it to stable storage. Therefore, you need as much memory as possible to get the best performance.

The choice between the two modes of operation is up to you. If you have a copy of the data somewhere, you can perhaps run asynchronously for better performance. If you don’t have copies or the data cannot be easily or quickly reproduced, then perhaps synchronous mode is the better option. No one can make this determination but you.


Single bind-mounted file gets out of sync in Linux

I am bind-mounting a single file on top of another one, and after making changes with an editor I don't see the modifications in both files. However, if I make the changes from the shell using redirection (>>, for example), I do see the changes in both files. Below is an example to demonstrate. First case:

-bash-3.00# echo foo >| foo
-bash-3.00# echo bar >| bar
-bash-3.00# diff foo bar
1c1
< foo
---
> bar
-bash-3.00# mount --bind foo bar
-bash-3.00# echo modified >> foo
-bash-3.00# diff foo bar
-bash-3.00# umount bar

Everything in the above case is as I expect; the two files show no differences after appending «modified» to the file «foo». However, if I perform the same test but use vi to edit foo, I get a different result. Second case:

-bash-3.00# echo foo >| foo
-bash-3.00# echo bar >| bar
-bash-3.00# diff foo bar
1c1
< foo
---
> bar
-bash-3.00# mount --bind foo bar
-bash-3.00# diff foo bar
-bash-3.00# vi foo        # append "modified with vi" and :wq
"foo" 2L, 21C written
-bash-3.00# cat foo
foo
modified with vi
-bash-3.00# cat bar
foo
-bash-3.00# diff foo bar
2d1
< modified with vi
-bash-3.00#

Here, the two files are different even though one is bind mounted onto the other. Anyone here know what is going on in this case? Thanks!


Bind-mounting a file is technically possible, but it is very strange. Normally you'd mount a directory.

1 Answer

What is happening is that vi is creating a new file (inode) and, effectively, undoing the bind, even though the mount is still in place. Appending uses the existing file (inode).

Take a look at the inode numbers of the files using ls -li as I step through your test(s).

$ echo foo > foo
$ echo bar > bar
$ ls -li foo bar
# 2 inodes, so 2 different files
409617 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 bar
409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 foo
$ sudo mount --bind foo bar
$ ls -li foo bar
# both inodes are the same, so both names reference the same file (foo)
409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 bar
409619 -rw-r--r-- 1 derek derek 4 Jul 31 12:56 foo
$ echo mod >> foo
$ ls -li foo bar
# appending doesn't change the inode
409619 -rw-r--r-- 1 derek derek 8 Jul 31 12:57 bar
409619 -rw-r--r-- 1 derek derek 8 Jul 31 12:57 foo
$ vi foo
$ ls -li foo bar
# vi has created a new file called foo (new inode)
# bar still points to the old foo
409619 -rw-r--r-- 0 derek derek  8 Jul 31 12:57 bar
409620 -rw-r--r-- 1 derek derek 14 Jul 31 12:57 foo
$ sudo umount bar
$ ls -li foo bar
# umount uncovers the original bar; the original foo has no references left
409617 -rw-r--r-- 1 derek derek  4 Jul 31 12:56 bar
409620 -rw-r--r-- 1 derek derek 14 Jul 31 12:57 foo

You need to think in terms of the underlying inodes rather than file names. What are you trying to do which couldn't be done with symlinks?
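If the goal is to keep editing the bind-mounted file in place, one workaround, assuming the editor is Vim, is to make it overwrite the existing file instead of writing a new one and renaming it over the old name; that keeps the inode, and therefore the bind mount, intact:

# tell Vim to overwrite the existing file in place (preserves the inode)
echo 'set backupcopy=yes' >> ~/.vimrc
# or just for a single session:
vi -c 'set backupcopy=yes' foo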

I tried a variation and think you can do what you want. Take a look at the following.

$ ls -li a/foo /mnt/c/foo
3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 a/foo
 840457 -r--r--r-- 1 root  root   6 Jul 31 19:41 /mnt/c/foo
$ sudo mount --bind a/foo /mnt/c/foo
$ ls -li a/foo /mnt/c/foo
3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 a/foo
3842157 -rw-r--r-- 1 derek derek 17 Jul 31 19:45 /mnt/c/foo
$ vi /mnt/c/foo
$ ls -li a/foo /mnt/c/foo
3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 a/foo
3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 /mnt/c/foo
$ sudo umount /mnt/c/foo
$ ls -li a/foo /mnt/c/foo
3842157 -rw-r--r-- 1 derek derek 22 Jul 31 20:02 a/foo
 840457 -r--r--r-- 1 root  root   6 Jul 31 19:41 /mnt/c/foo

While a/foo was mounted on the read-only file /mnt/c/foo I could edit /mnt/c/foo and it changed the contents of a/foo without changing the inode.
