What network file sharing protocol has the best performance and reliability? [closed]
We have a setup with a few web servers being load-balanced. We want to have some sort of network shared storage that all of the web servers can access. It will be used as a place to store files uploaded by users. Everything is running Linux. Should we use NFS, CIFS, SMB, fuse+sftp, fuse+ftp? There are so many choices out there for network file sharing protocols, it’s very hard to pick one. We basically just want to permanently mount this one share on multiple machines. Security features are less of a concern because it won’t be network accessible from anywhere other than the servers mounting it. We just want it to work reliably and quickly. Which one should we use?
Life is a lot simpler if you add an accelerator in front of your website, e.g. a Squid accelerator or Cloudflare. The next best thing is to write changed content to memcache or a database instead of files. Shared directories are not for larger sites.
NFSv4.1 added the Parallel NFS (pNFS) capability, which makes parallel data access possible. I am wondering what kind of clients are using the storage. If they are only Unix-like, then I would go for NFS based on the performance figures.
The short answer is use NFS. According to this shootout and my own experience, it’s faster.
But you’ve got more options! You should consider a cluster filesystem like GFS, which multiple computers can access at once. Basically, you share a block device via iSCSI and format it with a GFS filesystem. All clients (initiators in iSCSI parlance) can read and write to it. Red Hat has a whitepaper on this. You can also use Oracle’s cluster filesystem, OCFS, to accomplish the same thing.
The Red Hat paper does a good job listing the pros and cons of a cluster filesystem vs. NFS. Basically, if you want a lot of room to scale, GFS is probably worth the effort. Also, the GFS example uses a Fibre Channel SAN, but that could just as easily be RAID, DAS, or an iSCSI SAN.
Lastly, look into jumbo frames, and if data integrity is critical, enable CRC32 checksumming when you use iSCSI with jumbo frames.
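As a rough sketch of what that tuning might look like on a Linux initiator using open-iscsi — the interface name is an assumption, and every switch port on the storage network must also allow the larger MTU:

```shell
# Raise the MTU for jumbo frames (interface name eth1 is an assumption).
sudo ip link set dev eth1 mtu 9000

# Enable CRC32C header and data digests for iSCSI in /etc/iscsi/iscsid.conf
# (open-iscsi); the settings take effect on the next login to the target.
sudo sed -i \
    -e 's/^node.conn\[0\].iscsi.HeaderDigest.*/node.conn[0].iscsi.HeaderDigest = CRC32C/' \
    -e 's/^node.conn\[0\].iscsi.DataDigest.*/node.conn[0].iscsi.DataDigest = CRC32C/' \
    /etc/iscsi/iscsid.conf
```

The digests matter here because jumbo frames mean more data per frame, so a single undetected bit error corrupts more payload; CRC32C catches errors that the Ethernet checksum can miss end-to-end.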
We have a 2-server load-balancing web cluster. We have tried the following methods for syncing content between the servers:
- Local drives on each server synced with RSYNC every 10 minutes
- A central CIFS (SAMBA) share to both servers
- A central NFS share to both servers
- A shared SAN drive running OCFS2 mounted both servers
The RSYNC solution was the simplest, but it took 10 minutes for changes to show up, and RSYNC put so much load on the servers that we had to throttle it with a custom script that paused it every second. We were also limited to only writing to the source drive.
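A cron-driven sync like the one described might look roughly like this — the paths, destination host, and bandwidth cap are illustrative assumptions, not the poster’s actual setup:

```shell
# crontab entry (illustrative): sync every 10 minutes; --bwlimit caps
# throughput in KB/s so rsync does not starve the web workload, and
# --delete propagates removals from the source to the replica.
*/10 * * * * rsync -a --delete --bwlimit=5000 /var/www/uploads/ web2.example.com:/var/www/uploads/
```

Note the one-way nature of this approach: writes must go to the source server only, which is exactly the limitation the poster describes.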
The fastest performing shared drive was the OCFS2 clustered drive right up until it went insane and crashed the cluster. We have not been able to maintain stability with OCFS2. As soon as more than one server accesses the same files, load climbs through the roof and servers start rebooting. This may be a training failure on our part.
The next best was NFS. It has been extremely stable and fault tolerant. This is our current setup.
SMB (CIFS) had some locking problems. In particular, changes to files on the SMB server were not being seen by the web servers. SMB also tended to hang when failing over the SMB server.
Our conclusion was that OCFS2 has the most potential but requires a LOT of analysis before using it in production. If you want something straightforward and reliable, I would recommend an NFS server cluster with Heartbeat for failover.
How to configure Network File System on Linux
NFS is one of the easiest and most transparent ways to handle shared storage within an organization. Learn how to configure it on Red Hat Enterprise Linux.
The Network File System (NFS) is a protocol that allows you to set up storage locations on your network. When you have NFS set up, your users can treat a remote hard drive as if it were attached to their computer, just like they might a USB thumb drive. It’s one of the easiest and most transparent ways to handle shared storage within an organization.
Install NFS
NFS is a built-in function in Red Hat Enterprise Linux (RHEL) 9, but there’s a package of utilities that you can install on the computer serving as the NFS host and on the Linux workstations that will interface with NFS:
$ sudo dnf install nfs-utils
On your NFS host, enable and start the NFS service:
$ sudo systemctl enable --now nfs-server
You must also start the rpcbind service, which NFS uses for port mapping:
$ sudo systemctl enable --now rpcbind
Set a shared location
On your NFS host, create a location on the filesystem to share with client computers. This could be a separate drive, a separate partition, or just a place on your server. To ensure that your storage can scale as needed, I recommend using LVM. Create the location with:
$ sudo mkdir -p /nfs/exports/myshare
Export the shared location
For the NFS service to advertise the existence of your myshare shared location, you must add the location to the /etc/exports file, along with the subnet you want to grant access and the access permissions. For example, assuming your network is 192.168.122.0/24 (with the first possible address being 192.168.122.1 and the final being 192.168.122.254), you could do this:
$ echo "/nfs/exports/myshare 192.168.122.0/24(rw)" | sudo tee /etc/exports
Note that there is no space between the network and the directory’s access permissions.
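For reference, an exports file can hold several entries with different options. This is an illustrative sketch written to a temporary file (not /etc/exports) so you can inspect the syntax safely; the paths, hosts, and options are assumptions:

```shell
# Write an example exports file to a temp location for inspection only.
#   rw    read-write access for the whole subnet
#   sync  reply only after writes reach stable storage (the safer default)
#   ro    read-only access for a single host
cat > /tmp/exports.example <<'EOF'
/nfs/exports/myshare  192.168.122.0/24(rw,sync)
/nfs/exports/public   192.168.122.50(ro)
EOF

# Both entries keep the options glued to the client spec: a space before
# the parentheses would grant those options to the world instead.
grep -c '(' /tmp/exports.example
```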
Set ownership
Depending on where you created your shared location, its permissions may not be suitable for all users on your network. For example, I created /nfs/exports/myshare at the root partition of my server’s hard drive, so the directories are all owned by the user root, with the group root having read and execute permissions. Unless your users are members of the root group, this export is of little use to them.
How you set directory permissions is up to you and depends on how you define users and groups on your systems. It’s common to manage directories by group permissions, adding users who require access to specific directories to the corresponding group. For instance, if a user is a member of the staff group, then you could set your export’s group to staff with permissions 775:
$ sudo chown root:staff /nfs/exports/myshare
$ sudo chmod 775 /nfs/exports/myshare
This grants myshare read, write, and execute permissions for all members of the staff group.
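To put a user into that group in the first place, a command like this works on RHEL — the username here is purely illustrative:

```shell
# Append (-a) the supplementary group (-G) staff to an existing account;
# the user must log out and back in for the new group to take effect.
sudo usermod -aG staff alice
```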
Export the exports
The NFS server maintains a table of filesystems available to clients. To update the table, run the exportfs command with the -r flag, which re-exports all entries in /etc/exports:
$ sudo exportfs -r
Configure your firewall
For clients to reach your NFS server, you must add the NFS service to your firewall with the firewall-cmd command:
$ sudo firewall-cmd --add-service nfs --permanent
Because --permanent writes the rule without applying it to the running firewall, reload to activate it:
$ sudo firewall-cmd --reload
Your NFS server is now active and configured for traffic.
Set up your client
Now that you’ve established a shared storage location on your network, you must configure your client machines to use it.
First, create a mount point for the NFS share:
[workstation]$ sudo mkdir -p /nfs/imports/myshare
And then mount the NFS volume:
[workstation]$ sudo mount -v \
    -t nfs 192.168.122.17:/nfs/exports/myshare \
    /nfs/imports/myshare/
You can make this a permanent and automatic process by adding the NFS volume to the client’s /etc/fstab file:
192.168.122.17:/nfs/exports/myshare /nfs/imports/myshare/ nfs rw 0 0
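If the client may boot while the NFS server is unreachable, mount options like these keep the boot from hanging — this variant is a suggestion on my part, not part of the article’s setup:

```shell
# /etc/fstab entry with network-aware options (illustrative):
#   _netdev  wait for the network before attempting the mount
#   nofail   do not block or fail the boot if the server is unreachable
192.168.122.17:/nfs/exports/myshare /nfs/imports/myshare nfs rw,_netdev,nofail 0 0
```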
You can verify that an NFS volume is mounted with the mount command:
[workstation]$ sudo mount | grep -i nfs
192.168.122.17:/nfs/exports/myshare on /nfs/imports/myshare