- NFS vs SMB – Which one to choose?
- Key points for Comparison between NFS and SMB
- NFS vs SMB
- Differences between NFS and SMB
- Conclusion
- NAS Performance: NFS vs. SMB vs. SSHFS
- NAS Setup
- SSHFS (also known as SFTP)
- NFSv4
- SMB3
- Test Methodology
- Synthetic Performance
- Sequential
- Random
- Real World Performance
- Conclusion
- Which to use NFS or Samba?
- 5 Answers
NFS vs SMB – Which one to choose?
SMB and NFS are application-layer network protocols used mainly for accessing files over a network. Since SMB is supported by Windows, many companies and home networks use it by default.
Here at Bobcares, we handle servers with NFS and SMB as a part of our Server Management Services.
Today let’s compare the performance of NFS and SMB.
Key points for Comparison between NFS and SMB
An important difference between the two protocols is how they authenticate. NFS uses host-based authentication: every user on an authorized machine can access a given share. SMB, in contrast, provides user-based authentication. Since NFSv4, a Kerberos server can be used, which extends the authentication system.
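The difference shows up directly in how shares are configured on each side; a hypothetical sketch (the path, subnet and user name are made up for illustration):

```
# NFS: host-based -- trust is granted to client machines in /etc/exports:
#   /srv/share  192.168.1.0/24(ro,sync)
# SMB: user-based -- each account is added to Samba's password database:
#   sudo smbpasswd -a alice
```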
NFS vs SMB
Write operations
| Workload | NFS write | SMB write |
| --- | --- | --- |
| 6998 files of 10 KB each | 37 seconds | 101 seconds |
| 240 files of 1 MB each | 23 seconds | 27 seconds |
| 1 file of 500 MB | 45 seconds | 45 seconds |
| 1 file of 3.5 GB | 323 seconds | 324 seconds |
Read operations
| Workload | NFS read | SMB read |
| --- | --- | --- |
| 6998 files of 10 KB each | 26 seconds | 58 seconds |
| 240 files of 1 MB each | 24 seconds | 28 seconds |
| 1 file of 500 MB | 45 seconds | 48 seconds |
| 1 file of 3.5 GB | 330 seconds | 347 seconds |
NFS offers better performance and is unbeatable when the files are small or medium-sized. For larger files, both protocols take almost the same time.
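The small-file case is easy to reproduce with a simple loop; a sketch of the idea, run against a local temp dir so it works anywhere (point DEST at an NFS or SMB mount and time the loop to compare the protocols; the 100-file count is scaled down from the 6998 used above):

```shell
# Sketch of the many-small-files write test. DEST is a local temp dir here;
# point it at an NFS/SMB mount and time the loop to compare protocols.
DEST=$(mktemp -d)
i=0
while [ "$i" -lt 100 ]; do
    # one 10 KB file per iteration, mirroring the file size used above
    dd if=/dev/zero of="$DEST/file$i.dat" bs=10240 count=1 status=none
    i=$((i + 1))
done
echo "wrote $(ls "$DEST" | wc -l) files"
rm -r "$DEST"
```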
For sequential reads, NFS and SMB perform almost identically in plain text; with encryption, NFS is better than SMB.
For sequential writes, the two again perform almost identically in plain text; with encryption, NFS is slightly better than SMB.
For random reads, performance is almost the same in plain text, while with encryption NFS is better than SMB.
For random writes, NFS is slightly better than SMB both in plain text and with encryption.
When rsync is used for file transfer, NFS is always better than SMB, in both plain text and encrypted modes.
Differences between NFS and SMB
1. NFS is suitable for Linux users whereas SMB is suitable for Windows users.
2. SMB is not case sensitive, whereas NFS is; this makes a big difference when it comes to searching.
3. NFS generally is faster when we are reading/writing a number of small files, it is also faster for browsing.
4. NFS uses the host-based authentication system. However, SMB provides a user-based authentication.
5. NFS is fast and easy to set up and uses Linux permissions, which is pretty straightforward. However, its authentication system relies only on the client IP address, and it is pretty hard to separate several users on a single machine. SMB is a bit more tedious to set up but allows user-based authentication and printer sharing, and can serve multiple users.
In trusted home networks, NFS without encryption is the best choice on Linux for maximum performance. For Windows servers, the native Windows file-sharing protocol, SMB, is preferred.
Conclusion
In short, we saw the comparison between NFS and SMB performance in today’s article.
NAS Performance: NFS vs. SMB vs. SSHFS
This is a performance comparison of the three most useful protocols for network file shares on Linux, using the latest software. I ran sequential and random benchmarks as well as tests with rsync. The main reason for this post is that I could not find a proper test that includes SSHFS.
NAS Setup
The hardware side of the server is based on a Dell mainboard with an Intel i3-3220, a fairly old 2-core/4-thread CPU. It also does not support the AES-NI extensions (which would increase AES performance noticeably), so the encryption happens completely in software.
For storage, two HDDs in BTRFS RAID1 were used. This does not make a difference, though, because the tests are staged to almost always hit the cache on the server, so only the protocol performance counts.
I installed Fedora 30 Server on it and updated it to the latest software versions.
Everything was tested over a local Gigabit Ethernet network. The client is a quad-core desktop machine running Arch Linux, so it should not be a bottleneck.
SSHFS (also known as SFTP)
Relevant package/version: OpenSSH_8.0p1, OpenSSL 1.1.1c, sshfs 3.5.2
OpenSSH is probably running on all servers anyway, so this is by far the simplest setup: just install sshfs (FUSE-based) on the clients and mount the share. By default it is encrypted with ChaCha20-Poly1305. As a second test I chose AES-128, because it is the most popular cipher; disabling encryption entirely is not possible (without patching ssh). Then I added some mount options (suggested here) for convenience and ended up with:
sshfs -o Ciphers=aes128-ctr -o Compression=no -o ServerAliveCountMax=2 -o ServerAliveInterval=15 remoteuser@server:/mnt/share/ /media/mountpoint
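To make the mount persistent across reboots, the same options can go into /etc/fstab; a hypothetical entry matching the command above (untested sketch, the systemd automount options are optional):

```
remoteuser@server:/mnt/share  /media/mountpoint  fuse.sshfs  noauto,x-systemd.automount,_netdev,Ciphers=aes128-ctr,Compression=no  0 0
```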
NFSv4
Relevant package/version: Linux Kernel 5.2.8
The plaintext setup is also easy: specify the exports, start the server and open the ports. I used these options on the server: (rw,async,all_squash,anonuid=1000,anongid=1000)
And mounted with: mount.nfs4 -v nas-server:/mnt/share /media/mountpoint
But getting encryption to work can be a nightmare: first, setting up Kerberos is more complicated than the other solutions, and then there is idmap to deal with on both server and client(s)… After that you can choose from different security levels; I set sec=krb5p to encrypt all traffic for this test (most secure, slowest).
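For reference, a Kerberos-protected export might look like this in /etc/exports (the hostname and path are placeholders, and this assumes the realm and keytabs are already set up):

```
/mnt/share  client.example.org(rw,sync,sec=krb5p)
```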
SMB3
Relevant package/version: Samba 4.10.6
The setup mostly consists of installing Samba, creating the user DB, adding a share to smb.conf and starting the smb service. Encryption is disabled by default; for the encrypted test I set smb encrypt = required globally on the server. It then uses AES-128-CCM (visible in smbstatus).
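The relevant smb.conf fragment could look like this (the share name and path are placeholders):

```
[global]
    smb encrypt = required

[media]
    path = /mnt/share
    read only = no
```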
ID mapping on the client can simply be done with mount options; the complete mount command I used was:
mount -t cifs -o username=jk,password=xyz,uid=jk,gid=jk //nas-server/media /media/mountpoint
Test Methodology
The main test block was done with the Flexible I/O Tester (fio), written by Jens Axboe (current maintainer of the Linux block layer). It has many options, so I made a short script to run reproducible tests:
#!/bin/bash
OUT=$HOME/logs
fio --name=job-w --rw=write --size=2G --ioengine=libaio --iodepth=4 --bs=128k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-write.log
sleep 5
fio --name=job-r --rw=read --size=2G --ioengine=libaio --iodepth=4 --bs=128K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-read.log
sleep 5
fio --name=job-randw --rw=randwrite --size=2G --ioengine=libaio --iodepth=32 --bs=4k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randwrite.log
sleep 5
fio --name=job-randr --rw=randread --size=2G --ioengine=libaio --iodepth=32 --bs=4K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randread.log
The first two are classic sequential read/write tests, with a 128 KB block size and a queue depth of 4. The last two are small 4 KB random reads/writes, but with a queue depth of 32. The direct flag means direct I/O, to make sure that no caching happens on the client.
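The reported bandwidth can be sanity-checked against transfer size and wall time; for example, 2 GiB in a hypothetical 20-second run (the time is made up for illustration):

```shell
# Throughput = size / time; 2 GiB (2048 MiB) over an assumed 20 seconds.
awk 'BEGIN { size_mib = 2048; secs = 20; printf "%.1f MiB/s\n", size_mib / secs }'
# prints: 102.4 MiB/s
```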
For the real-world tests I used rsync in archive mode (-rlptgoD) and its built-in measurements:
rsync --info=progress2 -a sshfs/TMU /tmp/TMU
Synthetic Performance
Sequential
Most setups max out the network; the only one falling behind in the read test is SMB with encryption enabled. Looking at the CPU utilization reveals that it uses only one core/thread, which causes a bottleneck here.
NFS handles the compute-intensive encryption better with multiple threads, but uses almost 200% CPU and gets a bit weaker in the write test.
SSHFS delivers surprisingly good performance with both encryption options, almost the same as NFS or SMB in plain text! It also puts less stress on the CPU, with up to 75% for the ssh process and 15% for sftp.
Random
On small random accesses NFS is the clear winner, and remains very good even with encryption enabled. SMB is almost on par, but only without encryption. SSHFS is quite a bit behind.
NFS is still the fastest in plain text, but again has a problem when combining writes with encryption. SSHFS becomes more competitive here and is even the fastest of the encrypted options, placing mid-field overall.
The latency mostly mirrors the inverse of the IOPS/bandwidth. The only notable point is the pretty good (low) write latency of encrypted NFS, which completes most requests a bit faster than SSHFS in this case.
Real World Performance
This test consists of transferring a folder with rsync between the mounted share and a local tmpfs (RAM-backed). The folder contains the installation of a game (Trackmania United Forever) and is about 1.7 GB in size with 2929 files in total, giving an average file size of roughly 600 KB, though not evenly distributed.
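The average file size follows directly from the totals (roughly 1.7 GB across 2929 files):

```shell
# 1.7 GB expressed in KB (1024-based units), divided by the file count.
awk 'BEGIN { printf "%.0f KB\n", (1.7 * 1024 * 1024) / 2929 }'
# prints: 609 KB
```

which matches the rough 600 KB figure.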
Overall, no big surprises here: NFS is fastest in plain text, SSHFS fastest with encryption. SMB is always somewhat behind NFS.
Conclusion
In trusted home networks, NFS without encryption is the best choice on Linux for maximum performance. If you want encryption, I would recommend SSHFS: it is a much simpler setup (compared to Kerberos), more CPU-efficient, and often only slightly slower than plaintext NFS. Samba/SMB is not too far behind either, but only really makes sense in a mixed (Windows/Linux) environment.
Thanks for reading, I hope it was helpful.
Which to use NFS or Samba?
I am setting up a box to be a file server at the house. It will mainly be used to share music, pictures and movies with other Linux boxes on the network, and one OS X machine. From what I have read, both NFS and Samba would work in my situation, so I am not sure which to choose. What is important to me is transfer speed between boxes and how difficult each is to set up. Which would you recommend, and why?
5 Answers
In a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast and at the same time less CPU-intensive on the server. It's very simple to set up and you can toggle read-only on shares you don't need to be writeable.
I disagree with Anders. v4 can be just as simple as v3. It only gets complicated if you want to start layering on security through LDAP/gssd. It's capable of very complex and complete security mechanisms, but you don't need them; they're actually turned off by default.
sudo apt-get install nfs-kernel-server
Then edit /etc/exports to configure your shares. Here’s a line from my live version that shares my music:
/media/ned/music 192.168.0.0/255.255.255.0(ro,sync,no_subtree_check)
This shares that path read-only (note the ro) with anybody on 192.168.0.*.
When you’ve finished editing, restart NFS:
sudo /etc/init.d/nfs-kernel-server restart
To connect a client, you need the NFS gubbins (not installed by default):
sudo apt-get install nfs-common
And then add a line to /etc/fstab
192.168.0.4:/media/ned/music /media/music nfs ro,hard,intr 0 0
This is actually still the NFSv3 client, because I'm lazy, but it's compatible in this scenario. 192.168.0.4 is the NFS server (my desktop in this case). You'll also need to make sure the mount path (/media/music here) exists.
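For the record, a hypothetical NFSv4 version of the same fstab line (the intr option is ignored by modern kernels anyway, so it is dropped here):

```
192.168.0.4:/media/ned/music /media/music nfs4 ro,hard 0 0
```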
It's much simpler than some older tutorials would have you believe.
It might look more complicated than it really is, but it's solid, predictable and fast; something you can't always say about Samba, at least in my experience.