NVMe on Linux: low speed

Posts: 2
Thanks: 0

Hello. Maybe someone has run into this problem.
We have a Dell PowerEdge R420 server and installed an adapter card in it for a PCIe NVMe SSD.
The problem is that when booted into Windows the SSD speed is fine: 2398MB/s write and 3270MB/s read.
But when booted into Linux the speed is only about a third of what we get under Windows.

Under Windows I measured the speed with CrystalDiskMark.
Under Linux I used the following script:

#!/bin/bash -x
sync
echo 3 > /proc/sys/vm/drop_caches    # drop the page cache before the test
dd if=/dev/zero of=/mnt/output oflag=dsync conv=fdatasync bs=50M count=100


Posts: 6345
Thanks: 1435

And where is the result? You forgot to show it, which suggests you don't understand what you are doing. You also didn't say which filesystem is used; what if you have ntfs over fuse there. We won't even get into the fact that dd is not the right tool for measuring speed.
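A hedged aside: if dd is to be replaced, a sequential-write run with fio along the following lines is a common alternative; the target file path, size and queue depth here are illustrative assumptions, not values taken from this thread.

# Sketch: sequential write of a 4 GiB test file with direct I/O (bypasses the page cache).
# Adjust --filename to a path on the filesystem under test; remove the file afterwards.
fio --name=seqwrite --rw=write --bs=1M --size=4G --ioengine=libaio --iodepth=16 --direct=1 --numjobs=1 --runtime=60 --group_reporting --filename=/mnt/fio-testfile
rm /mnt/fio-testfile

With --direct=1 the page cache is out of the picture, so the reported bandwidth reflects the drive and the bus rather than RAM.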

Posts: 2
Thanks: 0

The filesystem is xfs.
For comparison, here are the specs and tests of two servers. On one the SSD speed is fine, while the second one behaves like this.

lshw -short -C system -C processor -C bridge
H/W path Device Class Description
===========================================================
system System Product Name (SKU)
/0/4c processor Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
/0/100 bridge Intel Corporation
/0/100/1 bridge Skylake PCIe Controller (x16)
/0/100/1b bridge 200 Series PCH PCI Express Root Port #17
/0/100/1c.2/0 bridge ASM1083/1085 PCIe to PCI Bridge

nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S3ETNX0J604697N Samsung SSD 960 EVO 1TB 1 637.02 GB / 1.00 TB 512 B + 0 B 2B7QCXE7

lspci -v -s 07:00.0
07:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a801
Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
Memory at f7000000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [148] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [158] Power Budgeting
Capabilities: [168] #19
Capabilities: [188] Latency Tolerance Reporting
Capabilities: [190] L1 PM Substates
Kernel driver in use: nvme

hdparm -tT --direct /dev/nvme0n1

Timing O_DIRECT cached reads: 2550 MB in 2.00 seconds = 1274.90 MB/sec
Timing O_DIRECT disk reads: 6792 MB in 3.00 seconds = 2263.83 MB/sec

dd if=/dev/zero of=/mnt-samsung-nvme0n1p1/output oflag=dsync conv=fdatasync bs=50M count=100

5242880000 bytes (5.2 GB, 4.9 GiB) copied, 5.17535 s, 1.0 GB/s

lshw -short -C system -C processor -C bridge
H/W path Device Class Description
==========================================================
system PowerEdge R420 (SKU=NotProvided;ModelName=PowerEdge R420)
/0/400 processor Intel(R) Xeon(R) CPU E5-2430L 0 @ 2.00GHz
/0/401 processor Intel(R) Xeon(R) CPU E5-2430L 0 @ 2.00GHz
/0/100 bridge Xeon E5/Core i7 DMI2
/0/100/1 bridge Xeon E5/Core i7 IIO PCI Express Root Port 1a
/0/100/3 bridge Xeon E5/Core i7 IIO PCI Express Root Port 3a in PCI Express Mode
/0/100/11 bridge C600/X79 series chipset PCI Express Virtual Root Port
/0/100/1c bridge C600/X79 series chipset PCI Express Root Port 1
/0/100/1c.4 bridge C600/X79 series chipset PCI Express Root Port 5
/0/100/1c.7 bridge C600/X79 series chipset PCI Express Root Port 8
/0/100/1c.7/0 bridge SH7757 PCIe Switch [PS]
/0/100/1c.7/0/0 bridge SH7757 PCIe Switch [PS]
/0/100/1c.7/0/0/0 bridge SH7757 PCIe-PCI Bridge [PPB]
/0/100/1c.7/0/1 bridge SH7757 PCIe Switch [PS]
/0/100/1e bridge 82801 PCI Bridge
/0/100/1f bridge C600/X79 series chipset LPC Controller
/0/3 bridge Xeon E5/Core i7 IIO PCI Express Root Port 3a in PCI Express Mode


nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S3X3NF0JA01887K Samsung SSD 960 EVO 1TB 1 488.83 MB / 1.00 TB 512 B + 0 B 3B7QCXE7

lspci -v -s 41:00.0
41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a804 (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a801
Flags: bus master, fast devsel, latency 0, IRQ 33, NUMA node 1
Memory at d40fc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [148] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [158] Power Budgeting
Capabilities: [168] #19
Capabilities: [188] Latency Tolerance Reporting
Capabilities: [190] L1 PM Substates
Kernel driver in use: nvme

hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
Timing O_DIRECT cached reads: 3604 MB in 2.00 seconds = 1804.39 MB/sec
Timing O_DIRECT disk reads: 7902 MB in 3.00 seconds = 2633.84 MB/sec

dd if=/dev/zero of=/mnt/output oflag=dsync conv=fdatasync bs=50M count=100

5242880000 bytes (5.2 GB, 4.9 GiB) copied, 9.60919 s, 546 MB/s


NVMe SSD performance on Linux

I bought an NVMe SSD the other day. The rated speed is 3500 MB/s write and 3000 MB/s read. In reality it doesn't even reach 2 GB/s.

# dd if=/dev/nvme0n1 of=/dev/null bs=1M status=progress
43414192128 bytes (43 GB, 40 GiB) copied, 30 s, 1.4 GB/s^C
41904+0 records in
41903+0 records out
43938480128 bytes (44 GB, 41 GiB) copied, 30.3677 s, 1.4 GB/s
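A side note on the command above: without iflag=direct the read goes through the page cache, which costs CPU time and memory bandwidth. A minimal direct-I/O variant (same device, sizes chosen only for illustration) would be:

# Direct sequential read: bypasses the page cache; count limits the run to 16 GiB.
dd if=/dev/nvme0n1 of=/dev/null bs=4M count=4096 iflag=direct status=progress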

What are the models of the SSD, the motherboard and the CPU?

No, it should be fine, although you could also increase it to 10.

It could be a matter of how many PCIe lanes are in use and of the PCIe bus version. Tell us more about the hardware.

Jameson ★★★★★ ( 16.09.21 05:33:28 MSK )
Last edited: Jameson 16.09.21 05:33:52 MSK (1 edit total)
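On the lane-count and PCIe-generation point: the negotiated link can be read out under Linux. A sketch, assuming the drive shows up as a "Non-Volatile memory controller" in lspci (the exact address has to come from your own system):

# Locate the NVMe controller and show supported (LnkCap) vs negotiated (LnkSta) link.
ADDR=$(lspci -D | awk '/Non-Volatile memory controller/ {print $1; exit}')
sudo lspci -vv -s "$ADDR" | grep -E 'LnkCap:|LnkSta:'
# The same information is exposed via sysfs:
cat /sys/bus/pci/devices/$ADDR/current_link_speed /sys/bus/pci/devices/$ADDR/current_link_width

A drive rated around 3500 MB/s needs a PCIe 3.0 x4 link; if LnkSta shows fewer lanes or a lower generation than LnkCap, the slot or riser is the limit.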

Welcome to the real world. They write things on fences too. Maybe you'd see those numbers under ideal lab conditions.

Run two dd processes in parallel. Maybe one alone can't load it properly.


dd is not the right tool for benchmarking a disk.

fio --name=read --readonly --rw=read --ioengine=libaio --iodepth=16 --bs=1M --direct=0 --numjobs=16 --runtime=30 --group_reporting --filename=/dev/nvme0n1 

NeOlip ★★ ( 16.09.21 07:26:36 MSK )
Last edited: NeOlip 16.09.21 07:26:56 MSK (1 edit total)

That speed is quoted for the cache. You are copying too much, the cache fills up and the speed drops. Copy a hundred megabytes and measure the speed.
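The cache-overflow effect described here is mainly visible on writes; to see it, one could compare a short burst with a long run. A rough sketch, assuming a filesystem mounted at /mnt (path and sizes are illustrative):

# Short burst: likely to stay inside the drive's fast cache.
dd if=/dev/zero of=/mnt/testfile bs=1M count=100 oflag=direct conv=fdatasync
# Long run: large enough to overflow the cache and show the sustained rate.
dd if=/dev/zero of=/mnt/testfile bs=1M count=20480 oflag=direct conv=fdatasync
rm /mnt/testfile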

It could be a matter of how many PCIe lanes are in use and of the PCIe bus version

# fio --name=read --readonly --rw=read --ioengine=libaio --iodepth=16 --bs=1M --direct=0 --numjobs=16 --runtime=30 --group_reporting --filename=/dev/nvme0n1
read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
...
fio-3.25
Starting 16 processes
Jobs: 16 (f=16): [R(16)][100.0%][r=6095MiB/s][r=6095 IOPS][eta 00m:00s]
read: (groupid=0, jobs=16): err= 0: pid=17905: Thu Sep 16 08:32:18 2021
  read: IOPS=1657, BW=1658MiB/s (1738MB/s)(48.6GiB/30013msec)
    slat (usec): min=226, max=68907, avg=3023.76, stdev=4204.64
    clat (usec): min=7, max=130782, avg=45621.89, stdev=16134.13
     lat (usec): min=554, max=141477, avg=48650.54, stdev=16862.60
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   17], 10.00th=[   25], 20.00th=[   32],
     | 30.00th=[   37], 40.00th=[   42], 50.00th=[   46], 60.00th=[   51],
     | 70.00th=[   56], 80.00th=[   61], 90.00th=[   66], 95.00th=[   70],
     | 99.00th=[   82], 99.50th=[   89], 99.90th=[  105], 99.95th=[  109],
     | 99.99th=[  117]
   bw (  MiB/s): min= 3924, max= 7608, per=100.00%, avg=5192.81, stdev=61.02, samples=288
   iops        : min= 3924, max= 7606, avg=5191.33, stdev=61.01, samples=288
  lat (usec)   : 10=0.01%, 20=0.02%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.02%, 10=0.42%, 20=6.20%, 50=52.28%
  lat (msec)   : 100=40.88%, 250=0.16%
  cpu          : usr=0.17%, sys=37.21%, ctx=864166, majf=0, minf=57081
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=99.5%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=49748,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=1658MiB/s (1738MB/s), 1658MiB/s-1658MiB/s (1738MB/s-1738MB/s), io=48.6GiB (52.2GB), run=30013-30013msec

Disk stats (read/write):
  nvme0n1: ios=34677/0, merge=0/0, ticks=15156/0, in_queue=15157, util=30.33%

It still falls short of 3500. And it loads the CPU at a full 100%! Could the processor be the bottleneck?

metaprog ☆ ( 16.09.21 08:37:32 MSK )
Last edited: metaprog 16.09.21 08:38:55 MSK (3 edits total)
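One way to take the page cache, and much of that CPU load, out of the measurement is to repeat the run with direct I/O. A variant of the command above with --direct=1 (still reading /dev/nvme0n1; the reduced job count is an illustrative choice, not taken from the thread):

# Direct-I/O sequential read: the kernel no longer copies data through the page cache.
fio --name=read-direct --readonly --rw=read --ioengine=libaio --iodepth=16 --bs=1M --direct=1 --numjobs=4 --runtime=30 --group_reporting --filename=/dev/nvme0n1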


NVMe SSD slow write speed

I have a NUC (BEH model) with an M.2 PCIe gen3 NVMe SSD (Samsung 970 Pro 512GB), and I get very different write speeds in Ubuntu 18.04.3 depending on the kernel. I used ukuu to switch kernels: on kernel 5.0+, which the Ubuntu installer ships by default, I get around 600MiB/s write speed (sad), while on the older 4.9.190 kernel I get around 2200MiB/s with the benchmark tool in Ubuntu. I have tried the latest 5.2 kernel and the problem is still there. I have also tried Linux Mint 19.2 and get the same slow write speed, since it uses a kernel later than 4.9. Here is my benchmark result on kernel 4.9.190. I think this and this are related problems, and a simple Google search turns up lots of SSD write performance issues. Could this be a major Linux kernel performance issue? Any help or fix would be greatly welcome!


3 Answers

It seems the kernel itself may be fine; the issue could lie somewhere in Ubuntu's benchmark tool (GNOME Disks).

Solution (work-around): I created a directory on the disk to be tested, opened a terminal in that directory, and ran two commands there. The first command creates a temporary file (4GB in size) and tests the write speed of the disk; the second command reads that file back and tests the read speed.

The commands:
write: dd if=/dev/zero of=tempfile bs=1M count=4096 conv=fdatasync,notrunc status=progress oflag=direct
read: dd if=tempfile of=/dev/null bs=1M count=4096 status=progress iflag=direct
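Put together as a small script: the same two commands, with a cache drop added between them (my own assumption, so the read cannot be served from RAM) and cleanup at the end.

#!/bin/bash
# Write test: 4 GiB of zeroes, direct and synced to the device.
dd if=/dev/zero of=tempfile bs=1M count=4096 conv=fdatasync,notrunc status=progress oflag=direct
# Drop the page cache so the read below hits the disk.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
# Read test: read the same file back with direct I/O.
dd if=tempfile of=/dev/null bs=1M count=4096 status=progress iflag=direct
rm tempfile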

The problem described here is about the same for me. I have a computer with an ASUS Z10PE motherboard, which has a built-in M.2 NVMe slot. I also added a PCIe card that takes one NVMe drive, and I modded the BIOS to enable bifurcation so that one PCIe slot is split 4x4x4x4 and can take the ASUS M2 Hyper PCIe card, which holds up to 4 NVMe drives.

If I use the GNOME-DISKS tool, which can run a performance test, the best-case scenario is on the ASUS PCIe card with Samsung PM981 NVMe drives:

  • 3.3GB/s READ speed (as advertised)
  • 600MB/s WRITE speed (about 4 times less than advertised, with a really significant drop in performance once the cache gets filled, at about 40GB).

I soft-raided the Samsung PM981 NVMe drives on the ASUS PCIe card (a generic mdadm sketch of this kind of setup follows the figures below). Speeds are now as follows:

  • READ: 5.6GB/s (that is OK, even if not double the single drive);
  • WRITE: 1.2GB/s, which is exactly double the single-drive performance.
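For reference, a striped software RAID of two NVMe drives of this kind is usually assembled with mdadm roughly as follows; the device names, chunk size and mount point are assumptions, not values taken from this answer.

# Create a RAID 0 array from two NVMe drives (names are illustrative).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512K /dev/nvme0n1 /dev/nvme1n1
# Put a filesystem on it and mount it.
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/raid0
sudo mount /dev/md0 /mnt/raid0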

It is as if the kernel or the MoBo caps the speed at AHCI speed (as if it were a SATA drive).

Now, if I use the above method, the results are quite different:

dd if=/dev/zero of=tempfile bs=1M count=16384 conv=fdatasync,notrunc status=progress oflag=direct

15183380480 bytes (15 GB, 14 GiB) copied, 5 s, 3.0 GB/s
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 5.63686 s, 3.0 GB/s

dd if=tempfile of=/dev/null bs=1M count=4096 status=progress iflag=direct

4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 1.00056 s, 4.3 GB/s

So the two tools, GNOME-DISKS and dd, are totally inconsistent with each other.

In the real world: if I move a really large file (about 20GB) from one NVMe drive to another, I hardly get more than 850MB/s even on the soft-raided drives, which is really much, much slower than expected. In theory it should be 2 x 2400MB/s = 4800MB/s; in reality it is 6-7 times less.

If you ask me, I think there is a real problem either in the MoBo or in Linux.

I'll have to install Windows just to check whether the problem is with the MoBo or with the OS.

