Author Topic: Unsatisfactory performance of SSD drives  (Read 5855 times)

DKnoto

  • Jr. Member
  • **
  • Posts: 82
  • Karma: +13/-0
Unsatisfactory performance of SSD drives
« on: November 01, 2022, 08:57:16 am »
I switched to the Talos II from a Dell Precision 7730 laptop. In the laptop I had an older-generation Samsung SSD,
a 970 Pro, connected via an M.2 slot to PCIe 3.0. In the Talos II I use a 980 Pro connected to PCIe 4.0 via an ICY BOX
PCIe 4.0 x4 to M.2 PCIe NVMe adapter (up to 64 Gbit/s). I expected read transfers of 5-6 GB/s and got 0.9 GB/s.
That's more than three times slower than the laptop. Screenshots attached.
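
For scale: PCIe 4.0 x4 runs at 16 GT/s per lane with 128b/130b encoding, so the raw link tops out just under 8 GB/s, which makes 5-6 GB/s a realistic expectation:

Code:
# 16 GT/s/lane x 4 lanes x 128/130 encoding, / 8 bits per byte
$ echo '16*10^9 * 4 * 128/130 / 8' | bc -l
7876923076.92307692307692307692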

Is there any way to improve this?
Desktop: Talos II T2P9S01 REV 1.01 | IBM Power 9/18c DD2.3, 02CY646 | AMD Radeon Pro WX7100 | 64GB RAM | SSD 1TB

sharkcz

  • Newbie
  • *
  • Posts: 19
  • Karma: +3/-0
Re: Unsatisfactory performance of SSD drives
« Reply #1 on: November 08, 2022, 04:35:08 am »
Yeah, 0.9 GB/s is quite low. What does "sudo lspci -vv" say about "Link speed" for the SSD device?
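
Something like this should cut the output down to just the link lines (the device address here is only an example; look yours up in plain lspci output first):

Code:
sudo lspci -vv -s 0001:01:00.0 | grep -E 'LnkCap|LnkSta'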

For the record, I am getting 1.4 GB/s (via hdparm -t) on my Samsung SSD 970 EVO Plus 1TB connected via a PCIe 3.0 switch card.

DKnoto

  • Jr. Member
  • **
  • Posts: 82
  • Karma: +13/-0
Re: Unsatisfactory performance of SSD drives
« Reply #2 on: November 08, 2022, 12:34:13 pm »
Code:

# lspci -s 0001:01:00.0 -vv

0001:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a801
Device tree node: /sys/firmware/devicetree/base/pciex@600c3c0100000/pci@0/mass-storage@0
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 33
NUMA node: 0
IOMMU group: 1
Region 0: Memory at 600c080000000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
Address: 0000000000000000  Data: 0000
Capabilities: [70] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 16GT/s (ok), Width x4 (ok)
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- TPHComp- ExtTPHComp-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis+ LTR- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: Upstream Port
Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
Vector table: BAR=0 offset=00003000
PBA: BAR=0 offset=00002000
Capabilities: [100 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr+ BadTLP+ BadDLLP+ Rollover- Timeout+ AdvNonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn+ ECRCChkCap+ ECRCChkEn+
MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [168 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 0
ARICtl: MFVC- ACS-, Function Group: 0
Capabilities: [178 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [198 v1] Physical Layer 16.0 GT/s <?>
Capabilities: [1bc v1] Lane Margining at the Receiver <?>
Capabilities: [214 v1] Latency Tolerance Reporting
Max snoop latency: 0ns
Max no snoop latency: 0ns
Capabilities: [21c v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
  PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
   T_CommonMode=0us LTR1.2_Threshold=0ns
L1SubCtl2: T_PwrOn=10us
Capabilities: [3a0 v1] Data Link Feature <?>
Kernel driver in use: nvme
Kernel modules: nvme

hdparm gives better results, but it is a single reading:

Code:
# hdparm -t /dev/nvme0n1

/dev/nvme0n1:
 Timing buffered disk reads: 12676 MB in  3.00 seconds = 4225.32 MB/sec
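
A quick way to get more than a single reading is simply to repeat the test, e.g.:

Code:
# for i in 1 2 3; do hdparm -t /dev/nvme0n1 | grep Timing; done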

Desktop: Talos II T2P9S01 REV 1.01 | IBM Power 9/18c DD2.3, 02CY646 | AMD Radeon Pro WX7100 | 64GB RAM | SSD 1TB

MPC7500

  • Hero Member
  • *****
  • Posts: 587
  • Karma: +41/-1
    • Twitter
Re: Unsatisfactory performance of SSD drives
« Reply #3 on: November 09, 2022, 03:30:40 pm »
I have two results:

Crucial BX500, 480GB SATA SSD
Code:
hdparm -t /dev/sda1

/dev/sda1:
Timing buffered disk reads: 1556 MB in  3.00 seconds = 518.13 MB/sec

via GNOME disk-utility: 545 MB/sec


Samsung 970 EVO Plus NVMe M2, 1TB
Code:
hdparm -t /dev/nvme0n1

/dev/nvme0n1:
Timing buffered disk reads: 6924 MB in  3.00 seconds = 2307.39 MB/sec

via GNOME disk-utility: 3.5 GB/sec

Kernel 6.0.5 on VoidLinux

Code:
0002:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller (rev 11)
« Last Edit: November 09, 2022, 03:47:00 pm by MPC7500 »

Woof

  • Jr. Member
  • **
  • Posts: 77
  • Karma: +20/-0
Re: Unsatisfactory performance of SSD drives
« Reply #4 on: November 30, 2022, 04:18:56 am »
Hmm, this is interesting. Having tried the same on my Talos II with Debian, I'm getting numbers slower than expected. With a Samsung NVMe PM9A1 (think of it as a 980 Pro with a U.2 connector) I see 3.0 GB/s in Disk Utility and:

Code:
sudo /sbin/hdparm -t /dev/nvme0n1

/dev/nvme0n1:
 Timing buffered disk reads: 2512 MB in  3.00 seconds = 837.20 MB/sec

This is connected via PCIe x4. We have the same HighPoint card and PM9A1 disks on x64 systems with Debian to try later.
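
If fio is available, a direct-I/O sequential read should be a fairer comparison than hdparm's buffered reads (device name assumed; --readonly guards against accidental writes):

Code:
sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly \
  --rw=read --bs=1M --iodepth=16 --ioengine=libaio \
  --direct=1 --runtime=10 --time_based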

DKnoto

  • Jr. Member
  • **
  • Posts: 82
  • Karma: +13/-0
Re: Unsatisfactory performance of SSD drives
« Reply #5 on: December 01, 2022, 07:22:41 am »
Good news: Fedora 36 has made amazing progress in this area over the past month.

Kernel 6.0.9: ~30% increase in read performance over 6.0.5.

Kernel 6.0.10: ~425% increase in read performance over 6.0.9 :)
Desktop: Talos II T2P9S01 REV 1.01 | IBM Power 9/18c DD2.3, 02CY646 | AMD Radeon Pro WX7100 | 64GB RAM | SSD 1TB

DKnoto

  • Jr. Member
  • **
  • Posts: 82
  • Karma: +13/-0
Re: Unsatisfactory performance of SSD drives
« Reply #6 on: February 26, 2023, 08:00:12 am »
After two months of testing, I can present how the performance of my SSD changed
depending on the kernel version. I didn't expect there to be so much variability.
The results are shown as a graph in the attached image ;)
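
For anyone who wants to repeat this kind of tracking, a minimal sketch that appends one line per run, tagged with the running kernel (the results file name is arbitrary):

Code:
# echo "$(uname -r): $(hdparm -t /dev/nvme0n1 | grep Timing)" >> ssd-results.txt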
Desktop: Talos II T2P9S01 REV 1.01 | IBM Power 9/18c DD2.3, 02CY646 | AMD Radeon Pro WX7100 | 64GB RAM | SSD 1TB

Woof

  • Jr. Member
  • **
  • Posts: 77
  • Karma: +20/-0
Re: Unsatisfactory performance of SSD drives
« Reply #7 on: February 26, 2023, 08:20:25 am »
Really useful graph, thanks for taking the time!

vikings.thum

  • Newbie
  • *
  • Posts: 43
  • Karma: +17/-0
    • Vikings
Re: Unsatisfactory performance of SSD drives
« Reply #8 on: March 07, 2023, 04:35:42 am »
Disk I/O benchmarks are affected by the page cache, so use direct I/O if you don't want that: --direct for hdparm, and iflag=direct (for read tests) or oflag=direct (for write tests) with dd.
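
For example (device and sizes are placeholders; the write test targets a scratch file so it cannot clobber the disk):

Code:
# cache-free read tests
sudo hdparm -t --direct /dev/nvme0n1
sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct
# cache-free write test to a scratch file
dd if=/dev/zero of=./ddtest.bin bs=1M count=1024 oflag=direct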
https://shop.vikings.net
XMPP: thum@jabber.vikings.net
Libera.Chat IRC: #vikings (handle: 'thum')

DKnoto

  • Jr. Member
  • **
  • Posts: 82
  • Karma: +13/-0
Re: Unsatisfactory performance of SSD drives
« Reply #9 on: April 16, 2023, 10:54:57 am »
Today on Fedora 37, I upgraded the kernel to version 6.2.10-200. Not an experience I can
recommend: it is the slowest kernel since I started taking systematic measurements :( .
The results are in the linked graph:
Desktop: Talos II T2P9S01 REV 1.01 | IBM Power 9/18c DD2.3, 02CY646 | AMD Radeon Pro WX7100 | 64GB RAM | SSD 1TB

tle

  • Sr. Member
  • ****
  • Posts: 463
  • Karma: +53/-0
    • Trung's Personal Website
Re: Unsatisfactory performance of SSD drives
« Reply #10 on: September 19, 2023, 06:18:01 am »
That’s intriguing. Can you try 6.5.0?
Faithful Linux enthusiast

My Raptor Blackbird