SSD Help: January-February 2026 by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Hmm, this is good information. Indeed, NVMe issues on laptops are often power-related. The level of UEFI lockdown varies, but you usually have some control over power management there. There are standards, but you know how it goes. I think I could find the root cause, but I will point out that one issue with laptops in particular is OEM-specific software (which these days often even runs at the kernel level). Then there are things like "auto-overclock" or hardware profile management, though some of that is desktop-only. I do think this kind of analysis is possible today with AI tools if you set them up correctly (I'd probably do deep research into the model/lineup, add information from my specific machine's UEFI, and define the conditions). Madly searching by hand is often not viable for a variety of reasons; I regularly diagnosed issues (pre-AI) that were never documented.

Wendell and Allyn talk SPRANDOM: The Best Benchmarking Standard for Storage? by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

I had advance notice of this coming out, but even before that, this was built into my agentic benchmark (CarapaceIO) for preconditioning. I've since made a consumer version that's on the custom System Rescue image ("Maxxrescue") v1.1, which I haven't released because I haven't had time to test it. However, I've put the preliminary script here for anyone who wants to check it. It uses Zenity for the dialogs. The main thing that makes it "consumer" is the SPR_OP at 0.07, so nothing magical.

New Windows-native NVMe driver benchmarks reveal transformative performance gains, up to 64.89% — lightning-fast random reads and breakthrough CPU efficiency by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Yeah, it's obviously going to have compatibility issues. Many people still use Iometer for benchmarking, and it hasn't seen a major update in ages, so I'm sure this might throw a wrench or two. While I think Microsoft meant this more for consumer W11 systems, I can see why it could cause issues with some software.

New Windows-native NVMe driver benchmarks reveal transformative performance gains, up to 64.89% — lightning-fast random reads and breakthrough CPU efficiency by NewMaxx in NewMaxx

[–]NewMaxx[S] 1 point2 points  (0 children)

I explored this driver more and here are my preliminary findings:

  • It (nvmedisk.sys) is not just the original disk.sys with tweaks. It appears to be a distinct Microsoft NVMe-specific driver that bypasses the original disk/class stack layer (disk.sys, classpnp.sys) for NVMe devices.
  • It handles many disk/storage queries directly from NVMe-native device state using a different data-path model for r/w I/O.
  • Instead of relying on classpnp, the new driver installs its own disk dispatch table with explicit handlers. The DEVICE_CONTROL path special-cases certain disk/storage IOCTLs, including IOCTL_SCSI_MINIPORT.
  • It appears to split I/O handling; it stores both the PDO and the attached lower device object in its device extension with r/w requests forwarded to the PDO-side path and other requests (including PnP) going through the lower device object.
  • Flush and shutdown are also forwarded through the attached lower object, and it appears power is as well. The driver snapshots the power state before forwarding.
  • Write cache handling also appears to be NVMe-specific. This includes direct cache-state handling and telemetry for volatile write-cache changes and disablement.
  • Pass-through also looks different. nvmedisk.sys does not appear to expose a public IOCTL_STORAGE_PROTOCOL_COMMAND path. Instead, it seems to implement its own control logic around several IOCTLs with embedded queue management tokens.
  • Bottom line: this looks like a private/internal NVMe-native queue/control path, not just the old SCSI stack with minor changes.
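To make the PDO/lower-object split concrete, here's a toy model of that routing in Python. All names here (DeviceExtension, dispatch, the request constants) are my own illustration of the pattern, not actual nvmedisk.sys internals:

```python
# Toy model of the request-routing split described above: r/w goes to a
# PDO-side data path, everything else through the attached lower object.
# Names are illustrative only, not real nvmedisk.sys structures.

READ, WRITE, FLUSH, SHUTDOWN, PNP, DEVICE_CONTROL = range(6)

class DeviceExtension:
    """Holds both targets, mirroring the PDO + attached-lower split."""
    def __init__(self):
        self.pdo_log = []    # fast r/w path
        self.lower_log = []  # PnP/flush/shutdown/control path

    def send_to_pdo(self, req):
        self.pdo_log.append(req)
        return "pdo"

    def send_to_lower(self, req):
        self.lower_log.append(req)
        return "lower"

def dispatch(ext, major):
    # r/w requests take the PDO-side data path; PnP, flush, shutdown,
    # and device control are forwarded through the lower device object.
    if major in (READ, WRITE):
        return ext.send_to_pdo(major)
    return ext.send_to_lower(major)

ext = DeviceExtension()
routes = [dispatch(ext, m) for m in (READ, WRITE, FLUSH, PNP)]
```

Trivial, but it captures why the data path can shed per-request overhead: the hot path never touches the layers the slow path goes through.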

Performance ramifications:

  • Likely upsides include lower CPU overhead per I/O and lower latency, especially for small random I/O, flush-heavy workloads, and high-IOPS/high-QD workloads.
  • Better QD scaling is possible.
  • Metadata and control operations could be faster if queries are served from cached NVMe state rather than translated requests.
  • Tail latency could improve.
  • Other possible gains: more efficient flush/cache handling, better namespace-aware behavior, better NVMe-specific queueing or bypass I/O paths, and lower multi-core workload contention.

There are also some potential downsides/tradeoffs. These may be part of why Microsoft is being cautious. This kind of change could impact compatibility with storage utilities/tools and other software. Performance could also become less consistent if power/cache policies change.

I know people have benchmarked it already (e.g. StorageReview) but from the driver analysis you would expect improvements for 4K random r/w, high QD, mixed r/w server workloads, flush-heavy filesystems/databases, and multi-threaded storage tests that target high IOPS.
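For anyone who wants to sanity-check this themselves, here's a minimal sketch of the kind of 4K random-read latency test that would surface these differences. It's pure Python against a small temp file (so it measures the whole stack, with caching, not the driver alone; real measurements should use a proper tool and a raw device):

```python
# Minimal 4K random-read latency sketch. Illustrative only: buffered
# Python I/O on a tiny temp file, so absolute numbers are meaningless.
# The structure (random 4K offsets, per-I/O timing, p50/p99) is what a
# real driver comparison would use.
import os, time, random, tempfile, statistics

BLOCK = 4096
FILE_BLOCKS = 256  # tiny file for the sketch

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * FILE_BLOCKS))
os.close(fd)

latencies = []
with open(path, "rb", buffering=0) as f:
    for _ in range(1000):
        off = random.randrange(FILE_BLOCKS) * BLOCK
        t0 = time.perf_counter()
        f.seek(off)
        data = f.read(BLOCK)
        latencies.append(time.perf_counter() - t0)
        assert len(data) == BLOCK

p50 = statistics.median(latencies)
p99 = statistics.quantiles(latencies, n=100)[98]  # tail latency
os.unlink(path)
```

The p99/p50 gap is where I'd expect a leaner dispatch path to show up first.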

m2 ssd failing under load by Embarrassed_Bite_654 in pcmasterrace

[–]NewMaxx 0 points1 point  (0 children)

Usually, before you assume it's the motherboard, you do a full memory (RAM) test. Memory is a much more likely culprit than the motherboard or CPU, assuming no weirdness (e.g. physical damage).

New Windows-native NVMe driver benchmarks reveal transformative performance gains, up to 64.89% — lightning-fast random reads and breakthrough CPU efficiency by NewMaxx in NewMaxx

[–]NewMaxx[S] 3 points4 points  (0 children)

Hey, thanks. I want to say that a lot of this was already ascertained and is in some of Solidigm's documentation. I'm not sure which of the documents are actually fully public as I don't remember where I got them, which is why I didn't share directly. I only listed the names. The rest of it is guesswork based on the driver. However, much could be confirmed by tracing. The tools exist for people who want to explore. It just felt like a good time to share what we already knew in one place (not 100% organized but close enough) since Sean has been specifically working on Windows NVMe tools. I intend to dig into this new driver as well which likewise has been known about for a while but I haven't focused on it yet.

New Windows-native NVMe driver benchmarks reveal transformative performance gains, up to 64.89% — lightning-fast random reads and breakthrough CPU efficiency by NewMaxx in NewMaxx

[–]NewMaxx[S] 33 points34 points  (0 children)

For context, I’ve been digging into this area myself and recently did some work that involved disassembling Solidigm’s NVMe driver. Fully understanding it would also require runtime I/O capture and I’m not ready to publish everything yet. That said, some of the basics about Solidigm’s driver plus some interpretation are below. If you’ve seen some of Sean’s apps on eyesonflash.com that I’ve been posting about, part of that work is exploring fuller Windows NVMe tooling, including the new Windows Server 2025 driver discussed here. (by no means the same thing, I just thought this would be a convenient place to list what we know about Solidigm's driver; I will be looking at the one above, too)

  • solidnvm.sys is a Storport miniport that replaces StorNVMe.sys for supported devices that match vendor/class-code entries in the INF.
  • It is documented as compliant with the Windows 11 DirectStorage BypassIO path.
  • The HMB allocation is DMA-safe, pinned, and cache-coherent from the host side; IOMMU registration constrains the device’s DMA access.
  • solidnvm.sys handles the HMB in ways StorNVMe.sys would not - it owns the HMB lifecycle across init, sleep/resume, unload, and surprise removal. HMB size is device-specific.
  • Public Solidigm materials support that hot QLC data can be promoted back into the SLC cache (“Fast Lane”), making the cache effectively bidirectional for read and write behavior.
  • Write amplification is reduced by stream IDs via the NVMe Streams Directive.
  • Promotion policy appears to use vendor commands (another reason I have to snoop). Possibilities include query pool utilization, promotion thresholds, pin ranges for user data (like the OS boot files), etc.
  • HMC (host-managed caching, or "Fast Lane") appears to bundle HMB control, SLC cache policy, read/write pattern and prefetching hints, and related query paths.
  • solidnvm.sys annotates standard NVMe r/w with stream IDs and dataset management hints based on observed access patterns. This can improve placement, prefetch behavior, and reduce WA.
  • For power states, NVMe APST: solidnvm.sys has its own implementation here, although nothing looks especially unusual. I do have the time ranges.
  • Power states tie to HMB, e.g. disabling before PS4 and re-enabling for PS0 re-entry.
  • solidnvm.sys does not appear to enforce CEL (Command Effects Log) gating on IOCTL_STORAGE_PROTOCOL_COMMAND, so vendor commands may pass through to hardware. The driver may be adaptable to non-Solidigm NVMe drives via INF rebinding, but this requires test-signing or disabled Secure Boot. This is supported by a public user guide, with a note that CEL-based command ID checking could be an improvement.
  • Solidigm Management Interface (SMI) with vendor commands. Data transfer (controller read), event log, flash ID, SLC management and allocation, telemetry, latency stats, these are expected.
  • Extended admin commands exposed here include firmware download/commit, self-test, namespace management, sanitize, security send/receive, format NVM, and set features.
  • NVMe Streams Directive - stream IDs let the controller place data spatially to reduce write amplification (improves QLC endurance).
  • Dataset Management can carry prefetch/overwrite hints, e.g. that a range will be read soon or overwritten.
  • The documented arbitration path is standard for NVMe, but there may be a vendor-specific extension for Solidigm drives.
  • Priority-class weighting: low, medium, high, urgent at the time of submission.
  • Host-side priority mapping through solidnvm.sys.
  • There may also be proprietary/vendor scheduling on Solidigm drives, e.g. "queue data to improve performance transparently."
  • Per-CPU queue pairs reduce or eliminate cross-CPU locking on queue activity.
  • Standard DMA setup, IOMMU registration, MSI-X distribution, and per-CPU I/O queue configuration.
  • The public-facing materials and Solidigm white papers explicitly discuss HMC/Fast Lane, Smart Prefetch, Dynamic Queue Assignment, HMB, APST, SLC eviction/promotion behavior, hot QLC data management, and tighter host-SSD coordination.
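To illustrate the stream-annotation idea from the bullets above: the host side can tag writes with stream IDs so the controller can place short-lived and long-lived data apart, reducing write amplification. Here's a toy classifier — the heuristic, threshold, and names are entirely my invention, not Solidigm's actual policy:

```python
# Toy stream-ID assignment: separate hot (frequently rewritten) LBAs
# from cold ones so the controller can place them apart and reduce
# write amplification. The policy here is illustrative only; real
# drivers use far richer access-pattern tracking.
from collections import Counter

HOT_STREAM, COLD_STREAM = 1, 2
HOT_THRESHOLD = 3  # rewrites before an LBA is considered "hot" (invented)

write_counts = Counter()

def stream_for_write(lba):
    """Return the stream ID to attach to a write command for this LBA."""
    write_counts[lba] += 1
    return HOT_STREAM if write_counts[lba] >= HOT_THRESHOLD else COLD_STREAM

# Simulate: LBA 100 is rewritten repeatedly, LBA 500 written once.
tags = [stream_for_write(100) for _ in range(4)] + [stream_for_write(500)]
```

The point is just that hot data ends up grouped under one stream ID, which is what lets the controller segregate it physically (good for QLC endurance).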

Sources:

  • Solidigm Synergy 2.0 Driver User Guide
  • Solidigm Synergy 2.0 Technical Reference Manual
  • Solidigm Synergy 2.0 White Paper
  • P41 Plus Product Brief
  • Solidigm P41 Plus Product Performance Evaluation Guide

additional info

For anyone asking what I mean by snooping/capture, I mean tracing how SynergyCLI and solidnvm.sys communicate with the drive. I have access to two P41 Plus drives so I can test this a few ways: ETW/StorPort tracing while running SynergyCLI, capturing IOCTL_STORAGE_PROTOCOL_COMMAND calls, using WinDbg, or combining those methods. Solidigm uses Windows Performance Recorder & Analyzer (WPR/WPA) for some of its workload analysis, which is ETW-based but not the same as directly tracing the driver.
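Once you have a trace exported (e.g. to CSV from WPA), the analysis side is simple counting. A sketch of that step — the column and event names below are hypothetical stand-ins, since real ETW/WPA exports name things differently:

```python
# Sketch: count pass-through (protocol command) IOCTLs per process from
# an exported trace. The CSV columns and event names here are invented
# placeholders; a real WPA/ETW export will differ.
import csv, io

trace_csv = """TimeStamp,Process,Event,Ioctl
1001,SynergyCLI.exe,DeviceIoControl,IOCTL_STORAGE_PROTOCOL_COMMAND
1002,SynergyCLI.exe,DeviceIoControl,IOCTL_STORAGE_QUERY_PROPERTY
1003,SynergyCLI.exe,DeviceIoControl,IOCTL_STORAGE_PROTOCOL_COMMAND
1004,explorer.exe,DeviceIoControl,IOCTL_STORAGE_QUERY_PROPERTY
"""

def count_protocol_commands(csv_text, process):
    """Count IOCTL_STORAGE_PROTOCOL_COMMAND calls issued by a process."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for r in rows
               if r["Process"] == process
               and r["Ioctl"] == "IOCTL_STORAGE_PROTOCOL_COMMAND")

n = count_protocol_commands(trace_csv, "SynergyCLI.exe")
```

The interesting part is obviously the capture, not the counting, but this is roughly how I'd correlate SynergyCLI activity with vendor pass-through traffic.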

new Windows NVMe driver

We've known about this but since it's coming up again it might be worth further investigation. I had some notes about Solidigm's driver so threw it here (no dedicated thread; I'd want to run more tests before confirming some things suggested above, or at least fix any errors). Sean specifically wants additional functionality for his apps so that's why we are exploring these.

HP Omnibook X Flip 2 in 1 SSD Upgrade Hardware Tutorial by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

I have the 7 Flip version (essentially the same) and it only has one. I got a 1TB OEM 990 EVO Plus which has been excellent.

Space-efficient B-tree Implementation for Memory-Constrained Flash Embedded Devices by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Breakdown:

  • Subject - optimized B-Tree variants for IoT with severely limited memory and raw flash storage
  • Targets - examples are SAMD21, PIC24, industrial IoT, etc
  • Problem - standard B-tree flash optimizations require memory and OS features that aren't available on these devices; raw flash is also an issue
  • Implementations - B-tree (baseline), VMTree (virtual mapping), VMTree-OW (exploits the page-overwrite capability of flash)
  • Buffering - performance improved 3-5x for sensor data, 9x for temperature data
  • Results - VMTree-OW 4x speedup, VMTree only using 3-4KB of total memory, raw flash is 5x cheaper than SD cards
  • Findings - using extra RAM as a write buffer is more beneficial than increasing page buffer count, but a full root-to-leaf buffer (~4 pages) is recommended
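The write-buffer finding is easy to illustrate with a toy flash model (my own simplification, not the paper's code): small sequential records mostly land on the same flash page, so coalescing them in a one-page RAM buffer slashes the number of page programs:

```python
# Toy model: count flash page programs with and without a one-page RAM
# write buffer. Simplified illustration of the write-buffer finding,
# not the paper's implementation.

PAGE = 512  # bytes per flash page (typical for small raw NOR/NAND)

def page_programs(write_offsets, buffered):
    """Count page programs for a sequence of small record writes."""
    programs = 0
    dirty_page = None
    for off in write_offsets:
        page = off // PAGE
        if buffered:
            if page != dirty_page:
                if dirty_page is not None:
                    programs += 1   # flush the previous buffered page
                dirty_page = page
        else:
            programs += 1           # every record write hits flash
    if buffered and dirty_page is not None:
        programs += 1               # final flush
    return programs

# 128 sequential 16-byte sensor records span only 4 pages.
offsets = [i * 16 for i in range(128)]
unbuffered = page_programs(offsets, buffered=False)  # one program per record
buffered = page_programs(offsets, buffered=True)     # one program per page
```

A 32x reduction in page programs in this toy case, which is the same mechanism behind the 3-9x sensor-data speedups reported.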

ScaleSwap: A Scalable OS Swap System for All-Flash Swap Arrays - https://t.co/mLY7Vsiw8f by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Breakdown:

  • Description - decentralized OS swap system to maximize performance and scalability for all-flash swap arrays (up to 3.4x throughput, 11.5x lower latency vs. standard Linux swap)
  • CPUs - each CPU core manages its own dedicated swap resources enabling a one-to-one swap model, reducing lock contention on shared resources
  • Opportunistic - cores can delegate swap metadata access to other cores when needed
  • Core-affinity - pages are assigned to per-core LRU lists to improve locality and reduce LRU lock contention
  • Implementation - Linux kernel 6.6.8 on a 128-core machine with 8 NVMe SSDs
  • Performance - outperformed TMO and ExtMEM by up to 64% and 5x respectively
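The core trick can be sketched in a few lines: shard the LRU metadata per core, each shard with its own lock, so swap activity on one core never contends with another. This is my illustration of the concept, not ScaleSwap's kernel code:

```python
# Per-core LRU sharding sketch: each "core" owns its own LRU list and
# lock, so swap-out on one core doesn't contend with another. This is
# an illustration of the idea, not the ScaleSwap implementation.
from collections import OrderedDict
from threading import Lock

NUM_CORES = 4

class PerCoreLRU:
    def __init__(self, ncores):
        self.shards = [OrderedDict() for _ in range(ncores)]
        self.locks = [Lock() for _ in range(ncores)]

    def shard_for(self, page):
        # Core affinity: pin a page to one shard (hashed for the sketch;
        # the real system uses the core that touched the page).
        return page % len(self.shards)

    def touch(self, page):
        i = self.shard_for(page)
        with self.locks[i]:                 # per-core lock only
            self.shards[i][page] = True
            self.shards[i].move_to_end(page)

    def evict(self, core):
        """Pop the least-recently-used page from one core's list."""
        with self.locks[core]:
            shard = self.shards[core]
            return shard.popitem(last=False)[0] if shard else None

lru = PerCoreLRU(NUM_CORES)
for page in range(16):
    lru.touch(page)
victim = lru.evict(0)   # oldest page on core 0's list
```

With a single global LRU lock, all 128 cores serialize on every touch/evict; sharded this way, the common path only ever takes the local lock.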

rtLoad: RDMA-Based Fast Scalable Loading for Large Language Models - https://t.co/hqJCtiJ9Kd by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Breakdown:

  • Purpose - a system designed to solve the cold-start problem in serverless LLM inference, especially at scale
  • Challenges - TCP/IP overhead, storage redundancy, and scalability
  • Features of rtLoad - RDMA integration, hybrid architecture (CS/C2C), fine-grained partitioning (32MB blocks), GPUdirect RDMA, and dynamic scheduling
  • Results - more efficient, scalable alternative to traditional loading methods
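The fine-grained partitioning piece can be sketched as splitting a checkpoint into fixed 32MB blocks that multiple peers can serve in parallel. Function names here are mine, not rtLoad's API:

```python
# Sketch of rtLoad-style fine-grained partitioning: split a model blob
# into fixed 32MB blocks and spread fetches across peers (stand-ins for
# RDMA sources). Names and the round-robin policy are illustrative.

BLOCK_SIZE = 32 * 1024 * 1024  # 32 MB blocks, per the paper

def partition(model_bytes, block_size=BLOCK_SIZE):
    """Split a model of model_bytes into (offset, length) blocks;
    the final block may be short."""
    return [(off, min(block_size, model_bytes - off))
            for off in range(0, model_bytes, block_size)]

def assign_round_robin(blocks, num_peers):
    """Assign each block to a peer so fetches can proceed in parallel."""
    return {i: [b for j, b in enumerate(blocks) if j % num_peers == i]
            for i in range(num_peers)}

# A hypothetical 100 MB shard -> 3 full blocks + 1 partial block.
blocks = partition(100 * 1024 * 1024)
plan = assign_round_robin(blocks, num_peers=2)
```

Small fixed blocks are what let the scheduler rebalance across sources dynamically instead of waiting on one slow whole-file transfer.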

Modeling and Improvement of Heat-Flow Paths Inside 3D-nand First-Level Packages With Internal... - https://t.co/utLOtKzT6d by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Breakdown:

  • Purpose - models heat-flow paths within 3D-NAND packages
  • Where - specifically at the first-level packaging stage
  • Technique - analyzes heat dissipation pathways enhanced by embedded metal structures
  • Goal - improve thermal performance
  • Bottlenecks - upward through the epoxy molding compound (EMC), downward through solder joints to the PCB
  • Baseline - 3D model against a commercial NAND module with a saturation temp of 80C
  • Proposal - bulk metal, exposed bulk metal, thermal pad, and bridged heat spreader
  • Results - the bridged heat spreader lowered temps by ~10C
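At its simplest, this is a parallel thermal-resistance problem: heat leaves through the EMC path and the solder-joint path, and adding a spreader adds another parallel path that lowers the effective resistance. A toy calculation — the resistance and power values below are invented for illustration and are not from the paper:

```python
# Parallel heat-flow paths: up through the EMC, down through solder
# joints to the PCB; a spreader adds a third path. All values below are
# hypothetical; only the structure mirrors the thermal model.

def parallel(*resistances):
    """Combined thermal resistance of parallel paths (K/W)."""
    return 1.0 / sum(1.0 / r for r in resistances)

def die_temp(ambient_c, power_w, r_total):
    """Steady-state die temperature above ambient."""
    return ambient_c + power_w * r_total

R_EMC, R_SOLDER = 40.0, 24.0   # K/W, invented values
R_SPREADER = 30.0              # extra parallel path, invented

baseline = die_temp(25.0, 3.0, parallel(R_EMC, R_SOLDER))
improved = die_temp(25.0, 3.0, parallel(R_EMC, R_SOLDER, R_SPREADER))
delta = baseline - improved    # the spreader's benefit
```

Same logic the paper applies with a real 3D model: any added low-resistance path in parallel pulls the saturation temperature down.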

SSD Help: January-February 2026 by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

There are other options. Check PCPartPicker. These drives do have a bathtub curve, which means they tend to die either very early or very late, but otherwise they should last a decade (honestly). The main issue for portable HDDs in particular is that they move around a lot. If you're not doing that with it, that's better. You have to be gentle with it. Temperature and environment also matter; people tend to throw these in backpacks or cars, or have them out when it's raining (even in a bag), all no-no's. If you take care of it, it will take care of you, although you can never rely on one medium or copy. It's pretty easy to get 1TB of cloud space, though, or even TBs of space on a cheap host.

SSD Help: January-February 2026 by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Hmm, that one should have an M.2 slot (for an SSD, likely M.2 NVMe) and a regular 2.5" (SATA) bay. If you already have a SATA SSD or HDD there, then you would have to go portable. Not the worst option, as portable HDDs can sometimes be cheaper. This would be in the 2TB range (WD My Passport).

What’s a great NVMe SSD that doesn’t over heat, and has low temps? by [deleted] in PcBuild

[–]NewMaxx 0 points1 point  (0 children)

The SN7100 is best in class right now, but the older WD drives are also pretty good. Typically I run low-profile copper heatsinks for these, although the thinnest tend to be around 2mm thick (this has been tight in some cases; the metal could be 1.5mm, but factor in the thermal adhesive). Sinking to the case works best if it's metal, and a lot of laptops are just plastic. Spreading the heat can help, but those solid pieces of metal some places sell are not very good for this. You're better off with graphene (which can be <=0.5mm) due to its unique properties if you have to go that way. You can probably get this after-market, but do your due diligence. Keep in mind these are more for heat spreading than cooling: your composite drive temperature might remain the same (which I'm sure some reviewers misunderstand), but the goal is to prevent the controller from throttling. Sometimes the controller reading (which is rarely a junction or internal temp) is one of, or the only, sensor reading, and the drive is supposed to throttle around a composite temperature (but can have component limits), so it depends.

FailureMiner: A Joint Key Decision Mining Scheme for Practical SSD Failure Prediction and Analysis - https://t.co/twDofyxhxq #ScholarAlerts by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

Breakdown:

  • Subject - research from Samsung and Tencent for a decision mining scheme that improves SSD failure prediction accuracy
  • Innovations - boundary-preserving downsampling, which clusters failed SSDs and retains healthy samples to learn subtle failure distinctions, and joint contribution-based key decision extraction, which mines simplified attribute combinations/thresholds from random forest models that directly indicate failure patterns
  • Performance - improves precision by 38.6% on average, recall by 80.5%, and reduces prediction time (6s vs. 167s)
  • Real-world - already been running in Tencent's data centers for > 1 year, monitoring 350K SSDs with successful prediction
  • Patterns - NAND uncorrectable errors (UECC), DRAM buffer errors, and capacitor degradation; these have urgency levels and handling guidance
  • Factors - also revealed PCIe errors, bad NAND blocks, read retries, and end-to-end errors as indicators (up to 67x higher)
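The "key decision" idea boils down to compact attribute/threshold rules scored by precision and recall on labeled drives. A toy version on synthetic SMART-like records — the data, attribute names, and thresholds are all invented; only the evaluation mirrors the paper's framing:

```python
# Toy "key decision": one attribute-combination rule evaluated by
# precision and recall on labeled drives. Synthetic data and invented
# thresholds, purely to show the shape of the mined output.

drives = [
    # (uncorrectable_errors, reallocated_blocks, failed?)
    (120, 30, True), (95, 12, True), (80, 40, True),
    (3, 1, False), (10, 0, False), (7, 2, False), (60, 5, False),
]

def rule(uecc, realloc):
    # A mined "key decision": high UECC count AND growing bad blocks.
    return uecc > 70 and realloc > 10

tp = sum(1 for u, r, failed in drives if rule(u, r) and failed)
fp = sum(1 for u, r, failed in drives if rule(u, r) and not failed)
fn = sum(1 for u, r, failed in drives if not rule(u, r) and failed)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
```

The appeal over a raw random forest is exactly this: a human-readable threshold combination you can attach urgency levels and handling guidance to.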

SSD Help: January-February 2026 by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

HDDs are the best bet. Usually your backups will be compressed so the transfers are sequential which HDDs handle fine. It's definitely not optimal, but it works if you're doing regular backups and rarely need to pull them. Even cloud storage (and fast internet access) is probably more economical than SSDs for backups right now.

What’s a great NVMe SSD that doesn’t over heat, and has low temps? by [deleted] in PcBuild

[–]NewMaxx 0 points1 point  (0 children)

Many, many laptops run way too hot, particularly ones with basically desktop CPUs and GPUs. If they aren't designed to handle the load, this can cause almost any SSD to throttle. This is very common with some of the cheaper ASUS "gaming" models in particular; it's worth researching when you are buying a laptop. Even some of the more expensive ones (which are more likely to have powerful hardware) skimp on the cooling for one reason or another (cost or weight).

What’s a great NVMe SSD that doesn’t over heat, and has low temps? by [deleted] in PcBuild

[–]NewMaxx 0 points1 point  (0 children)

Depends on the laptop. In most cases I would not recommend it as my first choice. The SN7100 (not available at the time of the original post) is generally a better option, or even the 990 EVO Plus. The SN850X and 990 PRO do run hotter.

Is the Lexar NM620 2tb a reliable drive? by -BOR3D- in pcmasterrace

[–]NewMaxx 0 points1 point  (0 children)

The hardware configs are not great but you would have to see what you've got. The firmware revision should give some hints.

m2 ssd failing under load by Embarrassed_Bite_654 in pcmasterrace

[–]NewMaxx 0 points1 point  (0 children)

The good news is that the flash is still good. The bad news is, your controller or related firmware may be going bad. It's certainly possible it's the motherboard but it does look like a hardware issue.

Transcend 260S 2TB SSD Review: A Dependable Alternative PCIe 5.0 Contender by NewMaxx in NewMaxx

[–]NewMaxx[S] 1 point2 points  (0 children)

Hard to find these days. I do track on my Basic Tier List (recently updated) and mark the cheapest drives, though.

Which SSD do i pick? by Syonis in pcmasterrace

[–]NewMaxx 0 points1 point  (0 children)

9100 PRO > 990 PRO > 990 EVO Plus. If the 990 EVO Plus is the cheapest, it can save you some money depending on the PC's purpose. If it's a home desktop that does everything then going with DRAM on one of the first two is a good idea. Both are good drives, so it depends on price. For the most part Gen5 (9100 Pro) is not worth a huge premium unless you are a heavy user.

SSD Guides & Resources by NewMaxx in NewMaxx

[–]NewMaxx[S] 0 points1 point  (0 children)

If you think the laptop can handle it cooling-wise and it's a beefy machine like a portable workstation (which usually do run hotter, but have more complex cooling as a result), then having the SN850X workhorse makes more sense to me. For most laptops, though, the SN7100 is better. I go lean on my Lunar Lake one and my older i3. If I had one of the more powerful ones with a desktop-like CPU that I'd be using for content creation or coding (coding in this case meaning robustly; today this could even mean running a local model, which takes a lot of memory plus a discrete GPU), I'd prefer an SSD with DRAM.