Hosting marketing pages love a benchmark chart. NVMe drives are 8-16x faster than SATA SSDs in raw throughput, the chart says. Therefore your site will be 8-16x faster, the implication goes. This is rarely true and the reasons are interesting.
I run our storage infrastructure. I've measured this on real customer workloads. Here's what actually happens when you move a typical website from SATA SSD to NVMe.
What the numbers actually say
The headline benchmark numbers are real. A modern SATA SSD does ~550 MB/s sequential read at the drive boundary. A modern NVMe drive does 3,500-7,000 MB/s sequential read. Random IOPS are even more lopsided — SATA tops out around 100k IOPS, NVMe scales to a million.
But your website doesn't read storage sequentially at the drive boundary. Every request reaches the drive through this stack:
Visitor browser
↓ (TCP, ~30-200ms over the internet)
Web server (Engintron + Apache)
↓ (kernel syscall, microseconds)
PHP-FPM worker
↓ (filesystem cache, in-memory most of the time)
Storage (SSD or NVMe)

The storage layer is the last place data comes from, and only when nothing higher up the stack has cached it. For a healthy WordPress install with normal caching, that's well under 5% of requests.
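You can see that filesystem-cache layer at work on any Linux box with a short experiment. This is a minimal sketch (the file path is arbitrary, and truly evicting the page cache via drop_caches requires root):

```shell
# Write a 256 MB test file, then read it twice. After the first read the
# kernel page cache holds the whole file, so the second read never
# touches the drive -- this is the layer that absorbs most site I/O.
f=./cache-demo.bin
dd if=/dev/zero of="$f" bs=1M count=256 2>/dev/null

# Optional, root only: evict the page cache to force a genuinely cold read.
# sync && echo 3 > /proc/sys/vm/drop_caches

time cat "$f" > /dev/null   # first read (cold if the cache was dropped)
time cat "$f" > /dev/null   # second read: served from RAM
rm "$f"
```

On the warm read, SSD and NVMe machines report near-identical times, because the drive is no longer in the path.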
Where NVMe actually moves the needle
NVMe is genuinely faster for these specific things:
1. The first hit after deployment. When you push code or import a backup, the filesystem cache is empty. Every PHP file gets loaded from disk on first execution. SATA SSD: maybe 800ms cold load for a typical WordPress homepage. NVMe: 200ms. After that hit, the kernel caches everything in RAM and the difference disappears until the cache gets evicted.
2. Database commits under load. Every INSERT or UPDATE in MySQL writes to the binary log and waits for fsync() to flush to disk before returning. SATA fsync: ~5ms. NVMe fsync: ~0.5ms. On a busy WooCommerce store doing 50 writes/second during a flash sale, that's 250ms vs 25ms of cumulative wait time per second — visible to checkout users.
3. Backup and restore times. A 50 GB site backup over SATA takes ~90 seconds to read; over NVMe, ~12 seconds. Visitors never notice, but it matters a lot to ops and to disaster recovery: faster restores shrink your RTO, and cheaper backups let you take them more often, shrinking your RPO.
4. Image-heavy galleries with cold cache. A photographer's portfolio with 5,000 high-res JPEGs and no CDN — when bots crawl deep pages that aren't in RAM, NVMe cuts the response time from 800ms to 200ms.
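The fsync numbers in item 2 are easy to sanity-check yourself. Here's a rough sketch using GNU dd's dsync flag, which flushes every block to stable storage, so elapsed time divided by block count approximates per-flush latency (run it on the filesystem that holds your database):

```shell
# 200 writes of 1 KiB, each flushed to stable storage before the next.
# Roughly: ~1s total ≈ 5 ms/flush (typical SATA); ~0.1s ≈ 0.5 ms/flush (NVMe).
time dd if=/dev/zero of=./fsync-test.bin bs=1k count=200 oflag=dsync 2>/dev/null
rm ./fsync-test.bin
```

This approximates MySQL's commit-time flush pattern rather than reproducing it exactly, but it's enough to tell a 5 ms drive from a 0.5 ms one.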
Where NVMe doesn't help at all
This is the longer list:
Cached page hits. When WP Rocket, W3 Total Cache, or Engintron's nginx layer has the rendered HTML, the request never touches PHP. The cached bytes come straight out of RAM. SSD type doesn't appear in the path.
API responses. A REST endpoint that hits MySQL, runs some logic, returns JSON. The bottleneck is PHP execution and MySQL query planning. Storage I/O is microseconds against PHP's tens of milliseconds.
Logged-in admin work. wp-admin is dynamic by nature, so caching is harder. But the slowness in admin is overwhelmingly PHP plugins doing too much per request — slow autoloaders, thousands of action hooks, bloated post meta. NVMe shaves a few percent off; killing the offending plugin shaves 80%.
Anything with a CDN in front. Cloudflare, Fastly, BunnyCDN — they all cache HTML, images, CSS, JS at edge servers near the user. The origin disk is irrelevant for cached assets.
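A quick way to confirm a request never reached your origin disk is to look at the cache headers. Header names vary by stack (X-Cache, CF-Cache-Status, and Age are common, but your setup may differ), so treat this as a sketch:

```shell
# Fetch only the response headers and look for cache-layer markers.
# An "x-cache: HIT", "cf-cache-status: HIT", or nonzero "age:" means the
# bytes came from a cache, not from PHP or storage.
curl -sI "https://yourdomain.com/" | grep -i -E '^(x-cache|cf-cache-status|age):'
```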
Real-world before/after
I migrated a customer's WooCommerce store from a SATA-backed plan to NVMe last quarter. They wanted hard numbers. I gave them this:
| Metric | SATA SSD | NVMe | Difference |
|---|---|---|---|
| TTFB, cached homepage (cold cache) | 95ms | 88ms | -7ms |
| TTFB, cached homepage (warm cache) | 32ms | 30ms | -2ms |
| TTFB, product page (cache miss) | 410ms | 290ms | -120ms |
| TTFB, checkout submit | 720ms | 580ms | -140ms |
| Backup time, 18 GB site | 47s | 9s | -38s |
| WP-CLI: wp plugin update --all | 145s | 38s | -107s |
The visitor-facing numbers (cached + uncached homepages, product pages) are clearly better but not life-changing. The operational numbers (backup, plugin updates, deployment) are 4-5x faster — that's where my team's day-to-day happiness improved.
The real question — when is storage your bottleneck?
If you're not sure whether storage matters for your site, run the test. SSH in and time a real workload:
# Time a representative cold-cache page load:
time curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" \
"https://yourdomain.com/?bypass-cache=$(date +%s)$RANDOM"

The query string busts any page cache, so you're measuring full PHP+DB execution.
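A single curl sample can be skewed by network jitter, so it's worth taking several and reading the median. A small sketch building on the same command (the URL is a placeholder):

```shell
# Nine cache-busting requests; sort the TTFB values and print the median.
URL="https://yourdomain.com/"
for i in $(seq 1 9); do
  curl -o /dev/null -s -w "%{time_starttransfer}\n" \
    "$URL?bypass-cache=$(date +%s)$RANDOM"
done | sort -n | awk 'NR==5 {print "median TTFB: " $1 "s"}'
```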
Then look at where the time went using the Query Monitor plugin in WordPress (or PHP's built-in microtime() if you're not on WP). You'll typically see something like:
Total request: 410ms
PHP autoload: 80ms
DB queries: 120ms (44 queries, 12 slow)
Plugin hooks: 180ms
Render: 30ms

Storage I/O isn't even an explicit line — it's hidden inside the DB queries and PHP autoload. If those are your big numbers, NVMe will move them by a small amount. If "Plugin hooks" is the big number (often it is), faster storage doesn't help. You need fewer hooks.
When NVMe actually wins
The clear cases for caring about storage speed:
1. Database-heavy apps with no page caching. Forums, real-time dashboards, admin panels. Every request hits MySQL, MySQL hits disk for anything not in the buffer pool. Faster fsync = faster commits = lower latency under concurrent load.
2. Batch operations. Importing 10,000 products to WooCommerce, running a backup, restoring from a snapshot, regenerating image thumbnails for an old gallery. Each is a long sequence of disk I/O. NVMe finishes them in a fraction of the time.
3. Tight RAM budget. If your server has 4 GB of RAM and your site + DB are 20 GB, the filesystem cache can only hold a fraction of the working set. Cache misses are frequent, and every miss hits storage. NVMe makes those misses cheap. With 32 GB of RAM and a 20 GB site, the entire dataset lives in cache and storage type matters far less.
4. CI / staging / dev environments. Where you spin up fresh containers all the time, run test suites, check out giant repos. NVMe makes everything snappier and developer time is expensive.
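To check whether you're in the tight-RAM case from item 3, compare your site + database size against how much page cache the kernel is actually holding. A minimal sketch reading /proc/meminfo (Linux only; the du paths are placeholders for your web root and MySQL data directory):

```shell
# How much RAM is currently used as filesystem (page) cache.
# /proc/meminfo reports the Cached field in kB.
awk '/^Cached:/ {printf "page cache: %.1f GiB\n", $2 / 1024 / 1024}' /proc/meminfo

# Compare against the on-disk working set:
du -sh /var/www/html /var/lib/mysql 2>/dev/null
```

If the working set is several times larger than the page cache, storage speed will show up in your latencies; if it fits comfortably, it mostly won't.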
Our setup
For full transparency: every Rivervo plan since mid-2025 runs on NVMe, including the entry-level shared hosting. Not because NVMe is always the deciding factor for end-users, but because the operational benefits (backup speed, deployment speed, our team's quality of life) make it worth the small cost premium.
Whether your site will benefit depends on your bottleneck, not on the storage technology marketing copy. Run the timing test above. If it shows your visitor latency comes from PHP and plugin overhead, faster storage is a rounding error. If it shows database commits or cold cache loads dominating, NVMe will help.
If you want a second opinion on where your specific site's time goes, our support team can run the analysis on your account — it takes about 10 minutes for them to look at the numbers and tell you whether you'd notice an upgrade or whether you should fix something else first.