2M IOPS on a single VM with Nutanix HCI
How to generate a lot of IOPS to a single VM.
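The post's exact job files aren't reproduced here, so as a hedged sketch only: driving IOPS this high usually means small random reads, direct IO, deep queues, and several parallel jobs. The device path and all values below are placeholders.

    [global]
    # small random reads with direct IO maximize IOPS rather than bandwidth
    rw=randread
    bs=4k
    direct=1
    ioengine=libaio
    # deep per-job queue, with several jobs to spread work across CPU cores
    iodepth=64
    numjobs=8
    group_reporting
    [high-iops]
    # placeholder path, use a dedicated test device
    filename=/dev/sdb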
Effect of POSIX_FADV_SEQUENTIAL and POSIX_FADV_RANDOM on IO performance
Previously we looked at how the POSIX_FADV_DONTNEED hint influences the Linux page cache when doing IO via a filesystem. Here we take a look at two more filesystem hints: POSIX_FADV_RANDOM and POSIX_FADV_SEQUENTIAL.
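fio can issue these hints itself via its fadvise_hint option, so the effect can be measured without writing any C. A minimal sketch, assuming a placeholder scratch file (note that the accepted values for fadvise_hint vary between fio versions):

    [seq-hint]
    # placeholder scratch file
    filename=/tmp/fadvise-test.dat
    size=1g
    rw=read
    # buffered IO so the hint can influence the page cache
    direct=0
    # ask the kernel for POSIX_FADV_SEQUENTIAL readahead
    fadvise_hint=sequential

Swapping in fadvise_hint=random allows the two hints to be compared against the same file.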
Using fio to read from Linux buffer-cache
Sometimes we want to read from the Linux cache rather than the underlying device using fio. There are a couple of gotchas that might trip you up. Thankfully fio provides the required work-arounds. TL;DR To get this to work as expected (reads are serviced from buffer cache) – the best way is to use the option […]
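The option name is cut off above, so the following is a hedged sketch only: the combination that usually matters is buffered IO with cache invalidation disabled, plus pre-reading the file so its pages are resident. Paths and sizes are placeholders.

    [cached-read]
    # placeholder file, small enough to fit comfortably in RAM
    filename=/tmp/cachetest.dat
    size=1g
    rw=read
    # buffered IO so reads can be serviced from the page cache
    direct=0
    # don't drop the file's cached pages before the run
    invalidate=0
    # read the file once up front so its pages are already resident
    pre_read=1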
fio versions < 3.3 may show inflated random write performance
TL;DR If your storage system implements inline compression, results for small random writes using time_based and runtime may be inflated with fio versions < 3.3, because fio generated unexpectedly compressible data when using its default data pattern. Although unintuitive, performance can often be increased by enabling compression especially if the bottleneck […]
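Besides upgrading fio, one way to rule this out is to force the written data to be incompressible. A hedged sketch using standard fio buffer options; the file name and sizes are placeholders:

    [randwrite]
    # placeholder file and sizes
    filename=/tmp/compress-test.dat
    size=4g
    rw=randwrite
    bs=8k
    time_based=1
    runtime=300
    # regenerate buffer contents on every write submission
    refill_buffers=1
    # target completely incompressible data
    buffer_compress_percentage=0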
Specifying Drive letters with fio for Windows
Where to download a pre-compiled fio binary for Windows, plus an example Windows job file for a single drive. The example creates a 1GB file called fiofile on the F:\ drive and then reads it back. Notice that the specification is "drive letter", "backslash", "colon", "filename": in fio terms we are "escaping" the colon, which fio traditionally uses as […]
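The escaped form looks like this in a job file (the drive letter and file name are just examples):

    [windows-drive]
    # the backslash escapes the colon in the Windows path
    filename=F\:\fiofile
    size=1g
    rw=read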
Hunting for bandwidth on a consumer NVMe drive
The Samsung SSD 970 EVO 500GB claims a sequential read bandwidth of 3,400 MB/s. This is the story of trying to achieve that number.
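A typical starting point for chasing sequential read bandwidth is large IOs, direct IO, and a reasonably deep queue. A minimal sketch, with a placeholder device path and values:

    [seq-read]
    # placeholder device path
    filename=/dev/nvme0n1
    rw=read
    # large IOs to maximize bytes transferred per request
    bs=1m
    # bypass the page cache to measure the device itself
    direct=1
    ioengine=libaio
    iodepth=32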
Beware of tiny working-set-sizes when testing storage performance
I was recently asked to investigate why Nutanix storage was not as fast as a competing solution in a PoC environment. When I looked at the output from diskspd, the data didn’t quite make sense.
Using rwmixread and rate_iops in fio
Creating a mixed read/write workload with fio can be a bit confusing. Assume we want to create a fixed rate workload of 100 IOPS split 70:30 between reads and writes. TL;DR Specify the rate directly with rate_iops=<read-rate>,<write-rate>; do not try to combine rwmixread with rate_iops. For the example above, use rate_iops=70,30. Additionally, older versions of […]
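Putting the TL;DR into a job file (the 70,30 values come from the example above; the file name and runtime are placeholders):

    [mixed-100iops]
    # placeholder file
    filename=/tmp/fio.dat
    size=1g
    # mixed random read/write workload
    rw=randrw
    # cap reads at 70 IOPS and writes at 30 IOPS: 100 IOPS total at 70:30
    rate_iops=70,30
    time_based=1
    runtime=60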
Understanding fio norandommap and randrepeat parameters
The parameters norandommap and randrepeat significantly change the way repeated random IO workloads are executed, and can also meaningfully change the results of an experiment because of the way caching works on most storage systems.
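For reference, both are simple booleans in the job file. A sketch with a placeholder file and size:

    [random-run]
    # placeholder file
    filename=/tmp/fio.dat
    size=1g
    rw=randread
    # don't track covered blocks, so offsets may repeat within a run
    norandommap=1
    # use a fresh random seed each run instead of repeating the same sequence
    randrepeat=0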
Identifying Optane drives in Linux
How to identify Optane drives in the Linux OS using lspci.
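The basic approach is a one-liner; the grep pattern is an assumption, since the exact device string varies by model:

    # list PCI devices and filter for Intel Optane controllers
    lspci | grep -i optane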
Microsoft diskspd, Part 3: Oddities and FAQ
Tips and tricks for using diskspd, especially useful for those familiar with tools like fio.
Microsoft diskspd, Part 2: How to bypass NTFS cache
How to ensure performance testing with diskspd is stressing the underlying storage devices, not the OS file-system cache.
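The key switch is -Sh, which disables software (file-system) caching and hardware write caching. A hedged example; the target path, duration, thread count, and queue depth are placeholders:

    diskspd.exe -b8K -d60 -o32 -t4 -r -w0 -Sh F:\testfile.dat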
Microsoft diskspd, Part 1: Preparing to test
How to install and set up diskspd before starting your first performance tests, and how to avoid misleading results caused by zero-filled (null-byte) test files.
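Because a freshly created test file may be zero-filled, and therefore trivially compressible, one precaution is to overwrite it with random data before measuring. A hedged sketch; the path, sizes, and duration are placeholders, and -Z asks diskspd to fill its write source buffer with random data:

    diskspd.exe -c4G -w100 -b1M -t1 -d120 -Z1M F:\testfile.dat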
Why does my SSD not issue 1MB IO's?
First things first: why do we tend to use 1MB IO sizes for throughput benchmarking? To achieve the maximum throughput on a storage device, we will usually use a large IO size to maximize the amount of data transferred per IO request. The idea is to make the ratio of data-transfers to IO requests […]
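Whether the device actually receives 1MB IOs can be checked while the benchmark runs. A sketch assuming a recent sysstat, where the rareq-sz/wareq-sz columns show average request sizes in KB (the device name is a placeholder):

    # watch per-device average request sizes, refreshed every second
    iostat -x 1 /dev/nvme0n1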
The real-world achievable SSD performance will vary depending on factors like IO size, queue depth and even CPU clock speed. It’s useful to know what the SSD is capable of delivering in the actual environment in which it’s used. I always start by looking at the performance claimed by the manufacturer. I use these figures […]
A 2007 paper that still has a lot to say on the subject of benchmarking storage and filesystems. Primarily aimed at researchers and developers, but relevant to anyone about to embark on a benchmarking effort. The authors are clear on why benchmarks remain important: “Ideally, users could test performance in their own settings using real […]
Storage bus speeds with example storage endpoints.

    Bus    Lanes  End-Point                  Theoretical Bandwidth (MB/s)  Note
    SAS-3  1      HBA <-> Single SATA Drive  600                           SAS3 <-> SATA 6Gbit
    SAS-3  1      HBA <-> Single SAS Drive   1200                          SAS3 <-> SAS3 12Gbit
    SAS-3  4      HBA <-> SAS/SATA Fanout    4800                          4 Lane HBA to Breakout (6 SSD)[2]
    SAS-3  8      HBA <-> SAS/SATA Fanout    8400                          […]
We have started seeing misaligned partitions on Linux guests running certain HDFS distributions. How these partitions became misaligned is a bit of a mystery, because the only way I know how to do this on Linux is to create a partition using the old DOS format like this (using -c=dos and -u=cylinders)
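For reference, the fdisk invocation implied above, plus a quick way to check where a partition starts; the device names are placeholders, and a start sector that divides evenly by 2048 indicates 1MiB alignment:

    # create a partition in DOS compatibility mode with cylinder units
    fdisk -c=dos -u=cylinders /dev/sdb
    # show the partition's starting sector to check alignment
    cat /sys/block/sdb/sdb1/start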
Often we are presented with a vCenter screenshot and an observation that there are “high latency spikes”. In the example, the response time is indeed quite high – around 80 ms.