How to speed up your X-ray benchmark development cycle by re-using/re-cycling benchmark VMs and, more importantly, data-sets.
Microsoft diskspd. Part 2: How to bypass NTFS cache.
How to ensure performance testing with diskspd is stressing the underlying storage devices, not the OS filesystem.
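As a sketch of the idea, diskspd has documented caching switches for exactly this purpose; the file path, block size, duration, queue depth, and thread count below are illustrative assumptions, not values from the post.

```shell
# Bypass the OS file cache so IOs hit the storage device, not NTFS cache.
# -Sh   disable software caching (OS cache) and hardware write caching
# -b64K 64 KiB IOs, -d60 run for 60s, -o32 queue depth, -t4 threads, -w0 100% reads
diskspd.exe -b64K -d60 -o32 -t4 -w0 -Sh C:\test\testfile.dat
```

Without `-Sh` (or at least `-Su`), reads can be served from RAM and the results describe the filesystem cache rather than the device.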
Microsoft diskspd. Part 1: Preparing to test.
How to install and set up diskspd before starting your first performance tests, and how to avoid wrong results due to null byte issues.
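A minimal sketch of the preparation step: a freshly created test file can return null bytes (zeros) for regions that were never written, and many SSDs and arrays serve all-zero reads unrealistically fast. Writing the file once before reading avoids this. The path, file size, and durations below are illustrative assumptions.

```shell
# Write pass: create a 10 GiB file (-c10G) and fill it with data (-w100)
# so later reads return real data rather than null bytes.
diskspd.exe -c10G -b1M -w100 -d60 -Sh C:\test\testfile.dat

# Read test against the now-populated file (-w0 = 100% reads).
diskspd.exe -b1M -w0 -d60 -Sh C:\test\testfile.dat
```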
Why does my SSD not issue 1MB IO’s?
First things first
Why do we tend to use 1MB IO sizes for throughput benchmarking?
To achieve the maximum throughput on a storage device, we will usually use a large IO size to maximize the amount of data transferred per IO request. The idea is to make the ratio of data transferred to IO requests as large as possible, reducing the per-request CPU overhead so we can get as close to the device bandwidth as possible. To take advantage of pre-fetching, and to reduce the need for head movement on rotational devices, a sequential pattern is used.
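The arithmetic behind that ratio is simple enough to sketch. The bandwidth figure below is a hypothetical target, not a measurement; the point is how quickly the request rate (and hence per-request CPU cost) falls as IO size grows.

```python
# Illustrative arithmetic: at a fixed target bandwidth, the IO size
# determines how many requests per second the host must issue, and
# each request carries a roughly fixed CPU cost.
def requests_per_second(bandwidth_mb_s: float, io_size_kb: float) -> float:
    """IO requests/second needed to sustain the given bandwidth."""
    return bandwidth_mb_s * 1024 / io_size_kb

target_mb_s = 2000  # hypothetical SSD bandwidth

for io_kb in (4, 64, 1024):
    iops = requests_per_second(target_mb_s, io_kb)
    print(f"{io_kb:>5} KiB IOs -> {iops:>9.0f} requests/s")
```

At 2000 MB/s, 4 KiB IOs require 512,000 requests per second, while 1 MiB IOs need only 2,000, which is why large IOs make it far easier to saturate the device.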
For historical reasons, many storage testers will use a 1MB IO size for sequential testing. A typical fio command line might look something like this.
fio --name=read --rw=read --bs=1m --direct=1 --filename=/dev/sda
HammerDB: Avoiding bottlenecks in the client.
How to avoid bottlenecks in the client load generator when measuring database performance with HammerDB.
Paper: A nine-year study of file system and storage benchmarking
A 2007 paper that still has a lot to say on the subject of benchmarking storage and filesystems. It is aimed primarily at researchers and developers, but is relevant to anyone about to embark on a benchmarking effort.
- Use a mix of macro and micro benchmarks
- Understand what you are testing; cached results are fine – as long as that is what you intended to measure.
The authors are clear on why benchmarks remain important:
“Ideally, users could test performance in their own settings using real workloads. This transfers the responsibility of benchmarking from author to user. However, this is usually impractical because testing multiple systems is time consuming, especially in that exposing the system to real workloads implies learning how to configure the system properly, possibly migrating data and other settings to the new systems, as well as dealing with their respective bugs.”
We cannot expect end-users to be experts in benchmarking. It is our duty as experts to provide the tools (benchmarks) that enable users to make purchasing decisions without requiring years of benchmarking expertise.